[Pkg-clamav-commits] [SCM] Debian repository for ClamAV branch, debian/unstable, updated. debian/0.95+dfsg-1-6156-g094ec9b

Török Edvin edwin at clamav.net
Sun Apr 4 01:12:00 UTC 2010


The following commit has been merged in the debian/unstable branch:
commit d58f4f0a0f14f3870ae560cb03fae5357e1db907
Author: Török Edvin <edwin at clamav.net>
Date:   Sat Dec 12 11:59:14 2009 +0200

    Update to LLVM upstream SVN r91214.
    
    Squashed commit of the following:
    
    commit 2fdb8cfc44fb50a50bda26ac7774692a15c00412
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sat Dec 12 09:25:50 2009 +0000
    
        Fix some CHECK lines which were ignored by accident.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91214 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cbfd1ed3c3d611d3d36d6853b99f6d615eaf96f1
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Sat Dec 12 06:18:46 2009 +0000
    
        Revert r91208.  Something on Linux prevents the JIT from looking up a symbol
        defined in the test, and I don't have time tonight to figure it out.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91209 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fcfc5e88a362367990b85c708d9656c9e9150f5e
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Sat Dec 12 05:58:14 2009 +0000
    
        Fix available_externally linkage for globals.  It's probably still not
        supported by emitGlobals, but I don't have a test case for that.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91208 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 37fa76624c9c11ec6745b5b609a8b537f0cd8425
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Sat Dec 12 04:08:32 2009 +0000
    
        Make it easier to use the llvm_unreachable and DEBUG macros without "using
        namespace llvm" by qualifying their implementations with ::llvm::.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91206 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 241896971376c9bf4b5856c44c65084c8bf6e3cb
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Sat Dec 12 01:40:06 2009 +0000
    
        Framework for atomic binary operations. The emitter for the pseudo instructions
        just issues an error for the moment. The front end won't yet generate these
        intrinsics for ARM, so this is behind the scenes until complete.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91200 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4407a9dd3a5829f2385c49b2cdbe96c33076c384
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Dec 11 23:47:40 2009 +0000
    
        Revise scalar replacement to be more flexible about handling bitcasts and GEPs.
        While scanning through the uses of an alloca, keep track of the current offset
        relative to the start of the alloca, and check memory references to see if
        the offset & size correspond to a component within the alloca.  This has the
        nice benefit of unifying much of the code from isSafeUseOfAllocation,
        isSafeElementUse, and isSafeUseOfBitCastedAllocation.  The code to rewrite
        the uses of a promoted alloca, after it is determined to be safe, is
        reorganized in the same way.
    
        Also, when rewriting GEP instructions, mark them as "in-bounds" since all the
        indices are known to be safe.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91184 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 552423de4ce3985cee3e44d2b34afc81aef5b5b4
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 11 23:26:08 2009 +0000
    
        Delete an unnecessary line. The VTSDNode on a SIGN_EXTEND_REG is never
        a vector type.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91181 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f40cec0ee6d37ccce844d771f177f33c929a4b86
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Fri Dec 11 23:01:29 2009 +0000
    
        Lower setcc branchless, if this is profitable.
        Based on the patch by Brian Lucas!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91175 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3dd8cadf857bd134ef24aebb48aa22278cedaff1
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 11 21:47:36 2009 +0000
    
        Don't try to move a MBB into the fall-through position if it's a landing pad or
        branches only to a landing pad. Without this check, the compiler would go into
        an infinite loop because the branch to a landing pad is an "abnormal" edge which
        wasn't being taken into account.
    
        This is the meat of that fix:
    
          if (!PrevBB.canFallThrough() && !MBB->BranchesToLandingPad(MBB)) {
    
        The other stuff is simplification of the "branches to a landing pad" code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91161 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b9f2c6bf74e416fb21026ef9c779c6c4cdbce9d4
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Dec 11 21:37:07 2009 +0000
    
        Construct CompileUnits lazily.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91159 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d501bd6024424a0d85836f2e61a5fcd3f717d23
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 11 21:31:27 2009 +0000
    
        Implement vector widening, splitting, and scalarizing for SIGN_EXTEND_INREG.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91158 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c6095894e7cd3c7144575ba6e2596bb5a3d3adc0
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Dec 11 20:29:53 2009 +0000
    
        Memory barrier instructions by definition have side effects. This prevents the post-RA scheduler from moving them around.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91150 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 432c8afd5e7ab193fbf348436da8c4d88a780f93
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 11 20:09:21 2009 +0000
    
        Change this to the correct PR number.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91148 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10ef46f126972755ae7dac376fc98f26ddd31dfc
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 11 20:05:23 2009 +0000
    
        Make getUniqueExitBlocks's precondition assert more precise, to
        avoid spurious failures. This fixes PR5758.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91147 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2c4c4dc0dc9c48f7ec593798916cab05c68ec44b
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 11 19:50:50 2009 +0000
    
        Fix the result type of SELECT nodes lowered from Select instructions with
        aggregate return values. This fixes PR5754.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91145 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7767af5f34dccb5d093cd46004f2f2687095019c
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Fri Dec 11 19:39:55 2009 +0000
    
        Honour setHasCalls() set from isel.
        This is used in some weird cases like general dynamic TLS model.
        This fixes PR5723
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91144 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7e6324988a184400c4a79d684746f91c3c5bfad3
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Fri Dec 11 19:37:26 2009 +0000
    
        Store Register Exclusive should leave the source register Inst{3-0} unspecified.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91143 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7853967b2bec37e6093fa595cc42aabbad964059
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Dec 11 18:52:41 2009 +0000
    
        Update properties.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91140 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit efa9046e1673863e2b091556e5c92b834cc4bdf5
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Fri Dec 11 15:30:07 2009 +0000
    
        Simplify this class by removing the result cache.
    
        This change removes the DefaultConstructible
        and CopyAssignable constraints on the template
        parameter T (the first one).
    
        The second template parameter (R) is defaulted to be
        identical to the first and controls the result type.
        By specifying it to be (const T&) additionally the
        CopyConstructible constraint on T can be removed.
    
        This allows using StringSwitch e.g. for llvm::Constant
        instances.
    
        Regarding the other review feedback about the performance cost
        of taking pointers: this class should be completely
        optimizable like before, since all methods are inline and
        the pointer dereferencing and result value caching should be
        possible behind the scenes by the "as-if" rule.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91123 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 624ebdda54ecb7db70ffb105148e68dd34940f8e
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 11 10:43:41 2009 +0000
    
        Revert part of r91101 which was causing an infinite loop in the self-hosting
        build bots.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91113 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9039ae61fdf29b92f04b84e85c13d0119fa39268
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Dec 11 08:36:17 2009 +0000
    
        Add utility method for determining whether a function argument
        has the 'nest' attribute.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91109 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b880e97295cce0cffac154c7643945435cede34d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 11 06:02:21 2009 +0000
    
        Tests for 91103 and 91104.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91105 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f031da831176f05449c6c0dd32f45f2fca403f2d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 11 06:01:48 2009 +0000
    
        Add support to 3-addressify 16-bit instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91104 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f486334d5c3e5d686f35f5fb9594842a76596496
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 11 06:01:00 2009 +0000
    
        Coalesce insert_subreg undef, x first to avoid phase ordering issue.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91103 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4d2fd87d90f53f74faab9b231809bad929ffdf28
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 11 03:14:18 2009 +0000
    
        Address comments on last patch:
    
        - Loosen the restrictions when checking whether it branches to a landing pad.
        - Make the loop more efficient by checking the '.insert' return value.
        - Do cheaper checks first.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91101 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 573d1d59306f14c74a519b77e1d677e9a6df3a65
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 11 01:49:14 2009 +0000
    
        A machine basic block may end in an unconditional branch, however it may have
        more than one successor. Normally, these extra successors are dead. However,
        some of them may branch to exception handling landing pads. If we remove those
        successors, then the landing pads could go away if all predecessors to it are
        removed. Before, it was checking if the direct successor was the landing
        pad. But it could be the result of jumping through multiple basic blocks to get
        to it. If we were to only check for the existence of an EH_LABEL in the basic
        block and not remove successors when one is present, then it could prevent
        actually dead basic blocks from being removed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91092 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 437d699dc2b7e690254435cb93d7bbd21bb88217
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Fri Dec 11 01:42:04 2009 +0000
    
        Rough first pass at compare_and_swap atomic builtins for ARM mode. Work in progress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91090 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 307e718947561a527c51e03e38cd77ca1efc6998
    Author: Anders Carlsson <andersca at mac.com>
    Date:   Fri Dec 11 01:04:42 2009 +0000
    
        Add qualifiers for calls to member functions in dependent bases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91087 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d47963bc2b99fc828e706072d94b4faab5c6703
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Dec 10 23:25:41 2009 +0000
    
        If VariableDIE is not created (maybe because the global was optimized away) then do not try to use the variable DIE.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91077 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fcd3705bd25d9e1406adc67a11b200f8454d840e
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Dec 10 21:11:40 2009 +0000
    
        Add a test for the fix in revision 91009.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91062 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fda100228c44e2ce87a2059d879c3da7f6cf043d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Dec 10 20:59:45 2009 +0000
    
        It's not safe to coalesce a move where src and dst registers have different subregister indices. e.g.:
        %reg16404:1<def> = MOV8rr %reg16412:2<kill>
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91061 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d73e4308d9773535794f2f00ea213981310a6ba
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Dec 10 19:52:22 2009 +0000
    
        Remove a broken, unused header
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91058 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1a8f9a8322f0de194328f3ab2cbf941504015f5f
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Dec 10 19:14:49 2009 +0000
    
        Refactor code that finds context for a given die.
        Create global variable DIEs after creating subprogram DIEs. This allows function-level static variables to find their context at the time of DIE creation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91055 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 37582c3385a5259c279f4da155b7659f4272ff71
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Dec 10 18:35:32 2009 +0000
    
        Add instruction encoding for DMB/DSB
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91053 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7f139c19245027f4f367bf09db322e16ceca76f5
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Dec 10 18:05:33 2009 +0000
    
        Refactor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91051 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0f793a956a6aeff1386647a714c627102bfdc88
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Dec 10 17:48:32 2009 +0000
    
        Also attempt trivial coalescing for live intervals that end in a copy.
    
        The coalescer is supposed to clean these up, but when setting up parameters
        for a function call, there may be copies to physregs. If the defining
        instruction has been LICM'ed far away, the coalescer won't touch it.
    
        The register allocation hint does not always work - when the register
        allocator is backtracking, it clears the hints.
    
        This patch is more conservative than r90502, and does not break
        483.xalancbmk/i686. It still breaks the PowerPC bootstrap, so it is disabled
        by default, and can be enabled with the -trivial-coalesce-ends option.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91049 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d9ba4b1d7364823556c97a78142ecd28048e3a3
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Thu Dec 10 10:01:47 2009 +0000
    
        Comparing std::string with NULL is a bad idea, so just check whether it's empty.

        This code was always crashing with oprofile enabled, since it tried to create a StringRef
        out of NULL, which ran strlen on NULL.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91046 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a4ea68be726443c35446332dbdeff00f42f7e48b
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Dec 10 00:25:41 2009 +0000
    
        Make sure the immediate dominator isn't NULL through iterations
        of the loop. We could get to this condition via indirect
        branches.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91009 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c97c9a0f7c7aa3572a819aef0315a6ddb7a2dfe8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 10 00:11:45 2009 +0000
    
        Fix PR5744, a case where we were getting the pointer size instead of the
        value size.  This only manifested when memdep imprecisely returns clobber,
        which is due to a caching issue in the PR5744 testcase.  We can 'efficiently
        emulate' this by using '-no-aa'
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91004 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ba744f662e83517b2b940145a24d6cbb453f52f9
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Dec 10 00:11:09 2009 +0000
    
        Add memory barrier intrinsic support for ARM. Moving towards adding the atomic operations intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91003 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e559e790ba60791726da785535d194ca71f47dee
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 10 00:04:46 2009 +0000
    
        allow this to build when the #if 0's are enabled.  No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90999 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6ab68f2d2516ed58e49016dfca15e2ad4bc865e6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Dec 9 22:55:01 2009 +0000
    
        Dereference loopHeader after checking for null rather than before.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90990 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5252e89c88fc55ce4b4842b600031f6ca536dc5d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Dec 9 22:24:42 2009 +0000
    
        Fix test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90988 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e31a26af8b770854d3a630081e74ee52899482b4
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Dec 9 21:00:30 2009 +0000
    
        Optimize splat of a scalar load into a shuffle of a vector load when it's legal. e.g.
        vector_shuffle (scalar_to_vector (i32 load (ptr + 4))), undef, <0, 0, 0, 0>
        =>
        vector_shuffle (v4i32 load ptr), undef, <1, 1, 1, 1>
    
        iff ptr is 16-byte aligned (or can be made into 16-byte aligned).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90984 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eef9c8f43d719d072f7f027510125f45a0e02956
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Dec 9 18:48:53 2009 +0000
    
        Reuse the Threshold value to size these containers because it's
        currently somewhat convenient for them to have the same value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90980 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe0be137457e5a2fe51000ed2844f888e9e19b64
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Dec 9 18:24:21 2009 +0000
    
        Reapply r90858, a cleanup patch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90979 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 091a1d20fcf979b5f266fa2b085215bc3bfe2d46
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 18:21:46 2009 +0000
    
        fix the last remaining known (by me) phi translation bug.  When we reanalyze
        clobbers to forward pieces of large stores to small loads, we need to consider
        the properly phi translated pointer in the store block.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90978 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2737cb4b2a2a648f07f3272d932ba564a03a368a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 18:13:28 2009 +0000
    
        change GetStoreValueForLoad to use IRBuilder, which is cleaner and
        implicitly constant folds.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90977 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 39d7706d74c6237258b9a78a4aaf841f28c58a1d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Dec 9 18:05:27 2009 +0000
    
        Fix a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90975 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e4661381e82fb4d6ba33d57155dcda400280d3a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 17:27:45 2009 +0000
    
        Add a minor optimization: if we haven't changed the operands of an
        add, there is no need to scan the world to find the same add again.
        This invalidates the previous testcase, which wasn't wonderful anyway,
        because it needed a run of instcombine to permute the use-lists in
        just the right way before GVN was run (so it was really fragile).
        Not a big loss.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90973 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de76e8dccddd7578a9d52dd04bcf9ca36da417d8
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Wed Dec 9 17:26:02 2009 +0000
    
        Add note about loadable modules on windows.
        Patch by Gregory Petrosyan!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90972 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 13c6e945a6f59aca2d21b46c036521c59d1f61bc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 17:18:49 2009 +0000
    
        fix PR5733, a case where we'd replace an add with a lexically identical
        binary operator that wasn't an add.  In this case, a xor.  Whoops.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90971 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b4ae24a6d11afa1453e2cfce2c702316ad7e093
    Author: David Goodwin <david_goodwin at apple.com>
    Date:   Wed Dec 9 17:18:22 2009 +0000
    
        <rdar://problem/7453528>. Track only physical registers that are valid for the target.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90970 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e1ab6f944f31f5db68788036bb48cc6fef160be6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 17:17:26 2009 +0000
    
        merge crash-2.ll into crash.ll
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90969 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 179201113b146232ba8ed6cc32c3b7c1b0cdd481
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Dec 9 08:29:32 2009 +0000
    
        Silence conversion warning from 64 to 32-bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90962 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd9feaabf12a8dd559794b5a779d12c9a5e9dfa3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 07:37:07 2009 +0000
    
        change AnalyzeLoadFromClobberingMemInst/AnalyzeLoadFromClobberingStore
        to require the load ty/ptr to be passed in, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90960 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 598abfd68824c718b392e6da1ff78a278b390bdf
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 07:34:10 2009 +0000
    
        change AnalyzeLoadFromClobberingWrite and clients to pass in type
        and pointer instead of the load.  No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90959 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca02fc16abf5caf64bf21a3efdad9b52368591c5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 07:31:04 2009 +0000
    
        enhance NonLocalDepEntry to keep the per-block phi translated address
        of the query.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90958 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c831c64f16749af9b1a782ace983aee600d59c43
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 9 07:19:48 2009 +0000
    
        DeltaAlgorithm: Add a virtual destructor and home.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90957 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1a957967ec0e908f69717c8887d427eeaee8e5fc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 07:08:01 2009 +0000
    
        change NonLocalDepEntry from being a typedef for an std::pair to be its
        own small class.  No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90956 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 75730ab628f309e281beab31cb081b7487579f6e
    Author: Lang Hames <lhames at gmail.com>
    Date:   Wed Dec 9 05:39:12 2009 +0000
    
        Added a new "splitting" spiller.
    
        When a call is placed to spill an interval this spiller will first try to
        break the interval up into its component values. Single value intervals and
        intervals which have already been split (or are the result of previous splits)
        are spilled by the default spiller.
    
        Splitting intervals as described above may improve the performance of generated
        code in some circumstances. This work is experimental however, and it still
        miscompiles many benchmarks. It's not recommended for general use yet.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90951 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 22a458ae9214261c3e420910fb22ee81c358395d
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 9 03:26:33 2009 +0000
    
        Remove spurious extern.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90937 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd96f3416558738874517ad53b30b0fa13fac75f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Dec 9 02:58:09 2009 +0000
    
        Remove unneeded ';' and a class/struct mismatch (noticed by clang).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90934 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 521f16433f6a3ae9e35a47efc577dcb9162a01ef
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 02:43:05 2009 +0000
    
        the code in GVN that tries to forward large loads to small
        stores is not phi translating, thus it miscompiles really
        crazy testcases.  This is from inspection, I haven't seen
        this in the wild.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90930 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c61909de5f6f6a721080017aa11ab846b24bdfb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 02:41:54 2009 +0000
    
        add some aborts to #if 0's.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90929 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit efff322bd17b640f544a1d28c580b59499a28ff8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 01:59:31 2009 +0000
    
        Switch GVN and memdep to use PHITransAddr, which correctly handles
        phi translation of complex expressions like &A[i+1].  This has the
        following benefits:
    
        1. The phi translation logic is all contained in its own class with
           a strong interface and verification that it is self consistent.
    
        2. The logic is more correct than before.  Previously, if intermediate
           expressions got PHI translated, we'd miss the update and scan for
           the wrong pointers in predecessor blocks.  @phi_trans2 is a testcase
           for this.
    
        3. We have a lot less code in memdep.
    
        We can handle phi translation across blocks of things like @phi_trans3,
        which is pretty insane :).
    
        This patch should fix the miscompiles of 255.vortex, and I tested it
        with a bootstrap of llvm-gcc, llvm-test and dejagnu of course.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90926 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76ebe86e74ce108a129259fb52663edb0483da9d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Dec 9 01:53:58 2009 +0000
    
        Teach InferPtrAlignment to infer GV+cst alignment and use it to simplify x86 isel lowering code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90925 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ada03e86adc6f39bc024e7e147f4e8a84929c6b2
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Dec 9 01:46:00 2009 +0000
    
        Remove tests that are no longer suitable; they no longer test the original bug fixes. These tests were added to check bug fixes in code that handled debug info intrinsics. Those intrinsics are no longer used, and the LLVM parser now simply ignores the old .dbg intrinsics in these dead tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90923 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1a029cbee8f0bb7d1798aa474bdc46f017ba1871
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Dec 9 01:36:00 2009 +0000
    
        Move isConsecutiveLoad to SelectionDAG. It's not target-dependent and it's primarily used by selection DAG passes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90922 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3b1d6462fe599fc9e17811b2d6bc8fad77183647
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 01:19:16 2009 +0000
    
        fix a nasty variable that was shadowing the real CurBB but with the wrong value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90920 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a711e2664a880bc4b941d2690371cf811167bf95
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Dec 9 01:17:24 2009 +0000
    
        Infer alignment for non-fixed stack object.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90919 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d3caa135e4a12b56cf586dc4d2a3d89d821b2f46
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Dec 9 01:10:37 2009 +0000
    
        Add const qualifier.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90918 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 684647dabc65cd039ddd73daf25112d1b6a98072
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Dec 9 01:04:59 2009 +0000
    
        Refactor InferAlignment out of DAGCombine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90917 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a3d7e0b6f0723224cf9db5b12d41d0bc82948782
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 00:56:14 2009 +0000
    
        fix many input tracking bugs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90915 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cb8c775db77dfe0194a6169f437342eacaa46792
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 00:41:28 2009 +0000
    
        when opt crashes, print its command line arguments as a pretty stack trace.
        Somehow opt was missed when this was added.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90912 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa30b76c39c82f6e0973d597511c9a6980a914d4
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Dec 9 00:28:42 2009 +0000
    
        Fix a typo in a comment, and adjust SmallSet and SmallVector sizes,
        that Chris noticed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90910 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 604d78539ab4276a96f99287599fe6b341707dc0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 00:18:13 2009 +0000
    
        fix PHI translation to take the PHI out of the instinputs set and add
        the translated value back to it if it is an instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90909 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67f80492b9822f893a83641bb204d81e916f8d3e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 00:10:55 2009 +0000
    
        instructions defined in CurBB may be intermediate nodes of the computation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90908 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7bc7cb6375e49b89417a44bddad4300ec5b2c5ea
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 9 00:01:00 2009 +0000
    
        add dumping and sanity checking support.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90906 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 096cad10650493a4addf69d3bdf81f2e2c38d943
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Dec 8 23:59:12 2009 +0000
    
        Put a threshold on the number of users PointerMayBeCaptured
        examines; fall back to a conservative answer if there are
        more. This works around several compile-time problems
        resulting from BasicAliasAnalysis calling PointerMayBeCaptured.
    
        The value has been chosen arbitrarily.
    
        This fixes rdar://7438917 and may partially address PR5708.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90905 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c88612dd681daa28178107881b0526b5ad4b0d1c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 8 23:42:51 2009 +0000
    
        make sure that PHITransAddr keeps its 'InstInputs' list up to
        date when instsimplify kicks in.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90901 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de2d3688d9711628c15ee2b8410fa3e936ca698c
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Dec 8 23:21:45 2009 +0000
    
        Revert 90858 90875 and 90805 for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90898 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9e5c8a8cdfc3a0267b026851bf08a9507474578e
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 8 23:06:22 2009 +0000
    
        - Support inline asm 'w' constraint for 128-bit vector types.
        - Also support the 'q' NEON register asm code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90894 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1d9aeda1a20c306e7639e6df6613d570e793ab00
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Dec 8 19:49:30 2009 +0000
    
        lit: Prevent crash-on-invalid (when run on directory which has no test suite).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90871 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb331cbe3eb64ba46f885189aa8797c3027a69ea
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Dec 8 19:48:01 2009 +0000
    
        Set svn:ignore on tools/clang.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90870 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 58c778873ae2dac1a1c615bf4cab3cd788ec4db9
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Dec 8 19:47:36 2009 +0000
    
        CMake/lit: Add llvm_{unit_,}site_config parameters, and always pass them when running tests from the project files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90869 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e56790b2a9d8fbdbbcba5750f646488f1073d9f
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 8 19:34:53 2009 +0000
    
        Revert 90789 for now. It caused massive compile time regression. Post-ra scheduler slowed down dramatically with this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90868 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d11c2b3c143afb32f32fcc203190454a63fdd0c1
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Dec 8 18:27:03 2009 +0000
    
        Some superficial cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90866 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb4b0d5499b3b7095b560734f1b47d26187ce644
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Dec 8 18:22:03 2009 +0000
    
        Clean up dead operands left around after SROA replaces a mem intrinsic.
        I'm not aware that this does anything significant on its own, but it's
        needed for another patch that I'm working on.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90864 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7b294d7538f15995e446a6a2dd17965813133cc0
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Dec 8 15:31:31 2009 +0000
    
        Cleanup.
        There is no need to supply ModuleCU to addType() as a parameter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90858 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 23a55d34fc5a37cc60122ac3db057177f91e2528
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Dec 8 15:01:35 2009 +0000
    
        Do not try to push dead variable's debug info into namespace info.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90857 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f8bbaf864380c3d5ea1c21d4fade7cb22fcced25
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Tue Dec 8 13:07:38 2009 +0000
    
        Remove useless calls to c_str().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90855 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 35138e397b334bffa8f1c9121fa97f9129880d09
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Dec 8 10:10:20 2009 +0000
    
        Teach GlobalOpt to delete aliases with internal linkage (after
        forwarding any uses).  GlobalDCE can also do this, but is only
        run at -O3.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90850 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3147011bdf615c47e44b7ffbc5e94263d42aa6ed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 8 06:06:26 2009 +0000
    
        fix a typo (and -> add) and fix GetAvailablePHITranslatedSubExpr to not
        side-effect the current object.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90837 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 54518e654b0b98ffeefec0b0958500883ba2832a
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Tue Dec 8 05:45:41 2009 +0000
    
        Remove unnecessary #include "llvm/LLVMContext.h".
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90836 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 48df1cc8234d91de705451688bb54bbbe161d51a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 8 05:31:46 2009 +0000
    
        whitespace cleanup
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90834 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 62d57ae9ef18b9cfaa8c3a977db1ed31694a03d5
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Tue Dec 8 02:49:54 2009 +0000
    
        Removed VC++ compatibility code from DataTypes.h.in.
    
        This header file is not used on VC++ builds.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90829 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit da3ef8a038a93fa4b4879fd203abd2f9377ab977
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Tue Dec 8 02:40:09 2009 +0000
    
        For VC++, define the ?INT*_C macros only if they are not yet defined.
    
        Some compatibility updates like the Boost TR1 compatibility headers
        define them.
    
        Patch contributed by OvermindDL1!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90828 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3caef71dd23157310e52ed947a2fb44579f79db3
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Tue Dec 8 01:03:04 2009 +0000
    
        Reduce (cmp 0, and_su (foo, bar)) into (bit foo, bar). This saves an extra instruction. Patch inspired by Brian Lucas!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90819 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9688c52873170b7096ec6f7e13ddea8cb13ca6d8
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Dec 7 23:11:03 2009 +0000
    
        Watch out for duplicated PHI instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90816 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cb900da68f1eef3242a89e4f3f2078bbccb093b3
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Dec 7 23:10:34 2009 +0000
    
        Follow up to 90488. Turn a check into an assertion.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90815 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d5d28ebb4b6d5cf3f6b15f0a1be3e3e82816b03b
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Mon Dec 7 22:32:38 2009 +0000
    
        Fix the OProfileJITEventListener for StringRef being returned from debug info.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90813 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a8b457e8247f989a5d6d19468fc23e7e48a7118
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Mon Dec 7 21:54:43 2009 +0000
    
        Rename DIFactory::InsertValue() as DIFactory::InsertDbgValueIntrinsic()
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90807 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 641f820036ddf1823bf6b175f4bab0034749b537
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Dec 7 21:41:32 2009 +0000
    
        Add support to emit debug info for c++ style namespaces.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90805 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 03cbedef57d2f8bba7823df2f17ffc29e089ba43
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Dec 7 21:19:33 2009 +0000
    
        Delete code accidentally left behind.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90804 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b0f617ce7b174a87c19e6f21bcadecc68654ff38
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 7 19:52:57 2009 +0000
    
        fix typo
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90793 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 78a0c48a34b4ae0f766bc94e841018cd9d3557ae
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 7 19:45:30 2009 +0000
    
        add accessor, improve comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90792 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ef78de4b1c6169cbee2671bbe2a4291d21231cf
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Dec 7 19:42:22 2009 +0000
    
        Test case for 90787.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90791 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef778032ceda17e7b5209045d28761c5190b5730
    Author: David Greene <greened at obbligato.org>
    Date:   Mon Dec 7 19:40:26 2009 +0000
    
        Use FileCheck and set nounwind on calls.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90790 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 654f5abad31952e2e447a598635ba9ce3a582a45
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 7 19:38:26 2009 +0000
    
        Apply Pekka Jääskeläinen's patch to raise the first virtual register
        number in order to accommodate targets with more than 1024 registers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90789 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f2379900d4ebb0568cc18e7102f1d3567f2fabea
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Mon Dec 7 19:36:34 2009 +0000
    
        Introduce the "@llvm.dbg.value" debug intrinsic.
    
        The semantics of llvm.dbg.value are that, starting from the point where it is executed, the specified offset into the given user source variable takes on the new value.
    
        An example:
          call void @llvm.dbg.value(metadata !{ i32 7 }, i64 0, metadata !2)
        Here the user source variable associated with metadata #2 gets the value "i32 7" at offset 0.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90788 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6fc593ad30a6640abdcd0e27aefcfae35eb874f5
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Dec 7 19:16:13 2009 +0000
    
        Simplify a bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90785 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 43f5c616b9938842aebaa8de4451aca52c408f04
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Dec 7 19:15:57 2009 +0000
    
        Throw 'const char*' instead of 'std::string'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90784 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 726e81006a54d5ee21372b593b6d0aebed32eade
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 7 19:04:49 2009 +0000
    
        add support for phi translation and incorporation of new expressions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90782 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ad1cc9c85c6ebd136855b133cfa0397741c2dcbf
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 7 19:04:31 2009 +0000
    
        Don't enable the post-RA scheduler on x86 except at -O3. In its
        current form, it is too expensive in compile time.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90781 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a3fce13cb0bd4c4afcf1c28f788a398df6a560dc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 7 18:36:53 2009 +0000
    
        checkpoint of the new PHITransAddr code, still not done and not used by
        anything.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90779 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e420496fa6dc7f2aa0cbec0a37f61197e5b402bc
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Dec 7 18:26:24 2009 +0000
    
        Regenerate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90776 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b045d745c550cdd03ae35f9ea4a9e325cbd50bbb
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Dec 7 18:26:11 2009 +0000
    
        Documentation update.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90775 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6b292727d8198ca5f85d501a741cb9e69f611038
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Dec 7 18:25:54 2009 +0000
    
        Deprecate 'unpack_values'.
    
        Use 'forward_values' + 'comma_separated' instead.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90774 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4b6e9d258dcb26f889bfd4c5c0139bfea7421b19
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Dec 7 17:03:21 2009 +0000
    
        Pass '-msse' and friends to llc as '-mattr=+/-'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90771 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f77867899cf80b436831429e58bac1f6a1ae66a1
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Dec 7 17:03:05 2009 +0000
    
        Implement 'forward_value' and 'forward_transformed_value'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90770 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1d52e693236fe83a47e438438a186d337f149ab0
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Dec 7 10:51:55 2009 +0000
    
        Refactoring, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90764 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c02f3dca734ac6709bf51d5768ed7b3f97b4a9e7
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Dec 7 10:15:19 2009 +0000
    
        Pre-regalloc tail duplication. Work in progress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90759 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 768358c2b996a799013fb0caa691ffe3b4f6e950
    Author: John Mosby <ojomojo at gmail.com>
    Date:   Mon Dec 7 09:06:37 2009 +0000
    
        fixed some typos in method comments, reworded some comments for clarity
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90754 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4aafc85f5a94802c121050ec113973249e295577
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Dec 7 05:29:59 2009 +0000
    
        Fixes the Atomic implementation when compiled with the MSVC compiler.
    
        sys::cas_flag should be long on this platform, InterlockedAdd() is
        defined only for the Itanium architecture (according to MSDN).
    
        Patch by Michael Beck!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90748 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 18552eb96f80811caa9f48141219ebeb2eadc5b3
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Dec 7 03:07:01 2009 +0000
    
        If BB is empty, insert PHI before end() instead of front().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90744 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e7fb83a04b4c05856465db53ad0981a8db72265c
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Dec 7 02:28:41 2009 +0000
    
        Some pretty-printing
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90742 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c15c59e7601f48d83728faa47d5bf676ad084383
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Dec 7 02:28:26 2009 +0000
    
        Truncate the arguments of llvm.frameaddress / llvm.returnaddress intrinsics from i32 to the platform's largest native type
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90741 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 71169a722a14e9f381d61253aa8ebafd28f3e5d9
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Dec 7 02:28:10 2009 +0000
    
        Add lowering of returnaddr and frameaddr intrinsics. Shamelessly stolen from x86 :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90740 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5a39d69350dc6cdf4088ee6b1864070e619ef6d3
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Dec 7 02:27:53 2009 +0000
    
        Initial codegen support for MSP430 ISRs
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90739 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56998002fed819fcf5d9dc28c49287454425506f
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Dec 7 02:27:35 2009 +0000
    
        Add MSP430 interrupt calling conv. No functionality change yet.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90738 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 57f67276b60064d0bf4c97ad459241377dff7e08
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Mon Dec 7 02:27:08 2009 +0000
    
        Add ability to select hw multiplier mode and select appropriate libcalls.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90737 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7363a6e009625b257d01c6c1a64653e17e2676cc
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Mon Dec 7 00:27:35 2009 +0000
    
        Fix typos. Thanks to John Tytgat for noticing it!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90728 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f7374f702cd95402ff597457c8b3772cf3770ecf
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Dec 6 22:39:50 2009 +0000
    
        Dynamic stack realignment's use of the sp register as source/dest register
        in "bic sp, sp, #15" leads to unpredictable behaviour in Thumb2 mode.
        Emit the following code instead:
        mov r4, sp
        bic r4, r4, #15
        mov sp, r4
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90724 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0cbb985ccdaaf5bc818e25d941a6e31d611deca4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Dec 6 17:17:23 2009 +0000
    
        fix PR5698
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90708 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ae4038f66b81c6a6017cac78817aa1c8dadd37d8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Dec 6 16:58:41 2009 +0000
    
        remove extraneous comma clang warns about
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90707 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4bb632ff31a7a273cd8c7411319b27dd5b532f9f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Dec 6 05:29:56 2009 +0000
    
        constant fold loads from memcpy's from global constants.  This is important
        because clang lowers nontrivial automatic struct/array inits to memcpy from
        a global array.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90698 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a96e53ae92d46707d04f238679ecb012ff07df99
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Dec 6 04:54:31 2009 +0000
    
        add support for forwarding mem intrinsic values to non-local loads.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90697 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d59cb5d5442025e89d4b6c16bf685a6f770d7add
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Dec 6 04:16:05 2009 +0000
    
        gvn is optimizing this better now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90696 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cb00f73e99d788c3cb30168d32e46f6970c93d48
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Dec 6 01:57:02 2009 +0000
    
        Handle forwarding local memsets to loads.  For example, we optimize this:
    
        short x(short *A) {
          memset(A, 1, sizeof(*A)*100);
          return A[42];
        }
    
        to 'return 257' instead of doing the load.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90695 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e77e9b0238cff581abc86626cd153f726478804
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Dec 6 01:56:22 2009 +0000
    
        Add helper methods for forming shift operations with a constant
        shift amount.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90694 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8e1e701a5bb2515de949ee95e457d208cbd36d19
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Dec 6 01:47:24 2009 +0000
    
        merge two tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90691 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 46999ba6514604c021f8ec30df15e47568840c85
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Sun Dec 6 00:06:33 2009 +0000
    
        CheckAtomic.cmake: Put all C++ code inside CHECK_CXX_SOURCE_COMPILES.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90685 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac88a7eecd22b24ede69845dd74cae286ff14816
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Sat Dec 5 23:19:33 2009 +0000
    
        Fix for atomic intrinsics detection when using MSVC.
    
        Patch by Michael Beck!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90683 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a182b3f8ba53a7b042461fd206a0807697ae6e85
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 17:56:26 2009 +0000
    
        Remove old DBG_LABEL code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90669 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 955748e8b311172018ee5ab478e84b1f663d6547
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 17:51:33 2009 +0000
    
        Remove the unused DisableLegalizeTypes option and related code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90668 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d5b718ce82f52d669b9657ec5534f5b37a152637
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Sat Dec 5 07:59:04 2009 +0000
    
        Calling InvalidateEntry during the refinement was breaking the bootstrap.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90656 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b347993aa457ca3c18247735eb301388d1fe2e71
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Sat Dec 5 07:46:49 2009 +0000
    
        Final cleanups:
    
        - Privatize a typedef.
        - Call the InvalidateEntry when refining a type.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90655 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 27cae32667e68db30ebfdee6fa4727696895967f
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Sat Dec 5 07:30:23 2009 +0000
    
        Temporarily revert r90502. It was causing the llvm-gcc bootstrap on PPC to fail.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90653 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9c9f6a6c8f5226419d4b0bac7c0cd5cf3f8d644d
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Dec 5 06:37:52 2009 +0000
    
        Document that memory use intrinsics may also return Def results.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90651 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea2236d55ea9e19aaedb6e4402a3be714aed7efd
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Dec 5 06:37:24 2009 +0000
    
        Fix indentation in switch statement.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90650 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b7bc81e5026c1c950667f0d96f499edc671e49c
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Dec 5 05:00:00 2009 +0000
    
        Generalize this optimization to work on equality comparisons between any two
        integers that are constant except for a single bit (the same n-th bit in each).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90646 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a1151bf0deb01cc4fd87b8a394196f1123c78566
    Author: Eric Christopher <echristo at apple.com>
    Date:   Sat Dec 5 02:46:03 2009 +0000
    
        More updates to objectsize intrinsic docs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90644 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1792bc68d2991042dceaa55f508a1c6c6a43b58a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 02:00:34 2009 +0000
    
        Don't print a space before the : between the file name and line number.
        And separate the directory and file name with a '/'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90641 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7ea7e173d2eb27dc6a36b0c93c3e48ebea5da821
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Sat Dec 5 01:46:01 2009 +0000
    
        Inline methods which are called only once.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90640 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ce8b25281afb6b4b39f9e8442275007de6ef920
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Sat Dec 5 01:43:33 2009 +0000
    
        Refactor some code. No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90639 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cfca6e367887ec0820cf5c664eec4c0714166980
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 01:42:34 2009 +0000
    
        Print newlines after printing labels for debug info, so that the output
        isn't cluttered with things like "Llabel47:Llabel48:  movq  (%rsi), %xmm3"
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90638 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c5b50fbff2c7d0f0e5f225da1c2f3f58299b35da
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 01:29:04 2009 +0000
    
        Don't blindly set the debug location for PHI node copies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90637 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 581cdf90ade3d318dedde0c645d478ffede09e0d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 01:27:58 2009 +0000
    
        Make TargetSelectInstruction protected and called from FastISel.cpp
        instead of SelectionDAGISel.cpp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90636 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 01696988ff3c98744cce105372803d4ed903f988
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 00:44:40 2009 +0000
    
        Remove the target hook TargetInstrInfo::BlockHasNoFallThrough in favor of
        MachineBasicBlock::canFallThrough(), which is target-independent and more
        thorough.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90634 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bc18967dc5cbda59461d704cc1543b4fbd57f592
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 00:32:59 2009 +0000
    
        Simplify this code: don't call AnalyzeBranch before doing simpler checks.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90633 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ca1ac3967582cd4c02a12054b927adc1e111e52
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 00:27:08 2009 +0000
    
        The debug information for an LLVM Instruction applies to that Instruction
        and that Instruction only. Implement this by setting the "current debug position"
        back to Unknown after processing each instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90632 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4698bab134cd68e41ae098168d9fd78e64b69904
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 00:23:29 2009 +0000
    
        Fix this code to use DIScope instead of DICompileUnit, as in r90181.
        Don't print "SrcLine"; just print the filename and line number, which
        is obvious enough and more informative.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90631 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef92a646d90a7dc618598e7b836638f96238f245
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 00:20:51 2009 +0000
    
        Don't print the debug directory; it's often long and uninteresting. Omit
        the column number if it is not known. Handle the case of a missing filename
        better.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90630 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a45e4926a3f8191749e5796b1829387c729a8a41
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 00:05:43 2009 +0000
    
        Minor code simplification.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90628 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1db628b4bc909ddc0d17ac0b2930cbf00a61191a
    Author: David Greene <greened at obbligato.org>
    Date:   Sat Dec 5 00:03:24 2009 +0000
    
        Remove an unneeded include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90627 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b62af25fd3f6b7b9efd33f31c2db6c806b5cb6c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Dec 5 00:02:37 2009 +0000
    
        Remove now-redundant llvm-as invocations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90626 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a2b73bd6b0f0b3271a9d63a9933bafbcba2555fe
    Author: David Greene <greened at obbligato.org>
    Date:   Fri Dec 4 23:55:07 2009 +0000
    
        Remove an unneeded include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90625 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 546f68a11248be2f32331f07c5baccf8d9ebb00d
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 4 23:29:57 2009 +0000
    
        Add testcase for PR4262.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90623 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e26680ba1f829b15338f0f128855432c5bfd7210
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Dec 4 23:19:55 2009 +0000
    
        Print a space between the comment character and the text.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90621 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6329f4f55dcdfb70f68498843c88b5a73dcd4c03
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 4 23:16:56 2009 +0000
    
        Temporarily revert r72620 because r72619 was reverted.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90619 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d3df6974d831cc29b86026ce3ee3467eb489ea7f
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Dec 4 23:10:24 2009 +0000
    
        In TAG_subrange_type, upper bound is zero indexed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90617 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef682b7fdd6c1fc93490a01b7ba4093e4c935771
    Author: David Greene <greened at obbligato.org>
    Date:   Fri Dec 4 23:08:02 2009 +0000
    
        Fix a bad merge.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90616 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 989d418a4e80243a7c1ef377b24129218ba99cd5
    Author: David Greene <greened at obbligato.org>
    Date:   Fri Dec 4 23:00:50 2009 +0000
    
        Update the TargetInstrInfo interfaces so hasLoad/StoreFrom/ToStackSlot
        can return a MachineMemOperand.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90615 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ef2d0921f1b88f1be8eb8c3380b4e17aee82bae
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Dec 4 22:46:47 2009 +0000
    
        Fix indentation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90613 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11a046f10f723cac2c598dd0a626383c43aecba2
    Author: David Greene <greened at obbligato.org>
    Date:   Fri Dec 4 22:46:04 2009 +0000
    
        Use new interfaces to print spill size.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90611 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 647636ff709918827317acb14e19f1f78731d140
    Author: David Greene <greened at obbligato.org>
    Date:   Fri Dec 4 22:38:46 2009 +0000
    
        Have hasLoad/StoreFrom/ToStackSlot return the relevant MachineMemOperand.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90608 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 17bd7fec405d4836c56e7f36d46c89d7a19dc25f
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Dec 4 21:57:37 2009 +0000
    
        Fix up some comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90603 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fcb8eeed72b0f2838ccf134e814aad2feb254192
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Dec 4 21:51:35 2009 +0000
    
        Fix 80-column violations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90601 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cfd40e5a54cf4299c3ec340152e1ae0350530b37
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Dec 4 21:41:24 2009 +0000
    
        OptParser: Emit HelpText field for option groups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90599 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c461971d6dea16b5a464a801979b133746016d2f
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Dec 4 21:03:02 2009 +0000
    
        Some code cleanup. No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90588 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit deb54c890a835d61f87054cd0f0fd89929fd7207
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Dec 4 20:07:10 2009 +0000
    
        Avoid creating a metadata slot for all metadata that contains an instruction
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90581 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 99298f61d3a280ade30c73faee24fd72e0cdc228
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 4 19:09:10 2009 +0000
    
        Handle recursive PHI's.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90575 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit abf657c53cc476649c440c08edc622f71cbc61ba
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Dec 4 18:29:23 2009 +0000
    
        Fix crasher when N->getElement(n) is NULL
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90572 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 371fcef75743c634263720bcd3a61a1615b90163
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 4 09:42:45 2009 +0000
    
        Add a pre-regalloc tail duplication pass.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90567 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6941493d76d1aaaffb2d5d505eef249f9145d07b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 4 09:23:37 2009 +0000
    
        Don't try to be cute with undef optimization here. Let ProcessImplicitDefs handle it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90566 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c4c88cfd8ee8bff54a68d0642d671ea778782e6d
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Dec 4 08:42:17 2009 +0000
    
        Add note about a subtle bug in this code.  Does not affect the main
        architectures that LLVM targets, because they don't use this code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90564 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6f27b6231b04b85963be239e4b4e38a4ca4ad282
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Dec 4 08:17:07 2009 +0000
    
        Fix typo and add missing include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90557 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 078fc85321a73e8e62dac97c35361306ffd0205c
    Author: Andreas Neustifter <astifter at gmx.at>
    Date:   Fri Dec 4 06:58:24 2009 +0000
    
        Added debug output for inherited passes that are invalidated.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90553 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14d122d1e12cccd89611132195c0c884ff20f08c
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Fri Dec 4 06:38:45 2009 +0000
    
        Forward -m32/-m64 to the linker.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90548 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b6244a12df638a1e891969f5d98c39061a4afedb
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Fri Dec 4 06:38:28 2009 +0000
    
        Support -march/-mtune/-mcpu.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90547 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a9c1d1c0c9628ce3b99e16e610ec41affcbaf614
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Dec 4 06:29:29 2009 +0000
    
        Fix PR5551 by not ignoring the top level constantexpr when
        folding a load from constant.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90545 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 264a3832a12498c4cf895d43ca3642faea987617
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Dec 4 04:15:36 2009 +0000
    
        add to cmake
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90539 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f3271997b64f2881fbfb1152031ed2e14337ecba
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Dec 4 02:12:12 2009 +0000
    
        Small and carefully crafted testcase showing a miscompilation by GVN
        that I'm working on.  This is manifesting as a miscompile of 255.vortex
        on some targets.  No check lines yet because it fails.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90520 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9003793e8d154cb9b14cc2391c46c68f97e2182c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Dec 4 02:10:16 2009 +0000
    
        add the start of a class used to handle phi translation in memdep and
        gvn (this is just a skeleton so far).  This will ultimately be used
        to fix a nasty miscompilation with GVN.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90518 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1358217bcbc89fe174040d453fbb63f4a2ca79d5
    Author: Mike Stump <mrs at apple.com>
    Date:   Fri Dec 4 01:53:15 2009 +0000
    
        Create yet another helper for Invoke.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90514 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 634753dbd76b1db3ed7a48af1d1fbc94da3daa93
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Dec 4 01:35:02 2009 +0000
    
        Teach AsmWriter to write inline (not via a global metadata slot) metadata that contains an instruction
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90512 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0c7dd2577afbb02d62879c8b6ac4ecd6d372d62
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Dec 4 01:33:04 2009 +0000
    
        Fix a comment typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90511 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cdbf27b90f3018b6bb3b66b3f58db960d445ef59
    Author: Mike Stump <mrs at apple.com>
    Date:   Fri Dec 4 01:26:26 2009 +0000
    
        Add some helpers for Invoke to mirror CreateCall helpers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90508 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6bf11742990180e29ca58bbe8f1149773ec6090a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Dec 4 01:03:32 2009 +0000
    
        add an assert to make it really clear what this is doing.  Return singularval as
        a compile time perf optimization to avoid a load.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90507 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 88d11c3a214da464deb05eb17922d633e5af77a1
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Dec 4 00:16:04 2009 +0000
    
        Also attempt trivial coalescing for live intervals that end in a copy.
    
        The coalescer is supposed to clean these up, but when setting up parameters
        for a function call, there may be copies to physregs. If the defining
        instruction has been LICM'ed far away, the coalescer won't touch it.
    
        The register allocation hint does not always work - when the register
        allocator is backtracking, it clears the hints.
    
        This patch takes care of a few more cases that r90163 missed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90502 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 723be602ecdfde30a50c3afe1780575fdf8f0f56
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Dec 4 00:09:05 2009 +0000
    
        - If the reaching definition is an undef and the use is a PHI, add the implicit_def to the end of the source block.
        - When reaching value is replaced with another, update the cache as well.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90501 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4285ddbfd5fd218038058a6fdb2366ae3247a99d
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Dec 3 23:46:57 2009 +0000
    
        Insert composite type DIE into the map before processing type fields. This allows fields to find their context DIE from the map.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90498 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 223c7146f919727a834a1fa5471f25ce169ab2a6
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Thu Dec 3 23:40:58 2009 +0000
    
        Add ParseInlineMetadata() which parses metadata that refers to an instruction.  Extend ParseParameterList() to use this new function so that calls to llvm.dbg.declare can pass inline metadata
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90497 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d003075f1b3921f20ac9da8e0310afa4cd9b2f04
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Dec 3 21:55:01 2009 +0000
    
        remove out of date FIXME.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90490 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 21d9a013924f79957deaa9dac748d722cc40b40a
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Dec 3 21:51:55 2009 +0000
    
        Handle undef values properly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90489 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 886ea36230307f67689a2f318adf47ec80dd8be4
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Dec 3 21:50:58 2009 +0000
    
        Watch out for PHI instruction with no source operands.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90488 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 505f207509d2814d54c957bd9324c59b332f35ba
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Dec 3 21:47:07 2009 +0000
    
        Fix a comment typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90487 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b956e74994c0d0e41f2862e79d96ec699de782de
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Thu Dec 3 21:37:32 2009 +0000
    
        Fix ExpandShiftWithUnknownAmountBit, which was completely bogus.
        Pointed out by Javier Martinez (who also provided a patch).  Since
        this logic is not used on (for example) x86, I guess nobody noticed.
        Tested by generating SHL, SRL, SRA on various choices of i64 for all
        possible shift amounts, and comparing with gcc.  Since I did this on
        x86-32, I had to force the use of ExpandShiftWithUnknownAmountBit.
        What I'm saying here is that I don't have a testcase I can add to the
        repository.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90482 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d0afa909ad8d1b108b4ad6925ca6b3a5de2817e2
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Dec 3 20:49:10 2009 +0000
    
        Clean up some loop logic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90481 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 188c85dc4247e23ddaa63e055972253de3901fdb
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Dec 3 19:11:07 2009 +0000
    
        Add support to emit debug info for virtual functions and virtual base classes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90474 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96d7b188600be74e1936f6ed871b1d83d017bbf8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Dec 3 19:03:18 2009 +0000
    
        Print a newline after the Args: line so that unrelated errs() output doesn't
        end up on the same line.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90473 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5e1265d554b730da926b3fee830c38e5dd9eefae
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Dec 3 13:23:03 2009 +0000
    
        Fix MSVC build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90454 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e047045d670b654099260d18939e7c997ef382b6
    Author: Andreas Neustifter <astifter at gmx.at>
    Date:   Thu Dec 3 12:55:57 2009 +0000
    
        Convert ProfileVerifier to template so it can be used for different types of ProfileInfo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90451 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f2aee46c884e3f2ef217f9c923eeb1f9b303f784
    Author: Andreas Neustifter <astifter at gmx.at>
    Date:   Thu Dec 3 12:41:14 2009 +0000
    
        Do not create negative edge weights in ProfileEstimator.
        Use integer values for weights to prevent rounding errors.
        Make ProfileEstimator more robust in general CFGs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90449 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eee658317491fb9c16588d9490a953cbe099099c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Dec 3 11:12:42 2009 +0000
    
        Add an implementation of the delta debugging algorithm.
         - This is a pretty slow / memory intensive implementation, and I will likely
           change it to an iterative model, but it works.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90447 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 86f33f89980cab23051ee961585883bb3627d7a8
    Author: Andreas Neustifter <astifter at gmx.at>
    Date:   Thu Dec 3 11:00:37 2009 +0000
    
        Use ProfileInfo-API in ProfileInfo Loader and do more assertions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90446 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5c3770a68b8978d3b7ab6abe6176590b8bbbe59e
    Author: Andreas Neustifter <astifter at gmx.at>
    Date:   Thu Dec 3 09:30:12 2009 +0000
    
        Converted ProfileInfo to template, added more API for ProfileInfo-preserving.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90445 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e83c9b214205978873745b7368df84cf9f117996
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Dec 3 08:43:53 2009 +0000
    
        Teach tail duplication to update SSA form. Work in progress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90432 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 112929ef9a90dece270087545a7b173e64dc6245
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 3 07:43:46 2009 +0000
    
        expand note.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90429 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dddb3a3b9e782e72a9568a31639ede110cf05131
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 3 07:41:54 2009 +0000
    
        add a note
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90428 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea5d5940df6894c2efc4c6ac76700922df97d8b2
    Author: Nate Begeman <natebegeman at mac.com>
    Date:   Thu Dec 3 07:11:29 2009 +0000
    
        Don't pull vector sext through both hands of a logical operation, since doing so prevents the fusion of vector sext and setcc into vsetcc.
        Add a testcase for the above transformation.
        Fix a bogus use of APInt noticed while tracking this down.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90423 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 12fac174a70da350938ffa425d60e5f314ca5a8d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 3 06:58:32 2009 +0000
    
        fix a build problem with VC++, PR5664, patch by Alp Toker!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90419 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 171422980d99d7689b2cac01504b981e87b61905
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Dec 3 06:40:55 2009 +0000
    
        Recognize canonical forms of vector shuffles where the same vector is used for
        both source operands.  In the canonical form, the 2nd operand is changed to an
        undef and the shuffle mask is adjusted to only reference elements from the 1st
        operand.  Radar 7434842.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90417 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ac6732b47ec36df1b6d990a62d21da02fddee2b
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Dec 3 05:15:35 2009 +0000
    
        Don't call getValueType() on a null SDValue
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90415 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2c405b9b2073b226155ef3e928ffa4f07e69607d
    Author: Owen Anderson <resistor at mac.com>
    Date:   Thu Dec 3 03:43:29 2009 +0000
    
        Fix this crasher, and add a FIXME for a missed optimization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90408 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 615cc8769c570cab6cb910255c0fab10256cf272
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Dec 3 02:31:43 2009 +0000
    
        Fill out codegen SSA updater. It's not yet tested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90395 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ff1f7f7ab396cf2f68d64a7bc54d04f7dde178ae
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Dec 3 01:54:07 2009 +0000
    
        Revert r90371. It was causing build failures.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90383 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 26f2fb73faa4634ef52b931f6581f127c79bc2f2
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Dec 3 01:49:56 2009 +0000
    
        Don't hang on to pointers or references after vector::push_back.
    
        The MO reference to a MachineOperand can be invalidated by
        MachineInstr::addOperand. Don't even use it for debugging.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90381 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5c17063f8a9795289eee9454110f066e225a72bd
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 3 01:46:18 2009 +0000
    
        add a failing testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90380 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c1df879dbd07bb4d63e7cc6358ed73263c631c12
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Dec 3 01:25:38 2009 +0000
    
        Emit method definition DIE at module level (even for methods with inlined function body at source level) so that the debugger can invoke it. This fixes many test failures in gdb test suite.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90375 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07af48541ca53034b99a5290bca9b8d45849860c
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Dec 3 01:15:46 2009 +0000
    
        Further improvements: refactoring code that does the same thing into one
        function, converting "dyn_cast" to "cast", asserting the correct things, and
        other general cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90371 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 610e0bd83840b8254dbc2ad0a4ba352eca16a720
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 3 01:10:05 2009 +0000
    
        yay for case insensitive file systems (?)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90370 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56de6c29b3028dc1d6e5dc203292de7bb44275d5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 3 01:05:45 2009 +0000
    
        fix PR5673 by being more careful about pointers to functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90369 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a262eaa4f9eb214d28918a397c07a80b6158bf0b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 3 00:55:04 2009 +0000
    
        remove some dead std::ostream using code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90366 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b44b429549bc4c4f950d56d4f6fa0ba486856cc6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Dec 3 00:50:42 2009 +0000
    
        improve portability to avoid conflicting with std::next in c++'0x.
        Patch by Howard Hinnant!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90365 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56eb10842f19c43f3970f6f6ba2d8b7dbf4807e9
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Dec 3 00:17:12 2009 +0000
    
        This initial code is meant to convert TargetData to use an AbstractTypesUser so
        that it doesn't have dangling pointers when abstract types are resolved. This
        modifies it somewhat to address comments: making the "StructLayoutMap" an
        anonymous structure, calling "removeAbstractTypeUser" when appropriate, and
        adding asserts where helpful.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90362 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 33221d9fcccbb6f41d5664fd0b89c9e6f71a0875
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Wed Dec 2 22:19:31 2009 +0000
    
        Fix CMake makefiles
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90354 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 023a88ee8ad038b8e7686187cee877299cdefbca
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Dec 2 22:02:52 2009 +0000
    
        Skeleton for MachineInstr level SSA updater.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90353 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd6fa9f3032a0781be6c8b4acd8aba62d5b96285
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Dec 2 22:02:20 2009 +0000
    
        Remove unnecessary check.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90352 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dee452ff502feea9a09247af651d25107748bc0e
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Dec 2 19:31:07 2009 +0000
    
        Add MaxStackAlignment.cpp to CMake
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90337 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4d20ee6d9a35d9498ad170c55ba714b346237a55
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Dec 2 19:30:24 2009 +0000
    
        Factor the stack alignment calculations out into a target independent pass.
        No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90336 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5eb5f091ab381b612c271a1683e6e0870394d0c4
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Dec 2 17:15:24 2009 +0000
    
        Don't count PHI instructions toward the limit for tail duplicating a block.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90326 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3168d8c736130382a1c7b92a452f3707cc69783a
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Dec 2 17:06:45 2009 +0000
    
        Move EliminateDuplicatePHINodes() from SimplifyCFG.cpp to Local.cpp
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90324 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 00e637ba6ff6e9749f67cbed852462839d14241b
    Author: Andreas Neustifter <astifter at gmx.at>
    Date:   Wed Dec 2 15:57:15 2009 +0000
    
        Cheap, mostly strict, stable sorting.
    
        This is necessary for tests so the results are comparable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90320 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aca44457101e481c0efc60e9e3f8a0c5a53e45aa
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Dec 2 15:33:44 2009 +0000
    
        Silence compiler warnings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90319 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df0f215750690b1c58f76b72f751d5a5c4da46d8
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Dec 2 15:25:16 2009 +0000
    
        Clarify that DIEString does not keep a copy of the string.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90318 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f187daf03250b0e1763bf07437d5048f6320789c
    Author: Owen Anderson <resistor at mac.com>
    Date:   Wed Dec 2 07:35:19 2009 +0000
    
        Cleanup/remove some parts of the lifetime region handling code in memdep and GVN,
        per Chris' comments.  Adjust testcases to match.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90304 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bc6fccc2682a75a1dcbf8904334bc3b137fdc213
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 2 06:44:58 2009 +0000
    
        factor some code better.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90299 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 648dc5a2106ef92818f54ef831ac051f7d238264
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 2 06:35:55 2009 +0000
    
        formatting cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90298 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 262590904ae923498bc704da374110de14289a2a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 2 06:05:42 2009 +0000
    
        tidy up, remove dependence on order of evaluation of function args from EmitMemCpy.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90297 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 53f9966b7a5454affde814ceb105c7f26f330617
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 2 05:34:35 2009 +0000
    
        merge sext-2 into sext.ll
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90293 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f7fc4e0d23f3c2feb72d872843465c572774479
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 2 05:32:33 2009 +0000
    
        rename test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90292 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 79e59484e54d1fb722525de5f7e23101e128e1b8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Dec 2 05:32:16 2009 +0000
    
        filecheckize
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90291 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3adb22982168a536036386dc27dd0522f0839658
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Wed Dec 2 04:59:58 2009 +0000
    
        Fixed an assertion failure for tracking sext of a vector of integers
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90290 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 69668fe63a3e714b363c251307dc34e243b1d101
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Dec 1 23:09:02 2009 +0000
    
        Add utility routine to create subprogram definition entry from subprogram declaration entry.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90282 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 920ffac27e66522138cddf6aac63f4965c8103dd
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Dec 1 23:07:59 2009 +0000
    
        Reuse existing subprogram DIE.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90281 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b05c79891c16cf8f9b86fc0a35e496f398ec5ebb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 1 22:51:41 2009 +0000
    
        return more useful error messages by using strerror to format errno
        instead of returning an ambiguous reason.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90275 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ad60d2d9e46713fde0421b91ebcd0a1369b142e5
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Dec 1 22:28:41 2009 +0000
    
        Update per Bill's comments. Work in progress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90271 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 578f3231d6c9ba6d6bd8c683033748cf4d68b623
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 1 22:25:00 2009 +0000
    
        Fix PR5391: support early clobber physical register def tied with a use (ewwww)
        - A valno should have HasRedefByEC set if there is an early clobber def in the middle of its live ranges. It should not be set if the valno itself is defined by an early clobber.
        - If a physical register def is tied to a use and it's an early clobber, that just means HasRedefByEC is set, since it's still one continuous live range.
        - Add a couple of missing checks for HasRedefByEC in the coalescer. In general, it should not coalesce a vr with a physical register if the physical register has an early clobber def somewhere. This is overly conservative, but that's the price for using such a nasty inline asm "feature".
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90269 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 79affd8fb46d75087c98aa4095dfc207c820d9f1
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Dec 1 21:53:51 2009 +0000
    
        test case for IV-Users simplification loop improvement
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90260 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 536fff516dd00816acf36d0b118e391a119e182b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 1 21:16:01 2009 +0000
    
        rename some variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90258 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 39380bfcb91f84d404d5adbf38e4b3916769cd82
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 1 21:15:15 2009 +0000
    
        tidy
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90257 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 69be7878c045ac40c9c847fe8d04f44e51663c04
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Dec 1 19:20:00 2009 +0000
    
        Add edge source labels to SelectionDAG graphs, now that the graph printing
        framework omits differentiated edge sources in the case where the labels
        are empty strings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90254 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8cb3747a8bb51ba03b38cf87e606e2bb33032d6e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Dec 1 19:16:15 2009 +0000
    
        Minor cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90253 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0bf95848e06bdfa04ec8a28cc22730ae139b9786
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Dec 1 19:13:27 2009 +0000
    
        Trim an unnecessary #include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90252 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 584f5f5e52830802e809b1c03c48de7dcdf87c4a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Dec 1 19:11:36 2009 +0000
    
        Don't default warnings to ON on MSVC, the spew is enough to triple the build time. :/
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90251 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67533ab25e3fd9b66779dee173f5cb3cfb93cb99
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Dec 1 18:13:48 2009 +0000
    
        Clear function specific containers while processing end of a function, even if DW_TAG_subprogram for current function is not found.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90247 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0fb7c4e1ae9b7c87037bc6a72adce439d23484c
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Dec 1 18:10:36 2009 +0000
    
        Thumb1 exception handling setjmp
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90246 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e1b1ad3abeb24f6da12afe63ac7a1074c1e3dbf
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Dec 1 17:37:06 2009 +0000
    
        For VLDM/VSTM (Advanced SIMD), set encoding bits Inst{11-8} to 0b1011.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90243 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a929cf5ff12aac2838d89389e1a90fceb731437
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Dec 1 17:13:31 2009 +0000
    
        Move PHIElimination::isLiveOut method to LiveVariables.
    
        We want LiveVariables clients to use methods rather than accessing the
        getVarInfo data structure directly. That way it will be possible to change the
        LiveVariables representation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90240 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cdb7736af562a153a1a3821737ec86876f0a817e
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Tue Dec 1 15:53:33 2009 +0000
    
        typo
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90236 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f3798876dfe29818e97cff43cba6785648253867
    Author: Gabor Greif <ggreif at gmail.com>
    Date:   Tue Dec 1 12:53:56 2009 +0000
    
        demonstrate usage of Cases() mapping several strings to the same value; remove trailing spaces
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90230 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 24dd55c3e41c2d655eefe760c446d9e24e44c287
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 1 09:47:11 2009 +0000
    
        Add relocation model options.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90222 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 977dffe8fde833b6c312d14b40d619e33d6ba866
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 1 09:19:09 2009 +0000
    
        Typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90221 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 825f4e1e87f1afff90a026ffe0876eff313bcf56
    Author: Tobias Grosser <grosser at fim.uni-passau.de>
    Date:   Tue Dec 1 08:43:33 2009 +0000
    
        Fix copy paste bug
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90220 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf50d21c781661a1637c48dbe4e2a412764729f3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 1 07:33:32 2009 +0000
    
        fix 255.vortex again, third time's the charm.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90217 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 39e844369d0a07edab15cf20037d6da328f8bd6c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 1 07:30:01 2009 +0000
    
        minimize this a bit more.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90216 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 488fe85c61807145b2d052a9104ceca80e17df74
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 1 06:51:30 2009 +0000
    
        Forward -save-temps to llvm-gcc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90214 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 333b7e1d9e904eff9b98e20984cbd8439b86282f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 1 06:22:10 2009 +0000
    
        merge 2009-11-29-ReverseMap.ll into crash.ll
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90212 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0cb431faf56460af265f0d9b92b7d7e49eb75b4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 1 06:04:43 2009 +0000
    
        fix PR5640 by tracking whether a block is the header of a loop more
        precisely, which prevents us from infinitely peeling the loop.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90211 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5a25c251af8588b170f76e7e479cdc947ca924e5
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 1 05:59:55 2009 +0000
    
        Support -[weak_]framework and -F in llvmc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90210 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6cb75ac35fdf87efaf27010947a6b19fae785ff4
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Dec 1 03:18:26 2009 +0000
    
        Remove the gcc builtins from the intrinsics, we'll lower them
        explicitly so we can check arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90199 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c79abe2ba9e6bd0909b9a00eddb54a8d500ec11c
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Dec 1 03:03:00 2009 +0000
    
        Use CFG connectedness as a secondary sort key when deciding the order of copy coalescing.
    
        This means that well connected blocks are copy coalesced before the less connected blocks. Connected blocks are more difficult to
        coalesce because intervals are more complicated, so handling them first gives a greater chance of success.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90194 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b958caea8cf2cd06c543188b32fd3f17bcd5652
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Dec 1 02:26:01 2009 +0000
    
        Add a soft link so that in an apple style build we can find libLTO.dylib.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90189 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3b581b33c31386a16bd1ae4ff121b0908e43c64a
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Tue Dec 1 02:21:51 2009 +0000
    
        Add two CMake flags, LLVM_ENABLE_PEDANTIC and LLVM_ENABLE_WERROR;
        PEDANTIC defaults to ON and WERROR defaults to OFF.

        Also add MSVC warnings. To disable warnings, turn off the flag
        LLVM_ENABLE_WARNINGS (default ON).
    
        Patch by Tobias Grosser!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90188 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d8bceade9b026d229b621399e196e8376396d0b7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 1 01:56:27 2009 +0000
    
        fix PR5649 by making fib use the JIT instead of the interpreter, patch by Perry Lorier!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90186 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b7aaf7b998d0006b775c3e494ea4b6d614f3ef81
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Dec 1 01:38:10 2009 +0000
    
        Add a comment about A[i+(j+1)].
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90185 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 05837cdf183324700fbe8231a655e9f7101057c6
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 1 00:59:58 2009 +0000
    
        Remove some validation errors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90184 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d90060edfdeaadf4563de2521221c201924e1790
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 1 00:53:11 2009 +0000
    
        Some formatting and spelling fixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90182 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af20e7cfb5d44c7a0caad3a152978fcca1f4b438
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Dec 1 00:45:56 2009 +0000
    
        Devang pointed out that this code should use DIScope instead of
        DICompileUnit. This code now prints debug filenames successfully.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90181 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1e99614c66e92b4f6c26531b8c84e0f42a35c902
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 1 00:44:45 2009 +0000
    
        Fix PR5614: parts of a physical register def may be killed by the rest.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90180 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de4b5d00385da6273b0902e9385928b99aeda670
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Dec 1 00:13:06 2009 +0000
    
        Test case for r90175.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90176 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cf4fad276042adc440ff0d119b9fb67a94814ccf
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Dec 1 00:02:02 2009 +0000
    
        For VMOV (immediate), make some of the encoding bits (cmode and op) unspecified.
        For VMOVv*i[16,32], op bit is don't care, and some cmode bits vary depending on
        the immediate values.
    
        Ref: Table A7-15 Modified immediate values for Advanced SIMD instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90173 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76b806763b89ea64337bd967614662a72f237191
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Nov 30 23:56:56 2009 +0000
    
        If a pointer type has a name, then do not ignore the name.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90172 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 63c90f69fe61ee5b0ff0ee8acf73274452da933e
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Nov 30 23:50:14 2009 +0000
    
        * CMakeLists.txt: Adds warnings flags for g++. Fixes PR 5647.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90170 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a86f9f058bf09127b7cf306770c353a6ffc41b3f
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Nov 30 23:48:51 2009 +0000
    
        * cmake/modules/LLVMLibDeps.cmake: Updated library dependencies.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90169 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 26219b78e2e252a08c7c59aceb05a86751cdf20e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 30 23:33:53 2009 +0000
    
        Minor whitespace fixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90166 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 66fe2bca32f043bf5bd15c732c80627621d7abb7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 30 23:33:37 2009 +0000
    
        Fix a minor inconsistency.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90165 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ff03c6950bb5ed48831ad9cd6b53fcd9fe7dfa8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Nov 30 23:30:43 2009 +0000
    
        Fix typos in comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90164 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c6e14f94aae2689841a94bc7aa059a8771e53867
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Mon Nov 30 22:55:54 2009 +0000
    
        New virtual registers created for spill intervals should inherit allocation hints from the original register.
    
        This helps us avoid silly copies when rematting values that are copied to a physical register:
    
        leaq	_.str44(%rip), %rcx
        movq	%rcx, %rsi
        call	_strcmp
    
        becomes:
    
        leaq	_.str44(%rip), %rsi
        call	_strcmp
    
        The coalescer will not touch the movq because that would tie down the physical register.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90163 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 180e3e4b507427ba5d81408a064adc5a3055557f
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Nov 30 22:23:29 2009 +0000
    
        Debug info is disabled on PPC Darwin.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90160 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3d0daa24dde6984c75c7fe1933653429169418f4
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 30 18:56:45 2009 +0000
    
        Reprioritize tests for tail duplication to be aggressive about indirect
        branches even when optimizing for code size.  Unless we find evidence to the
        contrary in the future, the special treatment for indirect branches does not
        have a significant effect on code size, and performance still matters with -Os.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90147 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 90868102bf6144cae08570ddcb96099d9d63c06d
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 30 18:35:03 2009 +0000
    
        Remove isProfitableToDuplicateIndirectBranch target hook.  It is profitable
        for all the processors where I have tried it, and even when it might not help
        performance, the cost is quite low.  The opportunities for duplicating
        indirect branches are limited by other factors so code size does not change
        much due to tail duplicating indirect branches aggressively.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90144 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3bff42e81121019af3ba61a13f087bba6ed37c54
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Nov 30 17:47:19 2009 +0000
    
        Fix some more ARM unified syntax warnings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90141 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 49462255e92b03de36a013dd23845d9c8e35821c
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Mon Nov 30 15:52:29 2009 +0000
    
        Fix odd declaration.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90138 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf4ea99c2887e55d18206e093bccdc43636c9b1b
    Author: Tobias Grosser <grosser at fim.uni-passau.de>
    Date:   Mon Nov 30 13:34:51 2009 +0000
    
        Fix last DOTGraphTraits problems in CompilationGraph.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90136 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9b1114ae03b624deb0ed6e6d1bb26cc2cd2685e
    Author: Tobias Grosser <grosser at fim.uni-passau.de>
    Date:   Mon Nov 30 13:14:13 2009 +0000
    
        Remove forgotten ShortNames in Trie and CompilationGraph
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90135 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 810b18c4cc8b022f79f0dd886cc9092a3e4f6d2c
    Author: Tobias Grosser <grosser at fim.uni-passau.de>
    Date:   Mon Nov 30 12:38:47 2009 +0000
    
        Remove ShortNames from getNodeLabel in DOTGraphTraits
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90134 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e2c3aec2cf16ee6ea233d3ec13a83e71e21523b9
    Author: Tobias Grosser <grosser at fim.uni-passau.de>
    Date:   Mon Nov 30 12:38:13 2009 +0000
    
        Instantiate DefaultDOTGraphTraits
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90133 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd748c9df057cd33fe255b9ff81e720518f82cb3
    Author: Tobias Grosser <grosser at fim.uni-passau.de>
    Date:   Mon Nov 30 12:37:39 2009 +0000
    
        Do not point edge heads to source labels
    
        If no destination label is available, just point to the node itself
        instead of pointing to some source label. Source and destination labels are
        not related in any way.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90132 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 989dd8528652afdfcf5125f7d206bc0f075861e7
    Author: Tobias Grosser <grosser at fim.uni-passau.de>
    Date:   Mon Nov 30 12:24:40 2009 +0000
    
        Only print edgeSourceLabels if they are not empty
    
        Graphviz can lay out the graphs better if a node does not contain source
        ports. Therefore only print the ports if the source ports are useful,
        that is, not labeled with the empty string "".
        This patch also simplifies graphs without any edgeSourceLabels, e.g. the
        dominance trees.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90131 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d6054ab3ef09773025dbcee2b923913a73a919d
    Author: Tobias Grosser <grosser at fim.uni-passau.de>
    Date:   Mon Nov 30 12:06:37 2009 +0000
    
        Small PostDominatorTree improvements
    
         * Do not SEGFAULT if tree entryNode() is NULL
         * Print function names in dotty printer
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90130 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e148e0b0e803768e12df1884159f143bf08f6e40
    Author: Tobias Grosser <grosser at fim.uni-passau.de>
    Date:   Mon Nov 30 11:55:24 2009 +0000
    
        Remove ":" after BB name in -view-cfg-only
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90129 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 767a372b1df0e26fbbe0dd8d3667e85d98c91109
    Author: Eric Christopher <echristo at apple.com>
    Date:   Mon Nov 30 08:03:53 2009 +0000
    
        First pass at llvm.objectsize documentation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90116 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9ae7435b67e6375d2e72cb4bd43955c0a58edad
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 30 07:05:51 2009 +0000
    
        Revert r90107, fixing test/Transforms/GVN/2009-11-29-ReverseMap.ll and the
        llvm-gcc build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90113 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 30a3e84e9b7aca47aaee2ed11bc20cf6336bd189
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 30 07:02:18 2009 +0000
    
        Add a testcase for the current llvm-gcc build failure.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90112 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 21c4cada89c23d0c6c943d7044d045c279804c05
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 30 04:23:17 2009 +0000
    
        Remove the 'simple jit' tutorial as it wasn't really being maintained and its
        material is covered by the Kaleidoscope tutorial.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90111 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f950eb5454f482bc93a1d58b179f3e9be707e686
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Mon Nov 30 02:42:27 2009 +0000
    
        Add test case for r90108
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90109 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c707f3ff7cdb427155b7ca707a3efd2c8b34bcd1
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Mon Nov 30 02:42:02 2009 +0000
    
        Added support to allow clients to custom widen. For X86, custom widen vectors for
        divide/remainder, since these operations can trap, by unrolling them and adding
        undefs for the resulting vector.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90108 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e1f454174e8ad32da74193471ca079a12c2bb18d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Nov 30 02:26:29 2009 +0000
    
        reapply r90093 with an addition of keeping the forward
        and reverse nonlocal memdep maps in synch, this should
        fix 255.vortex.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90107 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bfd72282b887511b8d0859777a6ca9851e062a85
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 30 02:23:57 2009 +0000
    
        Fix this test on 64-bit systems, which sometimes seem to use i64 for gep indices,
        while 32-bit gcc uses i32.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90106 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f215dece88649838a64b0d7740184713c487cf54
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Mon Nov 30 00:38:56 2009 +0000
    
        Commit r90099 made LLVM simplify one of these constant expressions a little
        more. Update the syntax we're checking for and filecheckize it too.
    
        This will fix the selfhost buildbots but will 'break' the others (sigh) because
        they're still linked against older LLVM which is emitting less optimized IR.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90104 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0425dc773b82135ec8b6e5e6a5140c06de86c6ce
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 29 21:40:55 2009 +0000
    
        Teach ConstantFolding to do a better job when folding gep(bitcast).
    
        This permits the devirtualization of llvm.org/PR3100#c9 when compiled by clang.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90099 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c04f8adbc1539eb593879d8e0f8977ac5155e605
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sun Nov 29 21:17:48 2009 +0000
    
        Revert r90089 for now, it's breaking selfhost.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90097 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b4ba06f907aac7ef1991e339ff4c262a301047f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 29 21:14:59 2009 +0000
    
        revert this patch for now, it causes failures of:
            LLVM::Transforms/GVN/2009-02-17-LoadPRECrash.ll
            LLVM::Transforms/GVN/2009-06-17-InvalidPRE.ll
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90096 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 02507fdead4d86a04c1c914549b2d2ab295c47c7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 29 21:09:36 2009 +0000
    
        Fix a really nasty caching bug I introduced in memdep.  An entry
        was being added to the Result vector, but not being put in the
        cache.  This means that if the cache was reused wholesale for a
        later query that it would be missing this entry and we'd do an
        incorrect load elimination.
    
        Unfortunately, it's not really possible to write a useful
        testcase for this, but this unbreaks 255.vortex.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90093 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7719b59199b6920ada130675f658bf3ec8521cd6
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sun Nov 29 20:29:30 2009 +0000
    
        Fix two FIXMEs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90089 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eac76e8c39d2f6370692677fdfc59c7d62e29e07
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Nov 29 18:10:39 2009 +0000
    
        Detabify.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90085 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9e73b69d560efe55408748561f294bfed36d7079
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Sun Nov 29 17:42:58 2009 +0000
    
        Remove dead returns.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90083 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cfd64f93fe4a893fe6ffda056c825d0a9e3f272a
    Author: Kovarththanan Rajaratnam <kovarththanan.rajaratnam at gmail.com>
    Date:   Sun Nov 29 17:19:48 2009 +0000
    
        This patch ensures that Path::GetMainExecutable is able to handle the
        case where realpath() fails. When this occurs we segfault trying to
        create a std::string from a NULL pointer.
    
        Fixes PR5635.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90082 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7143f3420c2832c342182ec8e5b87a4e5a19f99c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Nov 29 08:30:24 2009 +0000
    
        Fix FileCheck crash when fuzzy scanning starting at the end of the file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90065 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dbbf1b238bc38c26fc4f7b53a26e0d0538880d8a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 29 02:57:29 2009 +0000
    
        add testcases for the foo_with_overflow op xforms added recently and
        fix bugs exposed by the tests.  Testcases from Alastair Lynn!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90056 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 565b96428b6e7af0b87275a81471c5395fcbdc16
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 29 02:44:33 2009 +0000
    
        mark all the 'foo with overflow' intrinsics as readnone.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90055 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8aa4c019b5bb50ba9a3a5e2188a64081beb2f02b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 29 02:19:52 2009 +0000
    
        update and consolidate the load pre notes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90050 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6756dffb3a4685abfaa0455078c4d8f4c33ddc28
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 29 01:28:58 2009 +0000
    
        add PR#
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90049 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea7509881f1e6d5efaa80638b0a5ab5290c77651
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 29 01:15:43 2009 +0000
    
        Add a testcase for:
    
        void test(int N, double* G) {
          long j;
          for (j = 1; j < N - 1; j++)
              G[j] = G[j] + G[j+1] + G[j-1];
        }
    
        which we now compile to one load in the loop:
    
        LBB1_2:                                                     ## %bb
        	movsd	16(%rsi,%rax,8), %xmm2
        	incq	%rdx
        	addsd	%xmm2, %xmm1
        	addsd	%xmm1, %xmm0
        	movapd	%xmm2, %xmm1
        	movsd	%xmm0, 8(%rsi,%rax,8)
        	incq	%rax
        	cmpq	%rcx, %rax
        	jne	LBB1_2
    
        instead of:
    
        LBB1_2:                                                     ## %bb
        	movsd	8(%rsi,%rax,8), %xmm0
        	addsd	16(%rsi,%rax,8), %xmm0
        	addsd	(%rsi,%rax,8), %xmm0
        	movsd	%xmm0, 8(%rsi,%rax,8)
        	incq	%rax
        	cmpq	%rcx, %rax
        	jne	LBB1_2
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90048 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96bd9d9931be198b46c3197b67ed02aefc734bd4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 29 01:04:40 2009 +0000
    
        add a testcase for
    
        void test9(int N, double* G) {
          long j;
          for (j = 1; j < N - 1; j++)
              G[j+1] = G[j] + G[j+1];
        }
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90047 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 163e6ab29947e801b555e688e19af8460c8c7903
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Nov 29 00:51:17 2009 +0000
    
        Implement PR5634.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90046 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2dbc3f24619726bb5def46ed36116ed5bbb2ac85
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sat Nov 28 21:27:49 2009 +0000
    
        Teach memdep to look for memory use intrinsics during dependency queries. Fixes
        PR5574.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90045 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 80c535b9194e1ce2927ca9b55d010a8fcb5833b3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 28 16:08:18 2009 +0000
    
        reenable load address insertion in load pre.  This allows us to
        handle cases like this:
        void test(int N, double* G) {
          long j;
          for (j = 1; j < N - 1; j++)
              G[j+1] = G[j] + G[j+1];
        }
    
        where G[1] isn't live into the loop.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90041 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1c2de2bd49cd56c50c9251f22727a126dea78f43
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 28 15:39:14 2009 +0000
    
        Enhance InsertPHITranslatedPointer to be able to return a list of newly
        inserted instructions.  No functionality change until someone starts using it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90039 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c343983d722c5552dea693c85bf62406217dc097
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 28 15:12:41 2009 +0000
    
        implement a FIXME: limit the recursion depth of DecomposeGEPExpression
        the same way that getUnderlyingObject does.
    
        This fixes the 'DecomposeGEPExpression and getUnderlyingObject disagree!'
        assertion on sqlite3.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90038 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed81875b336e7ec36612a2ae49fbbdf7867f08d2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Nov 28 14:54:10 2009 +0000
    
        enable code to handle un-phi-translatable cases more aggressively:
        if we don't have an address expression available in a predecessor,
        then model this as the value being clobbered at the end of the pred
        block instead of being modeled as a complete phi translation failure.
        This is important for PRE of loads because we want to see that the
        load is available in all but this predecessor, and complete phi
        translation failure results in not getting any information about
        predecessors.
    
        This doesn't do anything until I re-enable code insertion, since PRE
        now sees that it is available in all but one predecessor, but can't
        insert the addressing in the predecessor that is missing it to
        eliminate the redundancy.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90037 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de0b030de1873764c1f3571b75b5f75125ac35f8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 22:50:07 2009 +0000
    
        disable value insertion for now, I need to figure out how
        to inform GVN about the newly inserted values.  This fixes
        PR5631.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90022 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a5bef157088fc71eeb2a968272d270cb94169524
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 22:05:15 2009 +0000
    
        Rework InsertPHITranslatedPointer to handle the recursive case, this
        fixes PR5630 and sets the stage for the next phase of goodness (testcase
        pending).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90019 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3037e0ade3201ff57bc4f486e7dfacc6b7459d47
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 20:25:30 2009 +0000
    
        recursively phi translate bitcast operands too, for consistency.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90016 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e386372015e237a1e98f843920cf8d6449e0b444
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Fri Nov 27 19:57:53 2009 +0000
    
        Oops! Fix bug introduced in my recent cleanup change. Thanks to Tobias Grosser
        for pointing this out.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90015 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0c40e546d78a7f0cb9e42588afcd8a664a816108
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 19:56:00 2009 +0000
    
        I accidentally implemented this :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90014 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d91a9144369b73ed3e066695ad922abc76b5f7de
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 19:11:31 2009 +0000
    
        add support for recursive phi translation and phi
        translation of add with immediate.  This allows us
        to optimize this function:
    
        void test(int N, double* G) {
          long j;
          G[1] = 1;
            for (j = 1; j < N - 1; j++)
                G[j+1] = G[j] + G[j+1];
        }
    
        to only do one load every iteration of the loop.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90013 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f198db016d5b8c1fb5f225f644efb5f5198c2471
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 18:08:30 2009 +0000
    
        add two simple test cases we now optimize (to one load in the loop each) and one we don't (corresponding to the fixme I added yesterday).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90012 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96076f73dbddf62f2bed35458afc5fbd5628a5e2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 17:42:22 2009 +0000
    
        factor some logic out of instcombine into a new SimplifyAddInst method.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90011 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 15aec3314b05e3d68f73fcc95b88fd4b9baad6b3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 17:12:30 2009 +0000
    
        add a deadargelim note.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90009 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7d147914a609049f7cda4a5c5ee8ee8abbde4782
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 16:53:57 2009 +0000
    
        This testcase is actually only partially redundant, and requires
        the FIXME I added yesterday to be implemented.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90008 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8c800c00ef9a6054218b11e5e1d65205b1297115
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 16:37:41 2009 +0000
    
        fix PR5436 by making the 'simple' case of SRoA not promote out of range
        array indexes.  The "complex" case of SRoA still handles them, and correctly.
    
        This fixes a weirdness where we'd correctly avoid transforming A[0][42] if
        the 42 was too large, but we'd only do it if it was one gep, not two separate
        ones.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90007 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3ad93b0636e31b90c74c658673130d1b6a4ffd8e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Nov 27 16:31:59 2009 +0000
    
        filecheckize
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90006 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1da9984b8f8bec04ec77e96ea4b9e1b358f9f4a2
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Nov 27 16:04:14 2009 +0000
    
        While this test is testing a problem in the generic part of codegen,
        the problem only shows up for msp430 and pic16, which is why it
        specifies them using -march.  But it is wrong to put such tests in
        CodeGen/Generic, since not everyone builds these targets.  Put a copy
        of the test in each of the target test directories.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90005 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87598b02a90f8027b59afb1433ae3e828e19c72a
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Nov 27 13:38:03 2009 +0000
    
        Vector types are no longer required to have a power-of-two length.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90004 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 59b538afbdb86d62b1caa0f549401cb1583101fa
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Nov 27 12:33:22 2009 +0000
    
        These code generator limitations have been removed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@90003 91177308-0d34-0410-b5e6-96231b3b80d8

diff --git a/libclamav/c++/llvm/CMakeLists.txt b/libclamav/c++/llvm/CMakeLists.txt
index cc3815d..c57fa93 100644
--- a/libclamav/c++/llvm/CMakeLists.txt
+++ b/libclamav/c++/llvm/CMakeLists.txt
@@ -177,6 +177,16 @@ set( CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${LLVM_BINARY_DIR}/lib )
 add_llvm_definitions( -D__STDC_LIMIT_MACROS )
 add_llvm_definitions( -D__STDC_CONSTANT_MACROS )
 
+# MSVC has a gazillion warnings with this.
+if( MSVC )
+  option(LLVM_ENABLE_WARNINGS "Enable compiler warnings." OFF)
+else( MSVC )
+  option(LLVM_ENABLE_WARNINGS "Enable compiler warnings." ON)
+endif()
+
+option(LLVM_ENABLE_PEDANTIC "Compile with pedantic enabled." ON)
+option(LLVM_ENABLE_WERROR "Fail and stop if a warning is triggered." OFF)
+
 if( CMAKE_SIZEOF_VOID_P EQUAL 8 AND NOT WIN32 )
   # TODO: support other platforms and toolchains.
   option(LLVM_BUILD_32_BITS "Build 32 bits executables and libraries." OFF)
@@ -212,6 +222,27 @@ if( MSVC )
     add_llvm_definitions("/${LLVM_USE_CRT}")
     message(STATUS "Using VC++ CRT: ${LLVM_USE_CRT}")
   endif (NOT ${LLVM_USE_CRT} STREQUAL "")
+
+  # Enable warnings
+  if (LLVM_ENABLE_WARNINGS)
+    add_llvm_definitions( /W4 /Wall )
+    if (LLVM_ENABLE_PEDANTIC)
+      # No MSVC equivalent available
+    endif (LLVM_ENABLE_PEDANTIC)
+  endif (LLVM_ENABLE_WARNINGS)
+  if (LLVM_ENABLE_WERROR)
+    add_llvm_definitions( /WX )
+  endif (LLVM_ENABLE_WERROR)
+elseif( CMAKE_COMPILER_IS_GNUCXX )
+  if (LLVM_ENABLE_WARNINGS)
+    add_llvm_definitions( -Wall -W -Wno-unused-parameter -Wwrite-strings )
+    if (LLVM_ENABLE_PEDANTIC)
+      add_llvm_definitions( -pedantic -Wno-long-long )
+    endif (LLVM_ENABLE_PEDANTIC)
+  endif (LLVM_ENABLE_WARNINGS)
+  if (LLVM_ENABLE_WERROR)
+    add_llvm_definitions( -Werror )
+  endif (LLVM_ENABLE_WERROR)
 endif( MSVC )
 
 include_directories( ${LLVM_BINARY_DIR}/include ${LLVM_MAIN_INCLUDE_DIR})
diff --git a/libclamav/c++/llvm/Makefile.config.in b/libclamav/c++/llvm/Makefile.config.in
index 44296a4..2cc69dc 100644
--- a/libclamav/c++/llvm/Makefile.config.in
+++ b/libclamav/c++/llvm/Makefile.config.in
@@ -313,7 +313,7 @@ endif
 # Location of the plugin header file for gold.
 BINUTILS_INCDIR := @BINUTILS_INCDIR@
 
-C_INCLUDE_DIRS := @C_INCLUDE_DISR@
+C_INCLUDE_DIRS := @C_INCLUDE_DIRS@
 CXX_INCLUDE_ROOT := @CXX_INCLUDE_ROOT@
 CXX_INCLUDE_ARCH := @CXX_INCLUDE_ARCH@
 CXX_INCLUDE_32BIT_DIR = @CXX_INCLUDE_32BIT_DIR@
diff --git a/libclamav/c++/llvm/autoconf/configure.ac b/libclamav/c++/llvm/autoconf/configure.ac
index 9519698..9ebaadc 100644
--- a/libclamav/c++/llvm/autoconf/configure.ac
+++ b/libclamav/c++/llvm/autoconf/configure.ac
@@ -672,7 +672,7 @@ case "$withval" in
   *) AC_MSG_ERROR([Invalid path for --with-ocaml-libdir. Provide full path]) ;;
 esac
 
-AC_ARG_WITH(c-include-dir,
+AC_ARG_WITH(c-include-dirs,
   AS_HELP_STRING([--with-c-include-dirs],
     [Colon separated list of directories clang will search for headers]),,
     withval="")
diff --git a/libclamav/c++/llvm/cmake/modules/CheckAtomic.cmake b/libclamav/c++/llvm/cmake/modules/CheckAtomic.cmake
index 27bbaba..f40ff4d 100644
--- a/libclamav/c++/llvm/cmake/modules/CheckAtomic.cmake
+++ b/libclamav/c++/llvm/cmake/modules/CheckAtomic.cmake
@@ -1,14 +1,25 @@
 # atomic builtins are required for threading support.
 
 INCLUDE(CheckCXXSourceCompiles)
-	
+
 CHECK_CXX_SOURCE_COMPILES("
+#ifdef _MSC_VER
+#include <windows.h>
+#endif
 int main() {
+#ifdef _MSC_VER
+        volatile LONG val = 1;
+        MemoryBarrier();
+        InterlockedCompareExchange(&val, 0, 1);
+        InterlockedIncrement(&val);
+        InterlockedDecrement(&val);
+#else
         volatile unsigned long val = 1;
         __sync_synchronize();
         __sync_val_compare_and_swap(&val, 1, 0);
         __sync_add_and_fetch(&val, 1);
         __sync_sub_and_fetch(&val, 1);
+#endif
         return 0;
       }
 " LLVM_MULTITHREADED)
diff --git a/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake b/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake
index 677e1f9..6a35354 100644
--- a/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake
+++ b/libclamav/c++/llvm/cmake/modules/LLVMLibDeps.cmake
@@ -1,19 +1,58 @@
+set(MSVC_LIB_DEPS_LLVMARMAsmParser LLVMARMInfo LLVMMC)
+set(MSVC_LIB_DEPS_LLVMARMAsmPrinter LLVMARMCodeGen LLVMARMInfo LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMARMCodeGen LLVMARMInfo LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMARMInfo LLVMSupport)
+set(MSVC_LIB_DEPS_LLVMAlphaAsmPrinter LLVMAlphaInfo LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMAlphaCodeGen LLVMAlphaInfo LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMAlphaInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMAnalysis LLVMCore LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMArchive LLVMBitReader LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMAsmParser LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMAsmPrinter LLVMAnalysis LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMBitReader LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMBitWriter LLVMCore LLVMSupport LLVMSystem)
+set(MSVC_LIB_DEPS_LLVMBlackfinAsmPrinter LLVMAsmPrinter LLVMBlackfinInfo LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMBlackfinCodeGen LLVMBlackfinInfo LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMBlackfinInfo LLVMSupport)
+set(MSVC_LIB_DEPS_LLVMCBackend LLVMAnalysis LLVMCBackendInfo LLVMCodeGen LLVMCore LLVMScalarOpts LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils LLVMipa)
+set(MSVC_LIB_DEPS_LLVMCBackendInfo LLVMSupport)
+set(MSVC_LIB_DEPS_LLVMCellSPUAsmPrinter LLVMAsmPrinter LLVMCellSPUInfo LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMCellSPUCodeGen LLVMCellSPUInfo LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMCellSPUInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMCodeGen LLVMAnalysis LLVMCore LLVMMC LLVMScalarOpts LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils)
 set(MSVC_LIB_DEPS_LLVMCore LLVMSupport LLVMSystem)
+set(MSVC_LIB_DEPS_LLVMCppBackend LLVMCore LLVMCppBackendInfo LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMCppBackendInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMExecutionEngine LLVMCore LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMInstrumentation LLVMAnalysis LLVMCore LLVMScalarOpts LLVMSupport LLVMSystem LLVMTransformUtils)
 set(MSVC_LIB_DEPS_LLVMInterpreter LLVMCodeGen LLVMCore LLVMExecutionEngine LLVMSupport LLVMSystem LLVMTarget)
 set(MSVC_LIB_DEPS_LLVMJIT LLVMCodeGen LLVMCore LLVMExecutionEngine LLVMMC LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMLinker LLVMArchive LLVMBitReader LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMMC LLVMSupport LLVMSystem)
+set(MSVC_LIB_DEPS_LLVMMSIL LLVMAnalysis LLVMCodeGen LLVMCore LLVMMSILInfo LLVMScalarOpts LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils LLVMipa)
+set(MSVC_LIB_DEPS_LLVMMSILInfo LLVMSupport)
+set(MSVC_LIB_DEPS_LLVMMSP430AsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMMSP430Info LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMMSP430CodeGen LLVMCodeGen LLVMCore LLVMMC LLVMMSP430Info LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMMSP430Info LLVMSupport)
+set(MSVC_LIB_DEPS_LLVMMipsAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMMipsCodeGen LLVMMipsInfo LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMMipsCodeGen LLVMCodeGen LLVMCore LLVMMC LLVMMipsInfo LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMMipsInfo LLVMSupport)
+set(MSVC_LIB_DEPS_LLVMPIC16 LLVMAnalysis LLVMCodeGen LLVMCore LLVMMC LLVMPIC16Info LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMPIC16AsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMPIC16 LLVMPIC16Info LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMPIC16Info LLVMSupport)
+set(MSVC_LIB_DEPS_LLVMPowerPCAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMPowerPCInfo LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMPowerPCCodeGen LLVMCodeGen LLVMCore LLVMMC LLVMPowerPCInfo LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMPowerPCInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMScalarOpts LLVMAnalysis LLVMCore LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils)
 set(MSVC_LIB_DEPS_LLVMSelectionDAG LLVMAnalysis LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMSparcAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSparcInfo LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMSparcCodeGen LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSparcInfo LLVMSupport LLVMSystem LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMSparcInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMSystem )
+set(MSVC_LIB_DEPS_LLVMSystemZAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMSystemZInfo LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMSystemZCodeGen LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystemZInfo LLVMTarget)
+set(MSVC_LIB_DEPS_LLVMSystemZInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMTarget LLVMCore LLVMMC LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMTransformUtils LLVMAnalysis LLVMCore LLVMSupport LLVMSystem LLVMTarget LLVMipa)
 set(MSVC_LIB_DEPS_LLVMX86AsmParser LLVMMC LLVMX86Info)
@@ -21,5 +60,8 @@ set(MSVC_LIB_DEPS_LLVMX86AsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC L
 set(MSVC_LIB_DEPS_LLVMX86CodeGen LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget LLVMX86Info)
 set(MSVC_LIB_DEPS_LLVMX86Disassembler LLVMX86Info)
 set(MSVC_LIB_DEPS_LLVMX86Info LLVMSupport)
+set(MSVC_LIB_DEPS_LLVMXCore LLVMCodeGen LLVMCore LLVMMC LLVMSelectionDAG LLVMSupport LLVMSystem LLVMTarget LLVMXCoreInfo)
+set(MSVC_LIB_DEPS_LLVMXCoreAsmPrinter LLVMAsmPrinter LLVMCodeGen LLVMCore LLVMMC LLVMSupport LLVMSystem LLVMTarget LLVMXCoreInfo)
+set(MSVC_LIB_DEPS_LLVMXCoreInfo LLVMSupport)
 set(MSVC_LIB_DEPS_LLVMipa LLVMAnalysis LLVMCore LLVMSupport LLVMSystem)
 set(MSVC_LIB_DEPS_LLVMipo LLVMAnalysis LLVMCore LLVMSupport LLVMSystem LLVMTarget LLVMTransformUtils LLVMipa)
diff --git a/libclamav/c++/llvm/configure b/libclamav/c++/llvm/configure
index 4ef693f..3e0ca0a 100755
--- a/libclamav/c++/llvm/configure
+++ b/libclamav/c++/llvm/configure
@@ -5286,9 +5286,9 @@ echo "$as_me: error: Invalid path for --with-ocaml-libdir. Provide full path" >&
 esac
 
 
-# Check whether --with-c-include-dir was given.
-if test "${with_c_include_dir+set}" = set; then
-  withval=$with_c_include_dir;
+# Check whether --with-c-include-dirs was given.
+if test "${with_c_include_dirs+set}" = set; then
+  withval=$with_c_include_dirs;
 else
   withval=""
 fi
diff --git a/libclamav/c++/llvm/docs/CMake.html b/libclamav/c++/llvm/docs/CMake.html
index 2b7fda3..40a2cec 100644
--- a/libclamav/c++/llvm/docs/CMake.html
+++ b/libclamav/c++/llvm/docs/CMake.html
@@ -274,10 +274,21 @@
     compiler supports this flag. Some systems, like Windows, do not
     need this flag. Defaults to ON.</dd>
 
+  <dt><b>LLVM_ENABLE_WARNINGS</b>:BOOL</dt>
+  <dd>Enable all compiler warnings. Defaults to ON.</dd>
+
+  <dt><b>LLVM_ENABLE_PEDANTIC</b>:BOOL</dt>
+  <dd>Enable pedantic mode. This disables compiler-specific extensions, if
+    possible. Defaults to ON.</dd>
+
+  <dt><b>LLVM_ENABLE_WERROR</b>:BOOL</dt>
+  <dd>Stop and fail the build if a compiler warning is
+    triggered. Defaults to OFF.</dd>
+
   <dt><b>LLVM_BUILD_32_BITS</b>:BOOL</dt>
   <dd>Build 32-bits executables and libraries on 64-bits systems. This
-  option is available only on some 64-bits unix systems. Defaults to
-  OFF.</dd>
+    option is available only on some 64-bits unix systems. Defaults to
+    OFF.</dd>
 
   <dt><b>LLVM_TARGET_ARCH</b>:STRING</dt>
   <dd>LLVM target to use for native code generation. This is required
diff --git a/libclamav/c++/llvm/docs/CommandGuide/llvmc.pod b/libclamav/c++/llvm/docs/CommandGuide/llvmc.pod
index e3031e1..e5e0651 100644
--- a/libclamav/c++/llvm/docs/CommandGuide/llvmc.pod
+++ b/libclamav/c++/llvm/docs/CommandGuide/llvmc.pod
@@ -126,24 +126,31 @@ use the B<-Wo,> option.
 
 =item B<-I> I<directory>
 
-Add a directory to the header file search path.  This option can be
-repeated.
+Add a directory to the header file search path.
 
 =item B<-L> I<directory>
 
-Add I<directory> to the library search path.  This option can be
-repeated.
+Add I<directory> to the library search path.
+
+=item B<-F> I<directory>
+
+Add I<directory> to the framework search path.
 
 =item B<-l>I<name>
 
 Link in the library libI<name>.[bc | a | so].  This library should
 be a bitcode library.
 
+=item B<-framework> I<name>
+
+Link in the library libI<name>.[bc | a | so].  This library should
+be a bitcode library.
+
 =item B<-emit-llvm>
 
-Make the output be LLVM bitcode (with B<-c>) or assembly (with B<-S>) instead
-of native object (or assembly).  If B<-emit-llvm> is given without either B<-c>
-or B<-S> it has no effect.
+Output LLVM bitcode (with B<-c>) or assembly (with B<-S>) instead of native
+object (or assembly).  If B<-emit-llvm> is given without either B<-c> or B<-S>
+it has no effect.
 
 =item B<-Wa>
 
@@ -157,6 +164,10 @@ Pass options to linker.
 
 Pass options to opt.
 
+=item B<-Wllc>
+
+Pass options to llc (code generator).
+
 =back
 
 =head1 EXIT STATUS
diff --git a/libclamav/c++/llvm/docs/CompilerDriver.html b/libclamav/c++/llvm/docs/CompilerDriver.html
index 761d6ee..0a3f877 100644
--- a/libclamav/c++/llvm/docs/CompilerDriver.html
+++ b/libclamav/c++/llvm/docs/CompilerDriver.html
@@ -17,28 +17,28 @@ The ReST source lives in the directory 'tools/llvmc/doc'. -->
 <div class="contents topic" id="contents">
 <p class="topic-title first">Contents</p>
 <ul class="simple">
-<li><a class="reference internal" href="#introduction" id="id4">Introduction</a></li>
-<li><a class="reference internal" href="#compiling-with-llvmc" id="id5">Compiling with LLVMC</a></li>
-<li><a class="reference internal" href="#predefined-options" id="id6">Predefined options</a></li>
-<li><a class="reference internal" href="#compiling-llvmc-plugins" id="id7">Compiling LLVMC plugins</a></li>
-<li><a class="reference internal" href="#compiling-standalone-llvmc-based-drivers" id="id8">Compiling standalone LLVMC-based drivers</a></li>
-<li><a class="reference internal" href="#customizing-llvmc-the-compilation-graph" id="id9">Customizing LLVMC: the compilation graph</a></li>
-<li><a class="reference internal" href="#describing-options" id="id10">Describing options</a><ul>
-<li><a class="reference internal" href="#external-options" id="id11">External options</a></li>
+<li><a class="reference internal" href="#introduction" id="id8">Introduction</a></li>
+<li><a class="reference internal" href="#compiling-with-llvmc" id="id9">Compiling with LLVMC</a></li>
+<li><a class="reference internal" href="#predefined-options" id="id10">Predefined options</a></li>
+<li><a class="reference internal" href="#compiling-llvmc-plugins" id="id11">Compiling LLVMC plugins</a></li>
+<li><a class="reference internal" href="#compiling-standalone-llvmc-based-drivers" id="id12">Compiling standalone LLVMC-based drivers</a></li>
+<li><a class="reference internal" href="#customizing-llvmc-the-compilation-graph" id="id13">Customizing LLVMC: the compilation graph</a></li>
+<li><a class="reference internal" href="#describing-options" id="id14">Describing options</a><ul>
+<li><a class="reference internal" href="#external-options" id="id15">External options</a></li>
 </ul>
 </li>
-<li><a class="reference internal" href="#conditional-evaluation" id="id12">Conditional evaluation</a></li>
-<li><a class="reference internal" href="#writing-a-tool-description" id="id13">Writing a tool description</a><ul>
-<li><a class="reference internal" href="#actions" id="id14">Actions</a></li>
+<li><a class="reference internal" href="#conditional-evaluation" id="id16">Conditional evaluation</a></li>
+<li><a class="reference internal" href="#writing-a-tool-description" id="id17">Writing a tool description</a><ul>
+<li><a class="reference internal" href="#id5" id="id18">Actions</a></li>
 </ul>
 </li>
-<li><a class="reference internal" href="#language-map" id="id15">Language map</a></li>
-<li><a class="reference internal" href="#option-preprocessor" id="id16">Option preprocessor</a></li>
-<li><a class="reference internal" href="#more-advanced-topics" id="id17">More advanced topics</a><ul>
-<li><a class="reference internal" href="#hooks-and-environment-variables" id="id18">Hooks and environment variables</a></li>
-<li><a class="reference internal" href="#how-plugins-are-loaded" id="id19">How plugins are loaded</a></li>
-<li><a class="reference internal" href="#debugging" id="id20">Debugging</a></li>
-<li><a class="reference internal" href="#conditioning-on-the-executable-name" id="id21">Conditioning on the executable name</a></li>
+<li><a class="reference internal" href="#language-map" id="id19">Language map</a></li>
+<li><a class="reference internal" href="#option-preprocessor" id="id20">Option preprocessor</a></li>
+<li><a class="reference internal" href="#more-advanced-topics" id="id21">More advanced topics</a><ul>
+<li><a class="reference internal" href="#hooks-and-environment-variables" id="id22">Hooks and environment variables</a></li>
+<li><a class="reference internal" href="#how-plugins-are-loaded" id="id23">How plugins are loaded</a></li>
+<li><a class="reference internal" href="#debugging" id="id24">Debugging</a></li>
+<li><a class="reference internal" href="#conditioning-on-the-executable-name" id="id25">Conditioning on the executable name</a></li>
 </ul>
 </li>
 </ul>
@@ -46,7 +46,7 @@ The ReST source lives in the directory 'tools/llvmc/doc'. -->
 <div class="doc_author">
 <p>Written by <a href="mailto:foldr at codedgers.com">Mikhail Glushenkov</a></p>
 </div><div class="section" id="introduction">
-<h1><a class="toc-backref" href="#id4">Introduction</a></h1>
+<h1><a class="toc-backref" href="#id8">Introduction</a></h1>
 <p>LLVMC is a generic compiler driver, designed to be customizable and
 extensible. It plays the same role for LLVM as the <tt class="docutils literal"><span class="pre">gcc</span></tt> program
 does for GCC - LLVMC's job is essentially to transform a set of input
@@ -63,7 +63,7 @@ example, as a build tool for game resources.</p>
 need to be familiar with it to customize LLVMC.</p>
 </div>
 <div class="section" id="compiling-with-llvmc">
-<h1><a class="toc-backref" href="#id5">Compiling with LLVMC</a></h1>
+<h1><a class="toc-backref" href="#id9">Compiling with LLVMC</a></h1>
 <p>LLVMC tries hard to be as compatible with <tt class="docutils literal"><span class="pre">gcc</span></tt> as possible,
 although there are some small differences. Most of the time, however,
 you shouldn't be able to notice them:</p>
@@ -100,7 +100,7 @@ hello
 possible to choose the <tt class="docutils literal"><span class="pre">clang</span></tt> compiler with the <tt class="docutils literal"><span class="pre">-clang</span></tt> option.</p>
 </div>
 <div class="section" id="predefined-options">
-<h1><a class="toc-backref" href="#id6">Predefined options</a></h1>
+<h1><a class="toc-backref" href="#id10">Predefined options</a></h1>
 <p>LLVMC has some built-in options that can't be overridden in the
 configuration libraries:</p>
 <ul class="simple">
@@ -137,7 +137,7 @@ their standard meaning.</li>
 </ul>
 </div>
 <div class="section" id="compiling-llvmc-plugins">
-<h1><a class="toc-backref" href="#id7">Compiling LLVMC plugins</a></h1>
+<h1><a class="toc-backref" href="#id11">Compiling LLVMC plugins</a></h1>
 <p>It's easiest to start working on your own LLVMC plugin by copying the
 skeleton project which lives under <tt class="docutils literal"><span class="pre">$LLVMC_DIR/plugins/Simple</span></tt>:</p>
 <pre class="literal-block">
@@ -176,7 +176,7 @@ $ llvmc -load $LLVM_DIR/Release/lib/plugin_llvmc_Simple.so
 </pre>
 </div>
 <div class="section" id="compiling-standalone-llvmc-based-drivers">
-<h1><a class="toc-backref" href="#id8">Compiling standalone LLVMC-based drivers</a></h1>
+<h1><a class="toc-backref" href="#id12">Compiling standalone LLVMC-based drivers</a></h1>
 <p>By default, the <tt class="docutils literal"><span class="pre">llvmc</span></tt> executable consists of a driver core plus several
 statically linked plugins (<tt class="docutils literal"><span class="pre">Base</span></tt> and <tt class="docutils literal"><span class="pre">Clang</span></tt> at the moment). You can
 produce a standalone LLVMC-based driver executable by linking the core with your
@@ -215,7 +215,7 @@ $ make LLVMC_BUILTIN_PLUGINS=&quot;&quot;
 </pre>
 </div>
 <div class="section" id="customizing-llvmc-the-compilation-graph">
-<h1><a class="toc-backref" href="#id9">Customizing LLVMC: the compilation graph</a></h1>
+<h1><a class="toc-backref" href="#id13">Customizing LLVMC: the compilation graph</a></h1>
 <p>Each TableGen configuration file should include the common
 definitions:</p>
 <pre class="literal-block">
@@ -283,7 +283,7 @@ debugging), run <tt class="docutils literal"><span class="pre">llvmc</span> <spa
 <tt class="docutils literal"><span class="pre">gsview</span></tt> installed for this to work properly.</p>
 </div>
 <div class="section" id="describing-options">
-<h1><a class="toc-backref" href="#id10">Describing options</a></h1>
+<h1><a class="toc-backref" href="#id14">Describing options</a></h1>
 <p>Command-line options that the plugin supports are defined by using an
 <tt class="docutils literal"><span class="pre">OptionList</span></tt>:</p>
 <pre class="literal-block">
@@ -342,6 +342,11 @@ the <tt class="docutils literal"><span class="pre">--help</span></tt> output (bu
 output).</li>
 <li><tt class="docutils literal"><span class="pre">really_hidden</span></tt> - the option will not be mentioned in any help
 output.</li>
+<li><tt class="docutils literal"><span class="pre">comma_separated</span></tt> - Indicates that commas in the option's
+value should split it into multiple values for the option. This property is
+valid only for list options. In conjunction with
+<tt class="docutils literal"><span class="pre">forward_value</span></tt>, it can be used to implement option forwarding
+in the style of gcc's <tt class="docutils literal"><span class="pre">-Wa,</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">multi_val</span> <span class="pre">n</span></tt> - this option takes <em>n</em> arguments (can be useful in some
 special cases). Usage example: <tt class="docutils literal"><span class="pre">(parameter_list_option</span> <span class="pre">&quot;foo&quot;,</span> <span class="pre">(multi_val</span>
 <span class="pre">3))</span></tt>; the command-line syntax is '-foo a b c'. Only list options can have
@@ -352,13 +357,13 @@ parameter), or a boolean (if it is a switch; boolean constants are called
 <tt class="docutils literal"><span class="pre">true</span></tt> and <tt class="docutils literal"><span class="pre">false</span></tt>). List options can't have this attribute. Usage
 examples: <tt class="docutils literal"><span class="pre">(switch_option</span> <span class="pre">&quot;foo&quot;,</span> <span class="pre">(init</span> <span class="pre">true))</span></tt>; <tt class="docutils literal"><span class="pre">(prefix_option</span> <span class="pre">&quot;bar&quot;,</span>
 <span class="pre">(init</span> <span class="pre">&quot;baz&quot;))</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">extern</span></tt> - this option is defined in some other plugin, see below.</li>
+<li><tt class="docutils literal"><span class="pre">extern</span></tt> - this option is defined in some other plugin, see <a class="reference internal" href="#extern">below</a>.</li>
 </ul>
 </blockquote>
 </li>
 </ul>
 <div class="section" id="external-options">
-<h2><a class="toc-backref" href="#id11">External options</a></h2>
+<span id="extern"></span><h2><a class="toc-backref" href="#id15">External options</a></h2>
 <p>Sometimes, when linking several plugins together, one plugin needs to
 access options defined in some other plugin. Because of the way
 options are implemented, such options must be marked as
@@ -374,7 +379,7 @@ ignored. See also the section on plugin <a class="reference internal" href="#pri
 </div>
 </div>
 <div class="section" id="conditional-evaluation">
-<span id="case"></span><h1><a class="toc-backref" href="#id12">Conditional evaluation</a></h1>
+<span id="case"></span><h1><a class="toc-backref" href="#id16">Conditional evaluation</a></h1>
 <p>The 'case' construct is the main means by which programmability is
 achieved in LLVMC. It can be used to calculate edge weights, program
 actions and modify the shell commands to be executed. The 'case'
@@ -433,7 +438,7 @@ a given value.
 Example: <tt class="docutils literal"><span class="pre">(parameter_equals</span> <span class="pre">&quot;W&quot;,</span> <span class="pre">&quot;all&quot;)</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">element_in_list</span></tt> - Returns true if a command-line parameter
 list contains a given value.
-Example: <tt class="docutils literal"><span class="pre">(parameter_in_list</span> <span class="pre">&quot;l&quot;,</span> <span class="pre">&quot;pthread&quot;)</span></tt>.</li>
+Example: <tt class="docutils literal"><span class="pre">(element_in_list</span> <span class="pre">&quot;l&quot;,</span> <span class="pre">&quot;pthread&quot;)</span></tt>.</li>
 <li><tt class="docutils literal"><span class="pre">input_languages_contain</span></tt> - Returns true if a given language
 belongs to the current input language set.
 Example: <tt class="docutils literal"><span class="pre">(input_languages_contain</span> <span class="pre">&quot;c++&quot;)</span></tt>.</li>
@@ -475,7 +480,7 @@ argument. Example: <tt class="docutils literal"><span class="pre">(not</span> <s
 </ul>
 </div>
 <div class="section" id="writing-a-tool-description">
-<h1><a class="toc-backref" href="#id13">Writing a tool description</a></h1>
+<h1><a class="toc-backref" href="#id17">Writing a tool description</a></h1>
 <p>As was said earlier, nodes in the compilation graph represent tools,
 which are described separately. A tool definition looks like this
 (taken from the <tt class="docutils literal"><span class="pre">include/llvm/CompilerDriver/Tools.td</span></tt> file):</p>
@@ -512,12 +517,12 @@ list of input files and joins them together. Used for linkers.</li>
 tools are passed to this tool.</li>
 <li><tt class="docutils literal"><span class="pre">actions</span></tt> - A single big <tt class="docutils literal"><span class="pre">case</span></tt> expression that specifies how
 this tool reacts on command-line options (described in more detail
-below).</li>
+<a class="reference internal" href="#actions">below</a>).</li>
 </ul>
 </li>
 </ul>
-<div class="section" id="actions">
-<h2><a class="toc-backref" href="#id14">Actions</a></h2>
+<div class="section" id="id5">
+<span id="actions"></span><h2><a class="toc-backref" href="#id18">Actions</a></h2>
 <p>A tool often needs to react to command-line options, and this is
 precisely what the <tt class="docutils literal"><span class="pre">actions</span></tt> property is for. The next example
 illustrates this feature:</p>
@@ -550,28 +555,31 @@ like a linker.</p>
 <li><p class="first">Possible actions:</p>
 <blockquote>
 <ul class="simple">
-<li><tt class="docutils literal"><span class="pre">append_cmd</span></tt> - append a string to the tool invocation
-command.
-Example: <tt class="docutils literal"><span class="pre">(case</span> <span class="pre">(switch_on</span> <span class="pre">&quot;pthread&quot;),</span> <span class="pre">(append_cmd</span>
-<span class="pre">&quot;-lpthread&quot;))</span></tt></li>
-<li><tt class="docutils literal"><span class="pre">error</span></tt> - exit with error.
+<li><tt class="docutils literal"><span class="pre">append_cmd</span></tt> - Append a string to the tool invocation command.
+Example: <tt class="docutils literal"><span class="pre">(case</span> <span class="pre">(switch_on</span> <span class="pre">&quot;pthread&quot;),</span> <span class="pre">(append_cmd</span> <span class="pre">&quot;-lpthread&quot;))</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">error</span></tt> - Exit with error.
 Example: <tt class="docutils literal"><span class="pre">(error</span> <span class="pre">&quot;Mixing</span> <span class="pre">-c</span> <span class="pre">and</span> <span class="pre">-S</span> <span class="pre">is</span> <span class="pre">not</span> <span class="pre">allowed!&quot;)</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">warning</span></tt> - print a warning.
+<li><tt class="docutils literal"><span class="pre">warning</span></tt> - Print a warning.
 Example: <tt class="docutils literal"><span class="pre">(warning</span> <span class="pre">&quot;Specifying</span> <span class="pre">both</span> <span class="pre">-O1</span> <span class="pre">and</span> <span class="pre">-O2</span> <span class="pre">is</span> <span class="pre">meaningless!&quot;)</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">forward</span></tt> - forward an option unchanged.  Example: <tt class="docutils literal"><span class="pre">(forward</span> <span class="pre">&quot;Wall&quot;)</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">forward_as</span></tt> - Change the name of an option, but forward the
-argument unchanged.
+<li><tt class="docutils literal"><span class="pre">forward</span></tt> - Forward the option unchanged.
+Example: <tt class="docutils literal"><span class="pre">(forward</span> <span class="pre">&quot;Wall&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">forward_as</span></tt> - Change the option's name, but forward the argument
+unchanged.
 Example: <tt class="docutils literal"><span class="pre">(forward_as</span> <span class="pre">&quot;O0&quot;,</span> <span class="pre">&quot;--disable-optimization&quot;)</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">output_suffix</span></tt> - modify the output suffix of this
-tool.
+<li><tt class="docutils literal"><span class="pre">forward_value</span></tt> - Forward only the option's value. Cannot be used with switch
+options (since they don't have values), but works fine with lists.
+Example: <tt class="docutils literal"><span class="pre">(forward_value</span> <span class="pre">&quot;Wa,&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">forward_transformed_value</span></tt> - As above, but applies a hook to the
+option's value before forwarding (see <a class="reference internal" href="#hooks">below</a>). When
+<tt class="docutils literal"><span class="pre">forward_transformed_value</span></tt> is applied to a list
+option, the hook must have signature
+<tt class="docutils literal"><span class="pre">std::string</span> <span class="pre">hooks::HookName</span> <span class="pre">(const</span> <span class="pre">std::vector&lt;std::string&gt;&amp;)</span></tt>.
+Example: <tt class="docutils literal"><span class="pre">(forward_transformed_value</span> <span class="pre">&quot;m&quot;,</span> <span class="pre">&quot;ConvertToMAttr&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">output_suffix</span></tt> - Modify the output suffix of this tool.
 Example: <tt class="docutils literal"><span class="pre">(output_suffix</span> <span class="pre">&quot;i&quot;)</span></tt>.</li>
-<li><tt class="docutils literal"><span class="pre">stop_compilation</span></tt> - stop compilation after this tool processes
-its input. Used without arguments.</li>
-<li><tt class="docutils literal"><span class="pre">unpack_values</span></tt> - used for for splitting and forwarding
-comma-separated lists of options, e.g. <tt class="docutils literal"><span class="pre">-Wa,-foo=bar,-baz</span></tt> is
-converted to <tt class="docutils literal"><span class="pre">-foo=bar</span> <span class="pre">-baz</span></tt> and appended to the tool invocation
-command.
-Example: <tt class="docutils literal"><span class="pre">(unpack_values</span> <span class="pre">&quot;Wa,&quot;)</span></tt>.</li>
+<li><tt class="docutils literal"><span class="pre">stop_compilation</span></tt> - Stop compilation after this tool processes its
+input. Used without arguments.
+Example: <tt class="docutils literal"><span class="pre">(stop_compilation)</span></tt>.</li>
 </ul>
 </blockquote>
 </li>
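[Editorial aside: the hunk above replaces the old `unpack_values` action with the `comma_separated` + `forward_value` combination. As a host-side illustration of the splitting behaviour being described (the helper name and everything else here is invented for the sketch, not part of the LLVMC API):]

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a comma-separated option value, e.g. the "-foo=bar,-baz" part
// of "-Wa,-foo=bar,-baz", into the individual values to forward.
// (Illustrative helper only; not part of the LLVMC API.)
std::vector<std::string> splitCommaValues(const std::string& value) {
  std::vector<std::string> out;
  std::stringstream ss(value);
  std::string item;
  while (std::getline(ss, item, ','))
    out.push_back(item);
  return out;
}
```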
@@ -579,7 +587,7 @@ Example: <tt class="docutils literal"><span class="pre">(unpack_values</span> <s
 </div>
 </div>
 <div class="section" id="language-map">
-<h1><a class="toc-backref" href="#id15">Language map</a></h1>
+<h1><a class="toc-backref" href="#id19">Language map</a></h1>
 <p>If you are adding support for a new language to LLVMC, you'll need to
 modify the language map, which defines mappings from file extensions
 to language names. It is used to choose the proper toolchain(s) for a
@@ -602,7 +610,7 @@ multiple output languages, for nodes &quot;inside&quot; the graph the input and
 output languages should match. This is enforced at compile-time.</p>
 </div>
 <div class="section" id="option-preprocessor">
-<h1><a class="toc-backref" href="#id16">Option preprocessor</a></h1>
+<h1><a class="toc-backref" href="#id20">Option preprocessor</a></h1>
 <p>It is sometimes useful to run error-checking code before processing the
 compilation graph. For example, if optimization options &quot;-O1&quot; and &quot;-O2&quot; are
 implemented as switches, we might want to output a warning if the user invokes
@@ -629,9 +637,9 @@ in <tt class="docutils literal"><span class="pre">OptionPreprocessor</span></tt>
 convenience, <tt class="docutils literal"><span class="pre">unset_option</span></tt> also works on lists.</p>
 </div>
 <div class="section" id="more-advanced-topics">
-<h1><a class="toc-backref" href="#id17">More advanced topics</a></h1>
+<h1><a class="toc-backref" href="#id21">More advanced topics</a></h1>
 <div class="section" id="hooks-and-environment-variables">
-<span id="hooks"></span><h2><a class="toc-backref" href="#id18">Hooks and environment variables</a></h2>
+<span id="hooks"></span><h2><a class="toc-backref" href="#id22">Hooks and environment variables</a></h2>
 <p>Normally, LLVMC executes programs from the system <tt class="docutils literal"><span class="pre">PATH</span></tt>. Sometimes,
 this is not sufficient: for example, we may want to specify tool paths
 or names in the configuration file. This can be easily achieved via
@@ -664,7 +672,7 @@ the <tt class="docutils literal"><span class="pre">case</span></tt> expression (
 </pre>
 </div>
 <div class="section" id="how-plugins-are-loaded">
-<span id="priorities"></span><h2><a class="toc-backref" href="#id19">How plugins are loaded</a></h2>
+<span id="priorities"></span><h2><a class="toc-backref" href="#id23">How plugins are loaded</a></h2>
 <p>It is possible for LLVMC plugins to depend on each other. For example,
 one can create edges between nodes defined in some other plugin. To
 make this work, however, that plugin should be loaded first. To
@@ -680,7 +688,7 @@ with 0. Therefore, the plugin with the highest priority value will be
 loaded last.</p>
 </div>
 <div class="section" id="debugging">
-<h2><a class="toc-backref" href="#id20">Debugging</a></h2>
+<h2><a class="toc-backref" href="#id24">Debugging</a></h2>
 <p>When writing LLVMC plugins, it can be useful to get a visual view of
 the resulting compilation graph. This can be achieved via the command
 line option <tt class="docutils literal"><span class="pre">--view-graph</span></tt>. This command assumes that <a class="reference external" href="http://www.graphviz.org/">Graphviz</a> and
@@ -696,7 +704,7 @@ perform any compilation tasks and returns the number of encountered
 errors as its status code.</p>
 </div>
 <div class="section" id="conditioning-on-the-executable-name">
-<h2><a class="toc-backref" href="#id21">Conditioning on the executable name</a></h2>
+<h2><a class="toc-backref" href="#id25">Conditioning on the executable name</a></h2>
 <p>For now, the executable name (the value passed to the driver in <tt class="docutils literal"><span class="pre">argv[0]</span></tt>) is
 accessible only in the C++ code (i.e. hooks). Use the following code:</p>
 <pre class="literal-block">
@@ -704,12 +712,16 @@ namespace llvmc {
 extern const char* ProgramName;
 }
 
+namespace hooks {
+
 std::string MyHook() {
 //...
 if (strcmp(ProgramName, &quot;mydriver&quot;) == 0) {
    //...
 
 }
+
+} // end namespace hooks
 </pre>
 <p>In general, you're encouraged not to make the behaviour dependent on the
 executable file name, and use command-line switches instead. See for example how
@@ -727,7 +739,7 @@ the <tt class="docutils literal"><span class="pre">Base</span></tt> plugin behav
 <a href="mailto:foldr at codedgers.com">Mikhail Glushenkov</a><br />
 <a href="http://llvm.org">LLVM Compiler Infrastructure</a><br />
 
-Last modified: $Date$
+Last modified: $Date: 2008-12-11 11:34:48 -0600 (Thu, 11 Dec 2008) $
 </address></div>
 </div>
 </div>
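[Editorial aside: the `MyHook` fragment in the hunk above is elided with `//...` in the documentation. A self-contained sketch of such a hook follows; only `llvmc::ProgramName`, the `hooks` namespace, and the `"mydriver"` comparison come from the documentation, while the return values are invented for illustration:]

```cpp
#include <cstring>
#include <string>

namespace llvmc {
// In a real plugin this symbol is provided by the LLVMC driver core;
// it is defined here only so the sketch builds standalone.
const char* ProgramName = "mydriver";
}

namespace hooks {

// Choose a behaviour based on the name the driver was invoked under
// (argv[0]), as described in the section above.
std::string MyHook() {
  if (std::strcmp(llvmc::ProgramName, "mydriver") == 0)
    return "driver-specific";
  return "default";
}

} // end namespace hooks
```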
diff --git a/libclamav/c++/llvm/docs/GettingStarted.html b/libclamav/c++/llvm/docs/GettingStarted.html
index 851bfb6..6dd32a8 100644
--- a/libclamav/c++/llvm/docs/GettingStarted.html
+++ b/libclamav/c++/llvm/docs/GettingStarted.html
@@ -252,7 +252,8 @@ software you will need.</p>
 </tr>
 <tr>
   <td>Cygwin/Win32</td>
-  <td>x86<sup><a href="#pf_1">1</a>,<a href="#pf_8">8</a></sup></td>
+  <td>x86<sup><a href="#pf_1">1</a>,<a href="#pf_8">8</a>,
+     <a href="#pf_11">11</a></sup></td>
   <td>GCC 3.4.X, binutils 2.15</td>
 </tr>
 <tr>
@@ -331,6 +332,9 @@ up</a></li>
     before any Windows-based versions such as Strawberry Perl and
     ActivePerl, as these have Windows-specifics that will cause the
     build to fail.</a></li>
+<li><a name="pf_11">In general, LLVM modules requiring dynamic linking
+    cannot be built on Windows. However, you can build LLVM tools using
+    <i>"make tools-only"</i>.</a></li>
 </ol>
 </div>
 
diff --git a/libclamav/c++/llvm/docs/LangRef.html b/libclamav/c++/llvm/docs/LangRef.html
index ab656d8..45f6f38 100644
--- a/libclamav/c++/llvm/docs/LangRef.html
+++ b/libclamav/c++/llvm/docs/LangRef.html
@@ -5,7 +5,7 @@
   <title>LLVM Assembly Language Reference Manual</title>
   <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
   <meta name="author" content="Chris Lattner">
-  <meta name="description" 
+  <meta name="description"
   content="LLVM Assembly Language Reference Manual.">
   <link rel="stylesheet" href="llvm.css" type="text/css">
 </head>
@@ -54,7 +54,7 @@
   <li><a href="#typesystem">Type System</a>
     <ol>
       <li><a href="#t_classifications">Type Classifications</a></li>
-      <li><a href="#t_primitive">Primitive Types</a>    
+      <li><a href="#t_primitive">Primitive Types</a>
         <ol>
           <li><a href="#t_integer">Integer Type</a></li>
           <li><a href="#t_floating">Floating Point Types</a></li>
@@ -291,6 +291,8 @@
             '<tt>llvm.trap</tt>' Intrinsic</a></li>
           <li><a href="#int_stackprotector">
             '<tt>llvm.stackprotector</tt>' Intrinsic</a></li>
+	  <li><a href="#int_objectsize">
+            '<tt>llvm.objectsize</tt>' Intrinsic</a></li>
         </ol>
       </li>
     </ol>
@@ -574,7 +576,7 @@ define i32 @main() {                                        <i>; i32()* </i>
       Symbols with "<tt>common</tt>" linkage are merged in the same way as
       <tt>weak symbols</tt>, and they may not be deleted if unreferenced.
       <tt>common</tt> symbols may not have an explicit section,
-      must have a zero initializer, and may not be marked '<a 
+      must have a zero initializer, and may not be marked '<a
       href="#globalvars"><tt>constant</tt></a>'.  Functions and aliases may not
       have common linkage.</dd>
 
@@ -841,7 +843,7 @@ define i32 @main() {                                        <i>; i32()* </i>
 
 <p>LLVM function declarations consist of the "<tt>declare</tt>" keyword, an
    optional <a href="#linkage">linkage type</a>, an optional
-   <a href="#visibility">visibility style</a>, an optional 
+   <a href="#visibility">visibility style</a>, an optional
    <a href="#callingconv">calling convention</a>, a return type, an optional
    <a href="#paramattrs">parameter attribute</a> for the return type, a function
    name, a possibly empty list of arguments, an optional alignment, and an
@@ -1190,7 +1192,7 @@ target datalayout = "<i>layout specification</i>"
       location.</dd>
 
   <dt><tt>p:<i>size</i>:<i>abi</i>:<i>pref</i></tt></dt>
-  <dd>This specifies the <i>size</i> of a pointer and its <i>abi</i> and 
+  <dd>This specifies the <i>size</i> of a pointer and its <i>abi</i> and
       <i>preferred</i> alignments. All sizes are in bits. Specifying
       the <i>pref</i> alignment is optional. If omitted, the
       preceding <tt>:</tt> should be omitted too.</dd>
@@ -1200,11 +1202,11 @@ target datalayout = "<i>layout specification</i>"
       <i>size</i>. The value of <i>size</i> must be in the range [1,2^23).</dd>
 
   <dt><tt>v<i>size</i>:<i>abi</i>:<i>pref</i></tt></dt>
-  <dd>This specifies the alignment for a vector type of a given bit 
+  <dd>This specifies the alignment for a vector type of a given bit
       <i>size</i>.</dd>
 
   <dt><tt>f<i>size</i>:<i>abi</i>:<i>pref</i></tt></dt>
-  <dd>This specifies the alignment for a floating point type of a given bit 
+  <dd>This specifies the alignment for a floating point type of a given bit
       <i>size</i>. The value of <i>size</i> must be either 32 (float) or 64
       (double).</dd>
 
@@ -1220,7 +1222,7 @@ target datalayout = "<i>layout specification</i>"
   <dd>This specifies a set of native integer widths for the target CPU
       in bits.  For example, it might contain "n32" for 32-bit PowerPC,
       "n32:64" for PowerPC 64, or "n8:16:32:64" for X86-64.  Elements of
-      this set are considered to support most general arithmetic 
+      this set are considered to support most general arithmetic
       operations efficiently.</dd>
 </dl>
 
@@ -1440,11 +1442,6 @@ Classifications</a> </div>
   </tr>
 </table>
 
-<p>Note that the code generator does not yet support large integer types to be
-   used as function return types. The specific limit on how large a return type
-   the code generator can currently handle is target-dependent; currently it's
-   often 64 bits for 32-bit targets and 128 bits for 64-bit targets.</p>
-
 </div>
 
 <!-- _______________________________________________________________________ -->
@@ -1583,11 +1580,6 @@ Classifications</a> </div>
    length array type. An implementation of 'pascal style arrays' in LLVM could
    use the type "<tt>{ i32, [0 x float]}</tt>", for example.</p>
 
-<p>Note that the code generator does not yet support large aggregate types to be
-   used as function return types. The specific limit on how large an aggregate
-   return type the code generator can currently handle is target-dependent, and
-   also dependent on the aggregate element types.</p>
-
 </div>
 
 <!-- _______________________________________________________________________ -->
@@ -1624,16 +1616,16 @@ Classifications</a> </div>
   </tr><tr class="layout">
     <td class="left"><tt>float&nbsp;(i16&nbsp;signext,&nbsp;i32&nbsp;*)&nbsp;*
     </tt></td>
-    <td class="left"><a href="#t_pointer">Pointer</a> to a function that takes 
-      an <tt>i16</tt> that should be sign extended and a 
-      <a href="#t_pointer">pointer</a> to <tt>i32</tt>, returning 
+    <td class="left"><a href="#t_pointer">Pointer</a> to a function that takes
+      an <tt>i16</tt> that should be sign extended and a
+      <a href="#t_pointer">pointer</a> to <tt>i32</tt>, returning
       <tt>float</tt>.
     </td>
   </tr><tr class="layout">
     <td class="left"><tt>i32 (i8*, ...)</tt></td>
-    <td class="left">A vararg function that takes at least one 
-      <a href="#t_pointer">pointer</a> to <tt>i8 </tt> (char in C), 
-      which returns an integer.  This is the signature for <tt>printf</tt> in 
+    <td class="left">A vararg function that takes at least one
+      <a href="#t_pointer">pointer</a> to <tt>i8 </tt> (char in C),
+      which returns an integer.  This is the signature for <tt>printf</tt> in
       LLVM.
     </td>
   </tr><tr class="layout">
@@ -1680,11 +1672,6 @@ Classifications</a> </div>
   </tr>
 </table>
 
-<p>Note that the code generator does not yet support large aggregate types to be
-   used as function return types. The specific limit on how large an aggregate
-   return type the code generator can currently handle is target-dependent, and
-   also dependent on the aggregate element types.</p>
-
 </div>
 
 <!-- _______________________________________________________________________ -->
@@ -1775,8 +1762,7 @@ Classifications</a> </div>
 <p>A vector type is a simple derived type that represents a vector of elements.
    Vector types are used when multiple primitive data are operated in parallel
    using a single instruction (SIMD).  A vector type requires a size (number of
-   elements) and an underlying primitive data type.  Vectors must have a power
-   of two length (1, 2, 4, 8, 16 ...).  Vector types are considered
+   elements) and an underlying primitive data type.  Vector types are considered
    <a href="#t_firstclass">first class</a>.</p>
 
 <h5>Syntax:</h5>
@@ -1803,11 +1789,6 @@ Classifications</a> </div>
   </tr>
 </table>
 
-<p>Note that the code generator does not yet support large vector types to be
-   used as function return types. The specific limit on how large a vector
-   return type codegen can currently handle is target-dependent; currently it's
-   often a few times longer than a hardware vector register.</p>
-
 </div>
 
 <!-- _______________________________________________________________________ -->
@@ -2073,9 +2054,9 @@ Unsafe:
 For example, if "%X" has a zero bit, then the output of the 'and' operation will
 always be a zero, no matter what the corresponding bit from the undef is.  As
 such, it is unsafe to optimize or assume that the result of the and is undef.
-However, it is safe to assume that all bits of the undef could be 0, and 
-optimize the and to 0.  Likewise, it is safe to assume that all the bits of 
-the undef operand to the or could be set, allowing the or to be folded to 
+However, it is safe to assume that all bits of the undef could be 0, and
+optimize the and to 0.  Likewise, it is safe to assume that all the bits of
+the undef operand to the or could be set, allowing the or to be folded to
 -1.</p>
 
 <div class="doc_code">
@@ -2105,7 +2086,7 @@ the optimizer is allowed to assume that the undef operand could be the same as
 <div class="doc_code">
 <pre>
   %A = xor undef, undef
-  
+
   %B = undef
   %C = xor %B, %B
 
@@ -2156,7 +2137,7 @@ does not execute at all.  This allows us to delete the divide and all code after
 it: since the undefined operation "can't happen", the optimizer can assume that
 it occurs in dead code.
 </p>
- 
+
 <div class="doc_code">
 <pre>
 a:  store undef -> %X
@@ -2168,7 +2149,7 @@ b: unreachable
 </div>
 
 <p>These examples reiterate the fdiv example: a store "of" an undefined value
-can be assumed to not have any effect: we can assume that the value is 
+can be assumed to not have any effect: we can assume that the value is
 overwritten with bits that happen to match what was already there.  However, a
 store "to" an undefined location could clobber arbitrary memory, therefore, it
 has undefined behavior.</p>
@@ -2185,7 +2166,7 @@ has undefined behavior.</p>
 <p>The '<tt>blockaddress</tt>' constant computes the address of the specified
    basic block in the specified function, and always has an i8* type.  Taking
    the address of the entry block is illegal.</p>
-     
+
 <p>This value only has defined behavior when used as an operand to the
    '<a href="#i_indirectbr"><tt>indirectbr</tt></a>' instruction or for comparisons
    against null.  Pointer equality tests between labels addresses is undefined
@@ -2194,7 +2175,7 @@ has undefined behavior.</p>
    pointer sized value as long as the bits are not inspected.  This allows
    <tt>ptrtoint</tt> and arithmetic to be performed on these values so long as
    the original value is reconstituted before the <tt>indirectbr</tt>.</p>
-   
+
 <p>Finally, some targets may provide defined semantics when
    using the value as the operand to an inline assembly, but that is target
    specific.
@@ -2600,14 +2581,6 @@ Instruction</a> </div>
   ret { i32, i8 } { i32 4, i8 2 } <i>; Return a struct of values 4 and 2</i>
 </pre>
 
-<p>Note that the code generator does not yet fully support large
-   return values. The specific sizes that are currently supported are
-   dependent on the target. For integers, on 32-bit targets the limit
-   is often 64 bits, and on 64-bit targets the limit is often 128 bits.
-   For aggregate types, the current limits are dependent on the element
-   types; for example targets are often limited to 2 total integer
-   elements and 2 total floating-point elements.</p>
-
 </div>
 <!-- _______________________________________________________________________ -->
 <div class="doc_subsubsection"> <a name="i_br">'<tt>br</tt>' Instruction</a> </div>
@@ -2730,7 +2703,7 @@ IfUnequal:
    rest of the arguments indicate the full set of possible destinations that the
    address may point to.  Blocks are allowed to occur multiple times in the
    destination list, though this isn't particularly useful.</p>
-   
+
 <p>This destination list is required so that dataflow analysis has an accurate
    understanding of the CFG.</p>
 
@@ -3087,7 +3060,7 @@ Instruction</a> </div>
 <p>The two arguments to the '<tt>mul</tt>' instruction must
    be <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of
    integer values.  Both arguments must have identical types.</p>
- 
+
 <h5>Semantics:</h5>
 <p>The value produced is the integer product of the two operands.</p>
 
@@ -3159,7 +3132,7 @@ Instruction</a> </div>
 <p>The '<tt>udiv</tt>' instruction returns the quotient of its two operands.</p>
 
 <h5>Arguments:</h5>
-<p>The two arguments to the '<tt>udiv</tt>' instruction must be 
+<p>The two arguments to the '<tt>udiv</tt>' instruction must be
    <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of integer
    values.  Both arguments must have identical types.</p>
 
@@ -3194,7 +3167,7 @@ Instruction</a> </div>
 <p>The '<tt>sdiv</tt>' instruction returns the quotient of its two operands.</p>
 
 <h5>Arguments:</h5>
-<p>The two arguments to the '<tt>sdiv</tt>' instruction must be 
+<p>The two arguments to the '<tt>sdiv</tt>' instruction must be
    <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of integer
    values.  Both arguments must have identical types.</p>
 
@@ -3265,7 +3238,7 @@ Instruction</a> </div>
    division of its two arguments.</p>
 
 <h5>Arguments:</h5>
-<p>The two arguments to the '<tt>urem</tt>' instruction must be 
+<p>The two arguments to the '<tt>urem</tt>' instruction must be
    <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of integer
    values.  Both arguments must have identical types.</p>
 
@@ -3305,7 +3278,7 @@ Instruction</a> </div>
    elements must be integers.</p>
 
 <h5>Arguments:</h5>
-<p>The two arguments to the '<tt>srem</tt>' instruction must be 
+<p>The two arguments to the '<tt>srem</tt>' instruction must be
    <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of integer
    values.  Both arguments must have identical types.</p>
 
@@ -3400,7 +3373,7 @@ Instruction</a> </div>
 <p>Both arguments to the '<tt>shl</tt>' instruction must be the
     same <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of
     integer type.  '<tt>op2</tt>' is treated as an unsigned value.</p>
- 
+
 <h5>Semantics:</h5>
 <p>The value produced is <tt>op1</tt> * 2<sup><tt>op2</tt></sup> mod
    2<sup>n</sup>, where <tt>n</tt> is the width of the result.  If <tt>op2</tt>
@@ -3436,7 +3409,7 @@ Instruction</a> </div>
    operand shifted to the right a specified number of bits with zero fill.</p>
 
 <h5>Arguments:</h5>
-<p>Both arguments to the '<tt>lshr</tt>' instruction must be the same 
+<p>Both arguments to the '<tt>lshr</tt>' instruction must be the same
    <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of integer
    type. '<tt>op2</tt>' is treated as an unsigned value.</p>
 
@@ -3476,7 +3449,7 @@ Instruction</a> </div>
    extension.</p>
 
 <h5>Arguments:</h5>
-<p>Both arguments to the '<tt>ashr</tt>' instruction must be the same 
+<p>Both arguments to the '<tt>ashr</tt>' instruction must be the same
    <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of integer
    type.  '<tt>op2</tt>' is treated as an unsigned value.</p>
 
@@ -3516,7 +3489,7 @@ Instruction</a> </div>
    operands.</p>
 
 <h5>Arguments:</h5>
-<p>The two arguments to the '<tt>and</tt>' instruction must be 
+<p>The two arguments to the '<tt>and</tt>' instruction must be
    <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of integer
    values.  Both arguments must have identical types.</p>
 
@@ -3575,7 +3548,7 @@ Instruction</a> </div>
    two operands.</p>
 
 <h5>Arguments:</h5>
-<p>The two arguments to the '<tt>or</tt>' instruction must be 
+<p>The two arguments to the '<tt>or</tt>' instruction must be
    <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of integer
    values.  Both arguments must have identical types.</p>
 
@@ -3638,7 +3611,7 @@ Instruction</a> </div>
    complement" operation, which is the "~" operator in C.</p>
 
 <h5>Arguments:</h5>
-<p>The two arguments to the '<tt>xor</tt>' instruction must be 
+<p>The two arguments to the '<tt>xor</tt>' instruction must be
    <a href="#t_integer">integer</a> or <a href="#t_vector">vector</a> of integer
    values.  Both arguments must have identical types.</p>
 
@@ -3686,7 +3659,7 @@ Instruction</a> </div>
 </div>
 
 <!-- ======================================================================= -->
-<div class="doc_subsection"> 
+<div class="doc_subsection">
   <a name="vectorops">Vector Operations</a>
 </div>
 
@@ -3809,20 +3782,20 @@ Instruction</a> </div>
 
 <h5>Example:</h5>
 <pre>
-  &lt;result&gt; = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; %v2, 
+  &lt;result&gt; = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; %v2,
                           &lt;4 x i32&gt; &lt;i32 0, i32 4, i32 1, i32 5&gt;  <i>; yields &lt;4 x i32&gt;</i>
-  &lt;result&gt; = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; undef, 
+  &lt;result&gt; = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; undef,
                           &lt;4 x i32&gt; &lt;i32 0, i32 1, i32 2, i32 3&gt;  <i>; yields &lt;4 x i32&gt;</i> - Identity shuffle.
-  &lt;result&gt; = shufflevector &lt;8 x i32&gt; %v1, &lt;8 x i32&gt; undef, 
+  &lt;result&gt; = shufflevector &lt;8 x i32&gt; %v1, &lt;8 x i32&gt; undef,
                           &lt;4 x i32&gt; &lt;i32 0, i32 1, i32 2, i32 3&gt;  <i>; yields &lt;4 x i32&gt;</i>
-  &lt;result&gt; = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; %v2, 
+  &lt;result&gt; = shufflevector &lt;4 x i32&gt; %v1, &lt;4 x i32&gt; %v2,
                           &lt;8 x i32&gt; &lt;i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7 &gt;  <i>; yields &lt;8 x i32&gt;</i>
 </pre>
 
 </div>
 
 <!-- ======================================================================= -->
-<div class="doc_subsection"> 
+<div class="doc_subsection">
   <a name="aggregateops">Aggregate Operations</a>
 </div>
 
@@ -3907,7 +3880,7 @@ Instruction</a> </div>
 
 
 <!-- ======================================================================= -->
-<div class="doc_subsection"> 
+<div class="doc_subsection">
   <a name="memoryops">Memory Access and Addressing Operations</a>
 </div>
 
@@ -4270,15 +4243,15 @@ entry:
 </pre>
 
 <h5>Overview:</h5>
-<p>The '<tt>zext</tt>' instruction zero extends its operand to type 
+<p>The '<tt>zext</tt>' instruction zero extends its operand to type
    <tt>ty2</tt>.</p>
 
 
 <h5>Arguments:</h5>
-<p>The '<tt>zext</tt>' instruction takes a value to cast, which must be of 
+<p>The '<tt>zext</tt>' instruction takes a value to cast, which must be of
    <a href="#t_integer">integer</a> type, and a type to cast it to, which must
    also be of <a href="#t_integer">integer</a> type. The bit size of the
-   <tt>value</tt> must be smaller than the bit size of the destination type, 
+   <tt>value</tt> must be smaller than the bit size of the destination type,
    <tt>ty2</tt>.</p>
 
 <h5>Semantics:</h5>
@@ -4310,10 +4283,10 @@ entry:
 <p>The '<tt>sext</tt>' sign extends <tt>value</tt> to the type <tt>ty2</tt>.</p>
 
 <h5>Arguments:</h5>
-<p>The '<tt>sext</tt>' instruction takes a value to cast, which must be of 
+<p>The '<tt>sext</tt>' instruction takes a value to cast, which must be of
    <a href="#t_integer">integer</a> type, and a type to cast it to, which must
    also be of <a href="#t_integer">integer</a> type.  The bit size of the
-   <tt>value</tt> must be smaller than the bit size of the destination type, 
+   <tt>value</tt> must be smaller than the bit size of the destination type,
    <tt>ty2</tt>.</p>
 
 <h5>Semantics:</h5>
@@ -4351,12 +4324,12 @@ entry:
 <p>The '<tt>fptrunc</tt>' instruction takes a <a href="#t_floating">floating
    point</a> value to cast and a <a href="#t_floating">floating point</a> type
    to cast it to. The size of <tt>value</tt> must be larger than the size of
-   <tt>ty2</tt>. This implies that <tt>fptrunc</tt> cannot be used to make a 
+   <tt>ty2</tt>. This implies that <tt>fptrunc</tt> cannot be used to make a
    <i>no-op cast</i>.</p>
 
 <h5>Semantics:</h5>
 <p>The '<tt>fptrunc</tt>' instruction truncates a <tt>value</tt> from a larger
-   <a href="#t_floating">floating point</a> type to a smaller 
+   <a href="#t_floating">floating point</a> type to a smaller
    <a href="#t_floating">floating point</a> type.  If the value cannot fit
    within the destination type, <tt>ty2</tt>, then the results are
    undefined.</p>
@@ -4385,7 +4358,7 @@ entry:
    floating point value.</p>
 
 <h5>Arguments:</h5>
-<p>The '<tt>fpext</tt>' instruction takes a 
+<p>The '<tt>fpext</tt>' instruction takes a
    <a href="#t_floating">floating point</a> <tt>value</tt> to cast, and
    a <a href="#t_floating">floating point</a> type to cast it to. The source
    type must be smaller than the destination type.</p>
@@ -4428,7 +4401,7 @@ entry:
    vector integer type with the same number of elements as <tt>ty</tt></p>
 
 <h5>Semantics:</h5>
-<p>The '<tt>fptoui</tt>' instruction converts its 
+<p>The '<tt>fptoui</tt>' instruction converts its
    <a href="#t_floating">floating point</a> operand into the nearest (rounding
    towards zero) unsigned integer value. If the value cannot fit
    in <tt>ty2</tt>, the results are undefined.</p>
@@ -4454,7 +4427,7 @@ entry:
 </pre>
 
 <h5>Overview:</h5>
-<p>The '<tt>fptosi</tt>' instruction converts 
+<p>The '<tt>fptosi</tt>' instruction converts
    <a href="#t_floating">floating point</a> <tt>value</tt> to
    type <tt>ty2</tt>.</p>
 
@@ -4466,7 +4439,7 @@ entry:
    vector integer type with the same number of elements as <tt>ty</tt></p>
 
 <h5>Semantics:</h5>
-<p>The '<tt>fptosi</tt>' instruction converts its 
+<p>The '<tt>fptosi</tt>' instruction converts its
    <a href="#t_floating">floating point</a> operand into the nearest (rounding
    towards zero) signed integer value. If the value cannot fit in <tt>ty2</tt>,
    the results are undefined.</p>
@@ -4663,7 +4636,7 @@ entry:
 <pre>
   %X = bitcast i8 255 to i8              <i>; yields i8 :-1</i>
   %Y = bitcast i32* %x to sint*          <i>; yields sint*:%x</i>
-  %Z = bitcast &lt;2 x int&gt; %V to i64;      <i>; yields i64: %V</i>   
+  %Z = bitcast &lt;2 x int&gt; %V to i64;      <i>; yields i64: %V</i>
 </pre>
 
 </div>
@@ -4723,11 +4696,11 @@ entry:
    result, as follows:</p>
 
 <ol>
-  <li><tt>eq</tt>: yields <tt>true</tt> if the operands are equal, 
+  <li><tt>eq</tt>: yields <tt>true</tt> if the operands are equal,
       <tt>false</tt> otherwise. No sign interpretation is necessary or
       performed.</li>
 
-  <li><tt>ne</tt>: yields <tt>true</tt> if the operands are unequal, 
+  <li><tt>ne</tt>: yields <tt>true</tt> if the operands are unequal,
       <tt>false</tt> otherwise. No sign interpretation is necessary or
       performed.</li>
 
@@ -4844,42 +4817,42 @@ entry:
 <ol>
   <li><tt>false</tt>: always yields <tt>false</tt>, regardless of operands.</li>
 
-  <li><tt>oeq</tt>: yields <tt>true</tt> if both operands are not a QNAN and 
+  <li><tt>oeq</tt>: yields <tt>true</tt> if both operands are not a QNAN and
       <tt>op1</tt> is equal to <tt>op2</tt>.</li>
 
   <li><tt>ogt</tt>: yields <tt>true</tt> if both operands are not a QNAN and
      <tt>op1</tt> is greater than <tt>op2</tt>.</li>
 
-  <li><tt>oge</tt>: yields <tt>true</tt> if both operands are not a QNAN and 
+  <li><tt>oge</tt>: yields <tt>true</tt> if both operands are not a QNAN and
       <tt>op1</tt> is greater than or equal to <tt>op2</tt>.</li>
 
-  <li><tt>olt</tt>: yields <tt>true</tt> if both operands are not a QNAN and 
+  <li><tt>olt</tt>: yields <tt>true</tt> if both operands are not a QNAN and
       <tt>op1</tt> is less than <tt>op2</tt>.</li>
 
-  <li><tt>ole</tt>: yields <tt>true</tt> if both operands are not a QNAN and 
+  <li><tt>ole</tt>: yields <tt>true</tt> if both operands are not a QNAN and
       <tt>op1</tt> is less than or equal to <tt>op2</tt>.</li>
 
-  <li><tt>one</tt>: yields <tt>true</tt> if both operands are not a QNAN and 
+  <li><tt>one</tt>: yields <tt>true</tt> if both operands are not a QNAN and
       <tt>op1</tt> is not equal to <tt>op2</tt>.</li>
 
   <li><tt>ord</tt>: yields <tt>true</tt> if both operands are not a QNAN.</li>
 
-  <li><tt>ueq</tt>: yields <tt>true</tt> if either operand is a QNAN or 
+  <li><tt>ueq</tt>: yields <tt>true</tt> if either operand is a QNAN or
       <tt>op1</tt> is equal to <tt>op2</tt>.</li>
 
-  <li><tt>ugt</tt>: yields <tt>true</tt> if either operand is a QNAN or 
+  <li><tt>ugt</tt>: yields <tt>true</tt> if either operand is a QNAN or
       <tt>op1</tt> is greater than <tt>op2</tt>.</li>
 
-  <li><tt>uge</tt>: yields <tt>true</tt> if either operand is a QNAN or 
+  <li><tt>uge</tt>: yields <tt>true</tt> if either operand is a QNAN or
       <tt>op1</tt> is greater than or equal to <tt>op2</tt>.</li>
 
-  <li><tt>ult</tt>: yields <tt>true</tt> if either operand is a QNAN or 
+  <li><tt>ult</tt>: yields <tt>true</tt> if either operand is a QNAN or
       <tt>op1</tt> is less than <tt>op2</tt>.</li>
 
-  <li><tt>ule</tt>: yields <tt>true</tt> if either operand is a QNAN or 
+  <li><tt>ule</tt>: yields <tt>true</tt> if either operand is a QNAN or
       <tt>op1</tt> is less than or equal to <tt>op2</tt>.</li>
 
-  <li><tt>une</tt>: yields <tt>true</tt> if either operand is a QNAN or 
+  <li><tt>une</tt>: yields <tt>true</tt> if either operand is a QNAN or
       <tt>op1</tt> is not equal to <tt>op2</tt>.</li>
 
   <li><tt>uno</tt>: yields <tt>true</tt> if either operand is a QNAN.</li>
@@ -5171,7 +5144,7 @@ freestanding environments and non-C-based langauges.</p>
    suffix is required. Because the argument's type is matched against the return
    type, it does not require its own name suffix.</p>
 
-<p>To learn how to add an intrinsic function, please see the 
+<p>To learn how to add an intrinsic function, please see the
    <a href="ExtendingLLVM.html">Extending LLVM Guide</a>.</p>
 
 </div>
@@ -6606,11 +6579,11 @@ LLVM</a>.</p>
 <ul>
   <li><tt>ll</tt>: All loads before the barrier must complete before any load
       after the barrier begins.</li>
-  <li><tt>ls</tt>: All loads before the barrier must complete before any 
+  <li><tt>ls</tt>: All loads before the barrier must complete before any
       store after the barrier begins.</li>
-  <li><tt>ss</tt>: All stores before the barrier must complete before any 
+  <li><tt>ss</tt>: All stores before the barrier must complete before any
       store after the barrier begins.</li>
-  <li><tt>sl</tt>: All stores before the barrier must complete before any 
+  <li><tt>sl</tt>: All stores before the barrier must complete before any
       load after the barrier begins.</li>
 </ul>
 
@@ -6823,7 +6796,7 @@ LLVM</a>.</p>
 </pre>
 
 <h5>Overview:</h5>
-<p>This intrinsic subtracts <tt>delta</tt> to the value stored in memory at 
+<p>This intrinsic subtracts <tt>delta</tt> from the value stored in memory at
    <tt>ptr</tt>. It yields the original value at <tt>ptr</tt>.</p>
 
 <h5>Arguments:</h5>
@@ -6979,7 +6952,7 @@ LLVM</a>.</p>
 </pre>
 
 <h5>Overview:</h5>
-<p>These intrinsics takes the signed or unsigned minimum or maximum of 
+<p>These intrinsics take the signed or unsigned minimum or maximum of
    <tt>delta</tt> and the value stored in memory at <tt>ptr</tt>. It yields the
    original value at <tt>ptr</tt>.</p>
 
@@ -7275,6 +7248,61 @@ LLVM</a>.</p>
 
 </div>
 
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection">
+  <a name="int_objectsize">'<tt>llvm.objectsize</tt>' Intrinsic</a>
+</div>
+
+<div class="doc_text">
+
+<h5>Syntax:</h5>
+<pre>
+  declare i32 @llvm.objectsize.i32( i8* &lt;ptr&gt;, i32 &lt;type&gt; )
+  declare i64 @llvm.objectsize.i64( i8* &lt;ptr&gt;, i32 &lt;type&gt; )
+</pre>
+
+<h5>Overview:</h5>
+<p>The <tt>llvm.objectsize</tt> intrinsic is designed to provide information
+   to the optimizers to determine at compile time either a) whether an
+   operation like memcpy will overflow a buffer that corresponds to an
+   object, or b) that a runtime check for overflow isn't necessary. An
+   object in this context means an allocation of a specific
+   <a href="#typesystem">type</a>.</p>
+
+<h5>Arguments:</h5>
+<p>The <tt>llvm.objectsize</tt> intrinsic takes two arguments.  The first
+   argument is a pointer to the object, <tt>ptr</tt>. The second argument
+   is an integer, <tt>type</tt>, which ranges from 0 to 3. The low bit of
+   <tt>type</tt> selects whether the return value is based on the whole
+   object or only the partial object addressed by the pointer, and the
+   high bit selects whether the maximum or the minimum number of remaining
+   bytes is returned.</p>
+<table class="layout">
+  <tr class="layout">
+    <td class="left"><tt>00</tt></td>
+    <td class="left">whole object, maximum number of bytes</td>
+  </tr>
+  <tr class="layout">
+    <td class="left"><tt>01</tt></td>
+    <td class="left">partial object, maximum number of bytes</td>
+  </tr>
+  <tr class="layout">
+    <td class="left"><tt>10</tt></td>
+    <td class="left">whole object, minimum number of bytes</td>
+  </tr>
+  <tr class="layout">
+    <td class="left"><tt>11</tt></td>
+    <td class="left">partial object, minimum number of bytes</td>
+  </tr>
+</table>
+
+<h5>Semantics:</h5>
+<p>The <tt>llvm.objectsize</tt> intrinsic is lowered either to a constant
+   representing the size of the object concerned, or to <tt>i32/i64 -1 or 0</tt>
+   (depending on the <tt>type</tt> argument) if the size cannot be determined
+   at compile time.</p>
+
+</div>
+
 <!-- *********************************************************************** -->
 <hr>
 <address>
diff --git a/libclamav/c++/llvm/docs/SourceLevelDebugging.html b/libclamav/c++/llvm/docs/SourceLevelDebugging.html
index c405575..05a99e3 100644
--- a/libclamav/c++/llvm/docs/SourceLevelDebugging.html
+++ b/libclamav/c++/llvm/docs/SourceLevelDebugging.html
@@ -780,18 +780,18 @@ DW_TAG_return_variable = 258
 </div>
 
 <div class="doc_text">
-<p>In many languages, the local variables in functions can have their lifetime
-   or scope limited to a subset of a function.  In the C family of languages,
+<p>In many languages, the local variables in functions can have their lifetimes
+   or scopes limited to a subset of a function.  In the C family of languages,
    for example, variables are only live (readable and writable) within the
    source block that they are defined in.  In functional languages, values are
    only readable after they have been defined.  Though this is a very obvious
-   concept, it is also non-trivial to model in LLVM, because it has no notion of
+   concept, it is non-trivial to model in LLVM, because it has no notion of
    scoping in this sense, and does not want to be tied to a language's scoping
    rules.</p>
 
-<p>In order to handle this, the LLVM debug format uses the metadata attached
-   with llvm instructions to encode line nuber and scoping information.
-   Consider the following C fragment, for example:</p>
+<p>In order to handle this, the LLVM debug format uses the metadata attached to
+   LLVM instructions to encode line number and scoping information. Consider the
+   following C fragment, for example:</p>
 
 <div class="doc_code">
 <pre>
@@ -811,25 +811,25 @@ DW_TAG_return_variable = 258
 
 <div class="doc_code">
 <pre>
-nounwind ssp {
+define void @foo() nounwind ssp {
 entry:
-  %X = alloca i32, align 4                        ; <i32*> [#uses=4]
-  %Y = alloca i32, align 4                        ; <i32*> [#uses=4]
-  %Z = alloca i32, align 4                        ; <i32*> [#uses=3]
-  %0 = bitcast i32* %X to { }*                    ; <{ }*> [#uses=1]
+  %X = alloca i32, align 4                        ; &lt;i32*&gt; [#uses=4]
+  %Y = alloca i32, align 4                        ; &lt;i32*&gt; [#uses=4]
+  %Z = alloca i32, align 4                        ; &lt;i32*&gt; [#uses=3]
+  %0 = bitcast i32* %X to { }*                    ; &lt;{ }*&gt; [#uses=1]
   call void @llvm.dbg.declare({ }* %0, metadata !0), !dbg !7
   store i32 21, i32* %X, !dbg !8
-  %1 = bitcast i32* %Y to { }*                    ; <{ }*> [#uses=1]
+  %1 = bitcast i32* %Y to { }*                    ; &lt;{ }*&gt; [#uses=1]
   call void @llvm.dbg.declare({ }* %1, metadata !9), !dbg !10
   store i32 22, i32* %Y, !dbg !11
-  %2 = bitcast i32* %Z to { }*                    ; <{ }*> [#uses=1]
+  %2 = bitcast i32* %Z to { }*                    ; &lt;{ }*&gt; [#uses=1]
   call void @llvm.dbg.declare({ }* %2, metadata !12), !dbg !14
   store i32 23, i32* %Z, !dbg !15
-  %tmp = load i32* %X, !dbg !16                   ; <i32> [#uses=1]
-  %tmp1 = load i32* %Y, !dbg !16                  ; <i32> [#uses=1]
-  %add = add nsw i32 %tmp, %tmp1, !dbg !16        ; <i32> [#uses=1]
+  %tmp = load i32* %X, !dbg !16                   ; &lt;i32&gt; [#uses=1]
+  %tmp1 = load i32* %Y, !dbg !16                  ; &lt;i32&gt; [#uses=1]
+  %add = add nsw i32 %tmp, %tmp1, !dbg !16        ; &lt;i32&gt; [#uses=1]
   store i32 %add, i32* %Z, !dbg !16
-  %tmp2 = load i32* %Y, !dbg !17                  ; <i32> [#uses=1]
+  %tmp2 = load i32* %Y, !dbg !17                  ; &lt;i32&gt; [#uses=1]
   store i32 %tmp2, i32* %X, !dbg !17
   ret void, !dbg !18
 }
@@ -867,68 +867,74 @@ declare void @llvm.dbg.declare({ }*, metadata) nounwind readnone
 </pre>
 </div>
 
-<p>This example illustrates a few important details about the LLVM debugging
-   information.  In particular, it shows how the llvm.dbg.declare intrinsic
-   and location information, attached with an instruction, are applied
-   together to allow a debugger to analyze the relationship between statements,
-   variable definitions, and the code used to implement the function.</p>
+<p>This example illustrates a few important details about LLVM debugging
+   information. In particular, it shows how the <tt>llvm.dbg.declare</tt>
+   intrinsic and location information, which are attached to an instruction,
+   are applied together to allow a debugger to analyze the relationship between
+   statements, variable definitions, and the code used to implement the
+   function.</p>
 
-   <div class="doc_code">
-   <pre> 
-     call void @llvm.dbg.declare({ }* %0, metadata !0), !dbg !7   
-   </pre>
-   </div>
-<p>This first intrinsic 
+<div class="doc_code">
+<pre>
+call void @llvm.dbg.declare({ }* %0, metadata !0), !dbg !7   
+</pre>
+</div>
+
+<p>The first intrinsic
    <tt>%<a href="#format_common_declare">llvm.dbg.declare</a></tt>
-   encodes debugging information for variable <tt>X</tt>. The metadata, 
-   <tt>!dbg !7</tt> attached with the intrinsic provides scope information for 
-   the variable <tt>X</tt>. </p>
-   <div class="doc_code">
-   <pre>
-     !7 = metadata !{i32 2, i32 7, metadata !1, null}
-     !1 = metadata !{i32 458763, metadata !2}; [DW_TAG_lexical_block ]
-     !2 = metadata !{i32 458798, i32 0, metadata !3, metadata !"foo", 
-                     metadata !"foo", metadata !"foo", metadata !3, i32 1, 
-                     metadata !4, i1 false, i1 true}; [DW_TAG_subprogram ]   
-   </pre>
-   </div>
-
-<p> Here <tt>!7</tt> is a metadata providing location information. It has four
-   fields : line number, column number, scope and original scope. The original
-   scope represents inline location if this instruction is inlined inside
-   a caller. It is null otherwise. In this example scope is encoded by 
+   encodes debugging information for the variable <tt>X</tt>. The metadata
+   <tt>!dbg !7</tt> attached to the intrinsic provides scope information for the
+   variable <tt>X</tt>.</p>
+
+<div class="doc_code">
+<pre>
+!7 = metadata !{i32 2, i32 7, metadata !1, null}
+!1 = metadata !{i32 458763, metadata !2}; [DW_TAG_lexical_block ]
+!2 = metadata !{i32 458798, i32 0, metadata !3, metadata !"foo", 
+                metadata !"foo", metadata !"foo", metadata !3, i32 1, 
+                metadata !4, i1 false, i1 true}; [DW_TAG_subprogram ]   
+</pre>
+</div>
+
+<p>Here <tt>!7</tt> is metadata providing location information. It has four
+   fields: line number, column number, scope, and original scope. The original
+   scope represents inline location if this instruction is inlined inside a
+   caller, and is null otherwise. In this example, scope is encoded by
    <tt>!1</tt>. <tt>!1</tt> represents a lexical block inside the scope
    <tt>!2</tt>, where <tt>!2</tt> is a
-   <a href="#format_subprograms">subprogram descriptor</a>. 
-   This way the location information attched with the intrinsics indicates
-   that the variable <tt>X</tt> is declared at line number 2 at a function level
-   scope in function <tt>foo</tt>.</p>
+   <a href="#format_subprograms">subprogram descriptor</a>. This way the
+   location information attached to the intrinsics indicates that the
+   variable <tt>X</tt> is declared at line number 2 at a function level scope in
+   function <tt>foo</tt>.</p>
 
 <p>Now let's take another example.</p>
 
-   <div class="doc_code">
-   <pre> 
-     call void @llvm.dbg.declare({ }* %2, metadata !12), !dbg !14
-   </pre>
-   </div>
-<p>This intrinsic 
+<div class="doc_code">
+<pre>
+call void @llvm.dbg.declare({ }* %2, metadata !12), !dbg !14
+</pre>
+</div>
+
+<p>The second intrinsic
    <tt>%<a href="#format_common_declare">llvm.dbg.declare</a></tt>
-   encodes debugging information for variable <tt>Z</tt>. The metadata, 
-   <tt>!dbg !14</tt> attached with the intrinsic provides scope information for 
-   the variable <tt>Z</tt>. </p>
-   <div class="doc_code">
-   <pre>
-     !13 = metadata !{i32 458763, metadata !1}; [DW_TAG_lexical_block ]
-     !14 = metadata !{i32 5, i32 9, metadata !13, null}
-   </pre>
-   </div>
-
-<p> Here <tt>!14</tt> indicates that <tt>Z</tt> is declaread at line number 5,
-   column number 9 inside a lexical scope <tt>!13</tt>. This lexical scope
-   itself resides inside lexcial scope <tt>!1</tt> described above.</p>
-
-<p>The scope information attached with each instruction provides a straight
-   forward way to find instructions covered by a scope. </p>
+   encodes debugging information for the variable <tt>Z</tt>. The metadata
+   <tt>!dbg !14</tt> attached to the intrinsic provides scope information for
+   the variable <tt>Z</tt>.</p>
+
+<div class="doc_code">
+<pre>
+!13 = metadata !{i32 458763, metadata !1}; [DW_TAG_lexical_block ]
+!14 = metadata !{i32 5, i32 9, metadata !13, null}
+</pre>
+</div>
+
+<p>Here <tt>!14</tt> indicates that <tt>Z</tt> is declared at line number 5 and
+   column number 9 inside lexical scope <tt>!13</tt>. This lexical scope
+   itself resides inside lexical scope <tt>!1</tt> described above.</p>
+
+<p>The scope information attached to each instruction provides a
+   straightforward way to find the instructions covered by a scope.</p>
+
 </div>
 
 <!-- *********************************************************************** -->
diff --git a/libclamav/c++/llvm/docs/tutorial/JITTutorial1.html b/libclamav/c++/llvm/docs/tutorial/JITTutorial1.html
deleted file mode 100644
index 3b7b8de..0000000
--- a/libclamav/c++/llvm/docs/tutorial/JITTutorial1.html
+++ /dev/null
@@ -1,207 +0,0 @@
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
-                      "http://www.w3.org/TR/html4/strict.dtd">
-
-<html>
-<head>
-  <title>LLVM Tutorial 1: A First Function</title>
-  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
-  <meta name="author" content="Owen Anderson">
-  <meta name="description" 
-  content="LLVM Tutorial 1: A First Function.">
-  <link rel="stylesheet" href="../llvm.css" type="text/css">
-</head>
-
-<body>
-
-<div class="doc_title"> LLVM Tutorial 1: A First Function </div>
-
-<div class="doc_author">
-  <p>Written by <a href="mailto:owen at apple.com">Owen Anderson</a></p>
-</div>
-
-<!-- *********************************************************************** -->
-<div class="doc_section"><a name="intro">A First Function</a></div>
-<!-- *********************************************************************** -->
-
-<div class="doc_text">
-
-<p>For starters, let's consider a relatively straightforward function that takes three integer parameters and returns an arithmetic combination of them.  This is nice and simple, especially since it involves no control flow:</p>
-
-<div class="doc_code">
-<pre>
-int mul_add(int x, int y, int z) {
-  return x * y + z;
-}
-</pre>
-</div>
-
-<p>As a preview, the LLVM IR we’re going to end up generating for this function will look like:</p>
-
-<div class="doc_code">
-<pre>
-define i32 @mul_add(i32 %x, i32 %y, i32 %z) {
-entry:
-  %tmp = mul i32 %x, %y
-  %tmp2 = add i32 %tmp, %z
-  ret i32 %tmp2
-}
-</pre>
-</div>
-
-<p>If you're unsure what the above code says, skim through the <a href="../LangRef.html">LLVM Language Reference Manual</a> and convince yourself that the above LLVM IR is actually equivalent to the original function.  Once you’re satisfied with that, let's move on to actually generating it programmatically!</p>
-
-<p>Of course, before we can start, we need to <code>#include</code> the appropriate LLVM header files:</p>
-
-<div class="doc_code">
-<pre>
-#include "llvm/Module.h"
-#include "llvm/Function.h"
-#include "llvm/PassManager.h"
-#include "llvm/CallingConv.h"
-#include "llvm/Analysis/Verifier.h"
-#include "llvm/Assembly/PrintModulePass.h"
-#include "llvm/Support/IRBuilder.h"
-#include "llvm/Support/raw_ostream.h"
-</pre>
-</div>
-
-<p>Now, let's get started on our real program.  Here's what our basic <code>main()</code> will look like:</p>
-
-<div class="doc_code">
-<pre>
-using namespace llvm;
-
-Module* makeLLVMModule();
-
-int main(int argc, char**argv) {
-  Module* Mod = makeLLVMModule();
-
-  verifyModule(*Mod, PrintMessageAction);
-
-  PassManager PM;
-  PM.add(createPrintModulePass(&amp;outs()));
-  PM.run(*Mod);
-
-  delete Mod;
-  return 0;
-}
-</pre>
-</div>
-
-<p>The first segment is pretty simple: it creates an LLVM “module.”  In LLVM, a module represents a single unit of code that is to be processed together.  A module contains things like global variables, function declarations, and implementations.  Here we’ve declared a <code>makeLLVMModule()</code> function to do the real work of creating the module.  Don’t worry, we’ll be looking at that one next!</p>
-
-<p>The second segment runs the LLVM module verifier on our newly created module.  While this probably isn’t really necessary for a simple module like this one, it's always a good idea, especially if you’re generating LLVM IR based on some input.  The verifier will print an error message if your LLVM module is malformed in any way.</p>
-
-<p>Finally, we instantiate an LLVM <code>PassManager</code> and run
-the <code>PrintModulePass</code> on our module.  LLVM uses an explicit pass
-infrastructure to manage optimizations and various other things.
-A <code>PassManager</code>, as should be obvious from its name, manages passes:
-it is responsible for scheduling them, invoking them, and ensuring the proper
-disposal after we’re done with them.  For this example, we’re just using a
-trivial pass that prints out our module in textual form.</p>
-
-<p>Now onto the interesting part: creating and populating a module.  Here's the
-first chunk of our <code>makeLLVMModule()</code>:</p>
-
-<div class="doc_code">
-<pre>
-Module* makeLLVMModule() {
-  // Module Construction
-  Module* mod = new Module("test", getGlobalContext());
-</pre>
-</div>
-
-<p>Exciting, isn’t it!?  All we’re doing here is instantiating a module and giving it a name.  The name isn’t particularly important unless you’re going to be dealing with multiple modules at once.</p>
-
-<div class="doc_code">
-<pre>
-  Constant* c = mod-&gt;getOrInsertFunction("mul_add",
-  /*ret type*/                           IntegerType::get(32),
-  /*args*/                               IntegerType::get(32),
-                                         IntegerType::get(32),
-                                         IntegerType::get(32),
-  /*varargs terminated with null*/       NULL);
-  
-  Function* mul_add = cast&lt;Function&gt;(c);
-  mul_add-&gt;setCallingConv(CallingConv::C);
-</pre>
-</div>
-
-<p>We construct our <code>Function</code> by calling <code>getOrInsertFunction()</code> on our module, passing in the name, return type, and argument types of the function.  In the case of our <code>mul_add</code> function, that means one 32-bit integer for the return value and three 32-bit integers for the arguments.</p>
-
-<p>You'll notice that <code>getOrInsertFunction()</code> doesn't actually return a <code>Function*</code>.  This is because <code>getOrInsertFunction()</code> will return a cast of the existing function if the function already existed with a different prototype.  Since we know that there's not already a <code>mul_add</code> function, we can safely just cast <code>c</code> to a <code>Function*</code>.
-  
-<p>In addition, we set the calling convention for our new function to be the C
-calling convention.  This isn’t strictly necessary, but it ensures that our new
-function will interoperate properly with C code, which is a good thing.</p>
-
-<div class="doc_code">
-<pre>
-  Function::arg_iterator args = mul_add-&gt;arg_begin();
-  Value* x = args++;
-  x-&gt;setName("x");
-  Value* y = args++;
-  y-&gt;setName("y");
-  Value* z = args++;
-  z-&gt;setName("z");
-</pre>
-</div>
-
-<p>While we’re setting up our function, let's also give names to the parameters.  This isn’t strictly necessary (LLVM will generate names for them if you don’t specify them), but it’ll make looking at our output somewhat more pleasant.  To name the parameters, we iterate over the arguments of our function and call <code>setName()</code> on them.  We’ll also keep the pointers to <code>x</code>, <code>y</code>, and <code>z</code> around, since we’ll need them when we get around to creating instructions.</p>
-
-<p>Great!  We have a function now.  But what good is a function if it has no body?  Before we start working on a body for our new function, we need to recall some details of the LLVM IR.  The IR, being an abstract assembly language, represents control flow using jumps (we call them branches), both conditional and unconditional.  The straight-line sequences of code between branches are called basic blocks, or just blocks.  To create a body for our function, we fill it with blocks:</p>
-
-<div class="doc_code">
-<pre>
-  BasicBlock* block = BasicBlock::Create(getGlobalContext(), "entry", mul_add);
-  IRBuilder&lt;&gt; builder(block);
-</pre>
-</div>
-
-<p>We create a new basic block, as you might expect, by calling its constructor.  All we need to tell it is its name and the function to which it belongs.  In addition, we’re creating an <code>IRBuilder</code> object, which is a convenience interface for creating instructions and appending them to the end of a block.  Instructions can be created through their constructors as well, but some of their interfaces are quite complicated.  Unless you need a lot of control, using <code>IRBuilder</code> will make your life simpler.</p>
-
-<div class="doc_code">
-<pre>
-  Value* tmp = builder.CreateBinOp(Instruction::Mul,
-                                   x, y, "tmp");
-  Value* tmp2 = builder.CreateBinOp(Instruction::Add,
-                                    tmp, z, "tmp2");
-
-  builder.CreateRet(tmp2);
-  
-  return mod;
-}
-</pre>
-</div>
-
-<p>The final step in creating our function is to create the instructions that make it up.  Our <code>mul_add</code> function is composed of just three instructions: a multiply, an add, and a return.  <code>IRBuilder</code> gives us a simple interface for constructing these instructions and appending them to the “entry” block.  Each of the calls to <code>IRBuilder</code> returns a <code>Value*</code> that represents the value yielded by the instruction.  You’ll also notice that, above, <code>x</code>, <code>y</code>, and <code>z</code> are also <code>Value*</code>'s, so it's clear that instructions operate on <code>Value*</code>'s.</p>
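For reference, the function we have just assembled is the IR analogue of a trivial C++ function. This is a sanity-checking sketch only (not part of the tutorial's build); the local names deliberately mirror the <code>tmp</code>/<code>tmp2</code> instruction names above:

```cpp
#include <cassert>

// Plain C++ analogue of the IR we just built: one multiply, one add, one return.
int mul_add(int x, int y, int z) {
  int tmp = x * y;    // builder.CreateBinOp(Instruction::Mul, x, y, "tmp")
  int tmp2 = tmp + z; // builder.CreateBinOp(Instruction::Add, tmp, z, "tmp2")
  return tmp2;        // builder.CreateRet(tmp2)
}
```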
-
-<p>And that's it!  Now you can compile and run your code, and get a wonderful textual print out of the LLVM IR we saw at the beginning.  To compile, use the following command line as a guide:</p>
-
-<div class="doc_code">
-<pre>
-# c++ -g tut1.cpp `llvm-config --cxxflags --ldflags --libs core` -o tut1
-# ./tut1
-</pre>
-</div>
-
-<p>The <code>llvm-config</code> utility is used to obtain the necessary GCC-compatible compiler flags for linking with LLVM.  For this example, we only need the 'core' library.  We'll use others once we start adding optimizers and the JIT engine.</p>
-
-<a href="JITTutorial2.html">Next: A More Complicated Function</a>
-</div>
-
-<!-- *********************************************************************** -->
-<hr>
-<address>
-  <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
-  src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
-  <a href="http://validator.w3.org/check/referer"><img
-  src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>
-
-  <a href="mailto:owen at apple.com">Owen Anderson</a><br>
-  <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2009-07-21 11:05:13 -0700 (Tue, 21 Jul 2009) $
-</address>
-
-</body>
-</html>
diff --git a/libclamav/c++/llvm/docs/tutorial/JITTutorial2-1.png b/libclamav/c++/llvm/docs/tutorial/JITTutorial2-1.png
deleted file mode 100644
index eb21695..0000000
Binary files a/libclamav/c++/llvm/docs/tutorial/JITTutorial2-1.png and /dev/null differ
diff --git a/libclamav/c++/llvm/docs/tutorial/JITTutorial2.html b/libclamav/c++/llvm/docs/tutorial/JITTutorial2.html
deleted file mode 100644
index 504d965..0000000
--- a/libclamav/c++/llvm/docs/tutorial/JITTutorial2.html
+++ /dev/null
@@ -1,200 +0,0 @@
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
-                      "http://www.w3.org/TR/html4/strict.dtd">
-
-<html>
-<head>
-  <title>LLVM Tutorial 2: A More Complicated Function</title>
-  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
-  <meta name="author" content="Owen Anderson">
-  <meta name="description" 
-  content="LLVM Tutorial 2: A More Complicated Function.">
-  <link rel="stylesheet" href="../llvm.css" type="text/css">
-</head>
-
-<body>
-
-<div class="doc_title"> LLVM Tutorial 2: A More Complicated Function </div>
-
-<div class="doc_author">
-  <p>Written by <a href="mailto:owen at apple.com">Owen Anderson</a></p>
-</div>
-
-<!-- *********************************************************************** -->
-<div class="doc_section"><a name="intro">A More Complicated Function</a></div>
-<!-- *********************************************************************** -->
-
-<div class="doc_text">
-
-<p>Now that we understand the basics of creating functions in LLVM, let's move on to a more complicated example: something with control flow.  As an example, let's consider Euclid's Greatest Common Divisor (GCD) algorithm:</p>
-
-<div class="doc_code">
-<pre>
-unsigned gcd(unsigned x, unsigned y) {
-  if(x == y) {
-    return x;
-  } else if(x &lt; y) {
-    return gcd(x, y - x);
-  } else {
-    return gcd(x - y, y);
-  }
-}
-</pre>
-</div>
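Before lowering this to IR, it is worth convincing yourself that the recursion really computes the GCD. A direct C++ mirror of the source above (a sketch only; like the original, it assumes neither input is zero):

```cpp
#include <cassert>

// Subtracting the smaller value from the larger preserves the GCD,
// and the recursion terminates once x == y.
unsigned gcd(unsigned x, unsigned y) {
  if (x == y)
    return x;
  else if (x < y)
    return gcd(x, y - x);
  else
    return gcd(x - y, y);
}
```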
-
-<p>With this example, we'll learn how to create functions with multiple blocks and control flow, and how to make function calls within your LLVM code.  For starters, consider the diagram below.</p>
-
-<div style="text-align: center;"><img src="JITTutorial2-1.png" alt="GCD CFG" width="60%"></div>
-
-<p>This is a graphical representation of a program in LLVM IR.  It places each basic block on a node of a graph and uses directed edges to indicate control flow.  These blocks will be serialized when written to a text or bitcode file, but it is often useful conceptually to think of them as a graph.  Again, if you are unsure about the code in the diagram, you should skim through the <a href="../LangRef.html">LLVM Language Reference Manual</a> and convince yourself that it is, in fact, the GCD algorithm.</p>
-
-<p>The first part of our code is practically the same as from the first tutorial.  The same basic setup is required: creating a module, verifying it, and running the <code>PrintModulePass</code> on it.  Even the first segment of  <code>makeLLVMModule()</code> looks essentially the same, except that <code>gcd</code> takes one fewer parameter than <code>mul_add</code>.</p>
-
-<div class="doc_code">
-<pre>
-#include "llvm/Module.h"
-#include "llvm/Function.h"
-#include "llvm/PassManager.h"
-#include "llvm/Analysis/Verifier.h"
-#include "llvm/Assembly/PrintModulePass.h"
-#include "llvm/Support/IRBuilder.h"
-#include "llvm/Support/raw_ostream.h"
-
-using namespace llvm;
-
-Module* makeLLVMModule();
-
-int main(int argc, char**argv) {
-  Module* Mod = makeLLVMModule();
-  
-  verifyModule(*Mod, PrintMessageAction);
-  
-  PassManager PM;
-  PM.add(createPrintModulePass(&amp;outs()));
-  PM.run(*Mod);
-
-  delete Mod;  
-  return 0;
-}
-
-Module* makeLLVMModule() {
-  Module* mod = new Module(&quot;tut2&quot;, getGlobalContext());
-  
-  Constant* c = mod-&gt;getOrInsertFunction(&quot;gcd&quot;,
-                                         IntegerType::get(getGlobalContext(), 32),
-                                         IntegerType::get(getGlobalContext(), 32),
-                                         IntegerType::get(getGlobalContext(), 32),
-                                         NULL);
-  Function* gcd = cast&lt;Function&gt;(c);
-  
-  Function::arg_iterator args = gcd-&gt;arg_begin();
-  Value* x = args++;
-  x-&gt;setName(&quot;x&quot;);
-  Value* y = args++;
-  y-&gt;setName(&quot;y&quot;);
-</pre>
-</div>
-
-<p>Here, however, is where our code begins to diverge from the first tutorial.  Because <code>gcd</code> has control flow, it is composed of multiple blocks interconnected by branching (<code>br</code>) instructions.  For those familiar with assembly language, a block is similar to a labeled set of instructions.  For those not familiar with assembly language, a block is basically a set of instructions that can be branched to and is executed linearly until the block is terminated by one of a small number of control flow instructions, such as <code>br</code> or <code>ret</code>.</p>
-
-<p>Blocks correspond to the nodes in the diagram we looked at in the beginning of this tutorial.  From the diagram, we can see that this function contains five blocks, so we'll go ahead and create them.  Note that we're making use of LLVM's automatic name uniquing in this code sample, since we're giving two blocks the same name.</p>
-
-<div class="doc_code">
-<pre>
-  BasicBlock* entry = BasicBlock::Create(getGlobalContext(), &quot;entry&quot;, gcd);
-  BasicBlock* ret = BasicBlock::Create(getGlobalContext(), &quot;return&quot;, gcd);
-  BasicBlock* cond_false = BasicBlock::Create(getGlobalContext(), &quot;cond_false&quot;, gcd);
-  BasicBlock* cond_true = BasicBlock::Create(getGlobalContext(), &quot;cond_true&quot;, gcd);
-  BasicBlock* cond_false_2 = BasicBlock::Create(getGlobalContext(), &quot;cond_false&quot;, gcd);
-</pre>
-</div>
-
-<p>Now we're ready to begin generating code!  We'll start with the <code>entry</code> block.  This block corresponds to the top-level if-statement in the original C code, so we need to compare <code>x</code> and <code>y</code>.  To achieve this, we perform an explicit comparison using <code>ICmpEQ</code>.  <code>ICmpEQ</code> stands for an <em>integer comparison for equality</em> and returns a 1-bit integer result.  This 1-bit result is then used as the input to a conditional branch, with <code>ret</code> as the <code>true</code> and <code>cond_false</code> as the <code>false</code> case.</p>
-
-<div class="doc_code">
-<pre>
-  IRBuilder&lt;&gt; builder(entry);
-  Value* xEqualsY = builder.CreateICmpEQ(x, y, &quot;tmp&quot;);
-  builder.CreateCondBr(xEqualsY, ret, cond_false);
-</pre>
-</div>
-
-<p>Our next block, <code>ret</code>, is pretty simple: it just returns the value of <code>x</code>.  Recall that this block is only reached if <code>x == y</code>, so this is the correct behavior.  Notice that instead of creating a new <code>IRBuilder</code> for each block, we can use <code>SetInsertPoint</code> to retarget our existing one.  This saves on construction and memory allocation costs.</p>
-
-<div class="doc_code">
-<pre>
-  builder.SetInsertPoint(ret);
-  builder.CreateRet(x);
-</pre>
-</div>
-
-<p><code>cond_false</code> is a more interesting block: we now know that <code>x
-!= y</code>, so we must branch again to determine which of <code>x</code>
-and <code>y</code> is larger.  This is achieved using the <code>ICmpULT</code>
-instruction, which stands for <em>integer comparison for unsigned
-less-than</em>.  In LLVM, integer types do not carry sign; a 32-bit integer
-pseudo-register can be interpreted as signed or unsigned without casting.
-Whether a signed or unsigned interpretation is desired is specified in the
-instruction.  This is why several instructions in the LLVM IR, such as integer
-less-than, include a specifier for signed or unsigned.</p>
-
-<p>Also note that we're again making use of LLVM's automatic name uniquing, this time at a register level.  We've deliberately chosen to name every instruction "tmp" to illustrate that LLVM will give them all unique names without getting confused.</p>
-
-<div class="doc_code">
-<pre>
-  builder.SetInsertPoint(cond_false);
-  Value* xLessThanY = builder.CreateICmpULT(x, y, &quot;tmp&quot;);
-  builder.CreateCondBr(xLessThanY, cond_true, cond_false_2);
-</pre>
-</div>
-
-<p>Our last two blocks are quite similar; they're both recursive calls to <code>gcd</code> with different parameters.  To create a call instruction, we have to create a <code>vector</code> (or any other container with <code>InputIterator</code>s) to hold the arguments.  We then pass in the beginning and ending iterators for this vector.</p>
-
-<div class="doc_code">
-<pre>
-  builder.SetInsertPoint(cond_true);
-  Value* yMinusX = builder.CreateSub(y, x, &quot;tmp&quot;);
-  std::vector&lt;Value*&gt; args1;
-  args1.push_back(x);
-  args1.push_back(yMinusX);
-  Value* recur_1 = builder.CreateCall(gcd, args1.begin(), args1.end(), &quot;tmp&quot;);
-  builder.CreateRet(recur_1);
-  
-  builder.SetInsertPoint(cond_false_2);
-  Value* xMinusY = builder.CreateSub(x, y, &quot;tmp&quot;);
-  std::vector&lt;Value*&gt; args2;
-  args2.push_back(xMinusY);
-  args2.push_back(y);
-  Value* recur_2 = builder.CreateCall(gcd, args2.begin(), args2.end(), &quot;tmp&quot;);
-  builder.CreateRet(recur_2);
-  
-  return mod;
-}
-</pre>
-</div>
-
-<p>And that's it!  You can compile and execute your code in the same way as before, by doing:</p>
-
-<div class="doc_code">
-<pre>
-# c++ -g tut2.cpp `llvm-config --cxxflags --ldflags --libs core` -o tut2
-# ./tut2
-</pre>
-</div>
-
-</div>
-
-<!-- *********************************************************************** -->
-<hr>
-<address>
-  <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
-  src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
-  <a href="http://validator.w3.org/check/referer"><img
-  src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>
-
-  <a href="mailto:owen at apple.com">Owen Anderson</a><br>
-  <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
-</address>
-
-</body>
-</html>
diff --git a/libclamav/c++/llvm/docs/tutorial/index.html b/libclamav/c++/llvm/docs/tutorial/index.html
index bfaafe7..250b533 100644
--- a/libclamav/c++/llvm/docs/tutorial/index.html
+++ b/libclamav/c++/llvm/docs/tutorial/index.html
@@ -15,16 +15,6 @@
 <div class="doc_title"> LLVM Tutorial: Table of Contents </div>
 
 <ol>
-  <li><!--<a href="Introduction.html">-->An Introduction to LLVM: Basic Concepts and Design</li>
-  <li>Simple JIT Tutorials
-    <ol>
-      <li><a href="JITTutorial1.html">A First Function</a></li>
-      <li><a href="JITTutorial2.html">A More Complicated Function</a></li>
-      <li><!--<a href="Tutorial3.html">-->Running Optimizations</li>
-      <li><!--<a href="Tutorial4.html">-->Reading and Writing Bitcode</li>
-      <li><!--<a href="Tutorial5.html">-->Invoking the JIT</li>
-    </ol>
-  </li>
   <li>Kaleidoscope: Implementing a Language with LLVM
   <ol>
     <li><a href="LangImpl1.html">Tutorial Introduction and the Lexer</a></li>
diff --git a/libclamav/c++/llvm/include/llvm/ADT/DeltaAlgorithm.h b/libclamav/c++/llvm/include/llvm/ADT/DeltaAlgorithm.h
new file mode 100644
index 0000000..1facfa0
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/ADT/DeltaAlgorithm.h
@@ -0,0 +1,91 @@
+//===--- DeltaAlgorithm.h - A Set Minimization Algorithm -------*- C++ -*--===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_ADT_DELTAALGORITHM_H
+#define LLVM_ADT_DELTAALGORITHM_H
+
+#include <vector>
+#include <set>
+
+namespace llvm {
+
+/// DeltaAlgorithm - Implements the delta debugging algorithm (A. Zeller '99)
+/// for minimizing arbitrary sets using a predicate function.
+///
+/// The result of the algorithm is a subset of the input change set which is
+/// guaranteed to satisfy the predicate, assuming that the input set did. For
+/// well formed predicates, the result set is guaranteed to be such that
+/// removing any single element would falsify the predicate.
+///
+/// For best results the predicate function *should* (but need not) satisfy
+/// certain properties, in particular:
+///  (1) The predicate should return false on an empty set and true on the full
+///  set.
+///  (2) If the predicate returns true for a set of changes, it should return
+///  true for all supersets of that set.
+///
+/// It is not an error to provide a predicate that does not satisfy these
+/// requirements, and the algorithm will generally produce reasonable
+/// results. However, it may run substantially more tests than with a good
+/// predicate.
+class DeltaAlgorithm {
+public:
+  typedef unsigned change_ty;
+  // FIXME: Use a decent data structure.
+  typedef std::set<change_ty> changeset_ty;
+  typedef std::vector<changeset_ty> changesetlist_ty;
+
+private:
+  /// Cache of failed test results. Successful test results are never cached
+  /// since we always reduce following a success.
+  std::set<changeset_ty> FailedTestsCache;
+
+  /// GetTestResult - Get the test result for the \arg Changes from the
+  /// cache, executing the test if necessary.
+  ///
+  /// \param Changes - The change set to test.
+  /// \return - The test result.
+  bool GetTestResult(const changeset_ty &Changes);
+
+  /// Split - Partition a set of changes \arg S into one or two subsets.
+  void Split(const changeset_ty &S, changesetlist_ty &Res);
+
+  /// Delta - Minimize a set of \arg Changes which has been partitioned into
+  /// smaller sets, by attempting to remove individual subsets.
+  changeset_ty Delta(const changeset_ty &Changes,
+                     const changesetlist_ty &Sets);
+
+  /// Search - Search for a subset (or subsets) in \arg Sets which can be
+  /// removed from \arg Changes while still satisfying the predicate.
+  ///
+  /// \param Res - On success, a subset of Changes which satisfies the
+  /// predicate.
+  /// \return - True on success.
+  bool Search(const changeset_ty &Changes, const changesetlist_ty &Sets,
+              changeset_ty &Res);
+              
+protected:
+  /// UpdatedSearchState - Callback used when the search state changes.
+  virtual void UpdatedSearchState(const changeset_ty &Changes,
+                                  const changesetlist_ty &Sets) {}
+
+  /// ExecuteOneTest - Execute a single test predicate on the change set \arg S.
+  virtual bool ExecuteOneTest(const changeset_ty &S) = 0;
+
+public:
+  virtual ~DeltaAlgorithm();
+
+  /// Run - Minimize the set \arg Changes by executing \see ExecuteOneTest() on
+  /// subsets of changes and returning the smallest set which still satisfies
+  /// the test predicate.
+  changeset_ty Run(const changeset_ty &Changes);
+};
+
+} // end namespace llvm
+
+#endif
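The predicate contract documented in the new header is easiest to see with a toy minimizer. The sketch below is not the DeltaAlgorithm implementation (which partitions the set into sublists and caches failed tests); it is a hypothetical greedy one-element-at-a-time reduction illustrating the same contract: given a predicate that is false on the empty set, true on the full set, and monotone over supersets, it returns a subset from which no single element can be removed without falsifying the predicate.

```cpp
#include <cassert>
#include <functional>
#include <set>

typedef std::set<unsigned> ChangeSet;

// Greedy 1-minimization: repeatedly try dropping one element, and keep the
// drop whenever the predicate still holds on the smaller set. This mirrors
// the guarantee documented above, not DeltaAlgorithm's actual search.
ChangeSet minimize(ChangeSet Changes,
                   const std::function<bool(const ChangeSet &)> &Pred) {
  bool Shrunk = true;
  while (Shrunk) {
    Shrunk = false;
    ChangeSet Snapshot = Changes; // iterate over a copy while mutating Changes
    for (unsigned C : Snapshot) {
      ChangeSet Trial = Changes;
      Trial.erase(C);
      if (Pred(Trial)) {
        Changes = Trial;
        Shrunk = true;
      }
    }
  }
  return Changes;
}
```

With a predicate like "the set still contains changes 3 and 7", minimizing {1..9} reduces to exactly {3, 7}.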
diff --git a/libclamav/c++/llvm/include/llvm/ADT/DenseSet.h b/libclamav/c++/llvm/include/llvm/ADT/DenseSet.h
index ce7344b..89f55ca 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/DenseSet.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/DenseSet.h
@@ -60,7 +60,7 @@ public:
     ValueT& operator*() { return I->first; }
     ValueT* operator->() { return &I->first; }
 
-    Iterator& operator++() { ++I; return *this; };
+    Iterator& operator++() { ++I; return *this; }
     bool operator==(const Iterator& X) const { return I == X.I; }
     bool operator!=(const Iterator& X) const { return I != X.I; }
   };
@@ -73,7 +73,7 @@ public:
     const ValueT& operator*() { return I->first; }
     const ValueT* operator->() { return &I->first; }
 
-    ConstIterator& operator++() { ++I; return *this; };
+    ConstIterator& operator++() { ++I; return *this; }
     bool operator==(const ConstIterator& X) const { return I == X.I; }
     bool operator!=(const ConstIterator& X) const { return I != X.I; }
   };
diff --git a/libclamav/c++/llvm/include/llvm/ADT/StringSwitch.h b/libclamav/c++/llvm/include/llvm/ADT/StringSwitch.h
index 6562d57..7dd5647 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/StringSwitch.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/StringSwitch.h
@@ -7,7 +7,7 @@
 //===----------------------------------------------------------------------===/
 //
 //  This file implements the StringSwitch template, which mimics a switch()
-//  statements whose cases are string literals.
+//  statement whose cases are string literals.
 //
 //===----------------------------------------------------------------------===/
 #ifndef LLVM_ADT_STRINGSWITCH_H
@@ -18,7 +18,7 @@
 #include <cstring>
 
 namespace llvm {
-  
+
 /// \brief A switch()-like statement whose cases are string literals.
 ///
 /// The StringSwitch class is a simple form of a switch() statement that
@@ -35,48 +35,44 @@ namespace llvm {
 ///   .Case("green", Green)
 ///   .Case("blue", Blue)
 ///   .Case("indigo", Indigo)
-///   .Case("violet", Violet)
+///   .Cases("violet", "purple", Violet)
 ///   .Default(UnknownColor);
 /// \endcode
-template<typename T>
+template<typename T, typename R = T>
 class StringSwitch {
   /// \brief The string we are matching.
   StringRef Str;
-  
-  /// \brief The result of this switch statement, once known.
-  T Result;
-  
-  /// \brief Set true when the result of this switch is already known; in this
-  /// case, Result is valid.
-  bool ResultKnown;
-  
+
+  /// \brief The pointer to the result of this switch statement, once known,
+  /// null before that.
+  const T *Result;
+
 public:
-  explicit StringSwitch(StringRef Str) 
-  : Str(Str), ResultKnown(false) { }
-  
+  explicit StringSwitch(StringRef Str)
+  : Str(Str), Result(0) { }
+
   template<unsigned N>
   StringSwitch& Case(const char (&S)[N], const T& Value) {
-    if (!ResultKnown && N-1 == Str.size() && 
+    if (!Result && N-1 == Str.size() &&
         (std::memcmp(S, Str.data(), N-1) == 0)) {
-      Result = Value;
-      ResultKnown = true;
+      Result = &Value;
     }
-    
+
     return *this;
   }
-  
+
   template<unsigned N0, unsigned N1>
   StringSwitch& Cases(const char (&S0)[N0], const char (&S1)[N1],
                       const T& Value) {
     return Case(S0, Value).Case(S1, Value);
   }
-  
+
   template<unsigned N0, unsigned N1, unsigned N2>
   StringSwitch& Cases(const char (&S0)[N0], const char (&S1)[N1],
                       const char (&S2)[N2], const T& Value) {
     return Case(S0, Value).Case(S1, Value).Case(S2, Value);
   }
-  
+
   template<unsigned N0, unsigned N1, unsigned N2, unsigned N3>
   StringSwitch& Cases(const char (&S0)[N0], const char (&S1)[N1],
                       const char (&S2)[N2], const char (&S3)[N3],
@@ -87,21 +83,21 @@ public:
   template<unsigned N0, unsigned N1, unsigned N2, unsigned N3, unsigned N4>
   StringSwitch& Cases(const char (&S0)[N0], const char (&S1)[N1],
                       const char (&S2)[N2], const char (&S3)[N3],
-                       const char (&S4)[N4], const T& Value) {
+                      const char (&S4)[N4], const T& Value) {
     return Case(S0, Value).Case(S1, Value).Case(S2, Value).Case(S3, Value)
       .Case(S4, Value);
   }
-  
-  T Default(const T& Value) {
-    if (ResultKnown)
-      return Result;
-    
+
+  R Default(const T& Value) const {
+    if (Result)
+      return *Result;
+
     return Value;
   }
-  
-  operator T() {
-    assert(ResultKnown && "Fell off the end of a string-switch");
-    return Result;
+
+  operator R() const {
+    assert(Result && "Fell off the end of a string-switch");
+    return *Result;
   }
 };
 
diff --git a/libclamav/c++/llvm/include/llvm/ADT/Trie.h b/libclamav/c++/llvm/include/llvm/ADT/Trie.h
index b415990..6b150c8 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/Trie.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/Trie.h
@@ -309,8 +309,7 @@ struct DOTGraphTraits<Trie<Payload> > : public DefaultDOTGraphTraits {
     return "Trie";
   }
 
-  static std::string getNodeLabel(NodeType* Node, const Trie<Payload>& T,
-                                  bool ShortNames) {
+  static std::string getNodeLabel(NodeType* Node, const Trie<Payload>& T) {
     if (T.getRoot() == Node)
       return "<Root>";
     else
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/CFGPrinter.h b/libclamav/c++/llvm/include/llvm/Analysis/CFGPrinter.h
index 440d182..6ad2e5a 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/CFGPrinter.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/CFGPrinter.h
@@ -24,23 +24,29 @@
 namespace llvm {
 template<>
 struct DOTGraphTraits<const Function*> : public DefaultDOTGraphTraits {
+
+  DOTGraphTraits (bool isSimple=false) : DefaultDOTGraphTraits(isSimple) {}
+
   static std::string getGraphName(const Function *F) {
     return "CFG for '" + F->getNameStr() + "' function";
   }
 
-  static std::string getNodeLabel(const BasicBlock *Node,
-                                  const Function *Graph,
-                                  bool ShortNames) {
-    if (ShortNames && !Node->getName().empty())
-      return Node->getNameStr() + ":";
+  static std::string getSimpleNodeLabel(const BasicBlock *Node,
+                                  const Function *Graph) {
+    if (!Node->getName().empty())
+      return Node->getNameStr(); 
 
     std::string Str;
     raw_string_ostream OS(Str);
 
-    if (ShortNames) {
-      WriteAsOperand(OS, Node, false);
-      return OS.str();
-    }
+    WriteAsOperand(OS, Node, false);
+    return OS.str();
+  }
+
+  static std::string getCompleteNodeLabel(const BasicBlock *Node,
+		                          const Function *Graph) {
+    std::string Str;
+    raw_string_ostream OS(Str);
 
     if (Node->getName().empty()) {
       WriteAsOperand(OS, Node, false);
@@ -65,6 +71,14 @@ struct DOTGraphTraits<const Function*> : public DefaultDOTGraphTraits {
     return OutStr;
   }
 
+  std::string getNodeLabel(const BasicBlock *Node,
+                           const Function *Graph) {
+    if (isSimple())
+      return getSimpleNodeLabel(Node, Graph);
+    else
+      return getCompleteNodeLabel(Node, Graph);
+  }
+
   static std::string getEdgeSourceLabel(const BasicBlock *Node,
                                         succ_const_iterator I) {
     // Label source of conditional branches with "T" or "F"
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
index 866ed8a..232804e 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
@@ -197,7 +197,8 @@ namespace llvm {
       FlagProtected        = 1 << 1,
       FlagFwdDecl          = 1 << 2,
       FlagAppleBlock       = 1 << 3,
-      FlagBlockByrefStruct = 1 << 4
+      FlagBlockByrefStruct = 1 << 4,
+      FlagVirtual          = 1 << 5
     };
 
   protected:
@@ -242,6 +243,9 @@ namespace llvm {
     bool isBlockByrefStruct() const {
       return (getFlags() & FlagBlockByrefStruct) != 0;
     }
+    bool isVirtual() const {
+      return (getFlags() & FlagVirtual) != 0;
+    }
 
     /// dump - print type.
     void dump() const;
@@ -366,6 +370,24 @@ namespace llvm {
     /// compile unit, like 'static' in C.
     unsigned isLocalToUnit() const     { return getUnsignedField(9); }
     unsigned isDefinition() const      { return getUnsignedField(10); }
+
+    unsigned getVirtuality() const {
+      if (DbgNode->getNumElements() < 14)
+        return 0;
+      return getUnsignedField(11);
+    }
+
+    unsigned getVirtualIndex() const { 
+      if (DbgNode->getNumElements() < 14)
+        return 0;
+      return getUnsignedField(12);
+    }
+
+    DICompositeType getContainingType() const {
+      assert (DbgNode->getNumElements() >= 14 && "Invalid type!");
+      return getFieldAs<DICompositeType>(13);
+    }
+
     StringRef getFilename() const    { return getCompileUnit().getFilename();}
     StringRef getDirectory() const   { return getCompileUnit().getDirectory();}
 
@@ -470,6 +492,7 @@ namespace llvm {
 
     const Type *EmptyStructPtr; // "{}*".
     Function *DeclareFn;     // llvm.dbg.declare
+    Function *ValueFn;       // llvm.dbg.value
 
     DIFactory(const DIFactory &);     // DO NOT IMPLEMENT
     void operator=(const DIFactory&); // DO NOT IMPLEMENT
@@ -565,7 +588,14 @@ namespace llvm {
                                   StringRef LinkageName,
                                   DICompileUnit CompileUnit, unsigned LineNo,
                                   DIType Type, bool isLocalToUnit,
-                                  bool isDefinition);
+                                  bool isDefinition,
+                                  unsigned VK = 0,
+                                  unsigned VIndex = 0,
+                                  DIType = DIType());
+
+    /// CreateSubprogramDefinition - Create new subprogram descriptor for the
+    /// given declaration. 
+    DISubprogram CreateSubprogramDefinition(DISubprogram &SPDeclaration);
 
     /// CreateGlobalVariable - Create a new descriptor for the specified global.
     DIGlobalVariable
@@ -610,6 +640,13 @@ namespace llvm {
     Instruction *InsertDeclare(llvm::Value *Storage, DIVariable D,
                                Instruction *InsertBefore);
 
+    /// InsertDbgValueIntrinsic - Insert a new llvm.dbg.value intrinsic call.
+    Instruction *InsertDbgValueIntrinsic(llvm::Value *V, llvm::Value *Offset,
+                                         DIVariable D, BasicBlock *InsertAtEnd);
+
+    /// InsertDbgValueIntrinsic - Insert a new llvm.dbg.value intrinsic call.
+    Instruction *InsertDbgValueIntrinsic(llvm::Value *V, llvm::Value *Offset,
+                                       DIVariable D, Instruction *InsertBefore);
   private:
     Constant *GetTagConstant(unsigned TAG);
   };
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/InstructionSimplify.h b/libclamav/c++/llvm/include/llvm/Analysis/InstructionSimplify.h
index 1cd7e56..13314e6 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/InstructionSimplify.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/InstructionSimplify.h
@@ -20,6 +20,11 @@ namespace llvm {
   class Instruction;
   class Value;
   class TargetData;
+
+  /// SimplifyAddInst - Given operands for an Add, see if we can
+  /// fold the result.  If not, this returns null.
+  Value *SimplifyAddInst(Value *LHS, Value *RHS, bool isNSW, bool isNUW,
+                         const TargetData *TD = 0);
   
   /// SimplifyAndInst - Given operands for an And, see if we can
   /// fold the result.  If not, this returns null.
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LoopDependenceAnalysis.h b/libclamav/c++/llvm/include/llvm/Analysis/LoopDependenceAnalysis.h
index 1d386ba..a1a5637 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/LoopDependenceAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LoopDependenceAnalysis.h
@@ -67,17 +67,17 @@ class LoopDependenceAnalysis : public LoopPass {
   /// created. The third argument is set to the pair found or created.
   bool findOrInsertDependencePair(Value*, Value*, DependencePair*&);
 
-  /// getLoops - Collect all loops of the loop-nest L a given SCEV is variant
-  /// in.
+  /// getLoops - Collect all loops of the loop nest L in which
+  /// a given SCEV is variant.
   void getLoops(const SCEV*, DenseSet<const Loop*>*) const;
 
   /// isLoopInvariant - True if a given SCEV is invariant in all loops of the
-  /// loop-nest starting at the innermost loop L.
+  /// loop nest starting at the innermost loop L.
   bool isLoopInvariant(const SCEV*) const;
 
-  /// isAffine - An SCEV is affine with respect to the loop-nest starting at
+  /// isAffine - An SCEV is affine with respect to the loop nest starting at
   /// the innermost loop L if it is of the form A+B*X where A, B are invariant
-  /// in the loop-nest and X is a induction variable in the loop-nest.
+  /// in the loop nest and X is a induction variable in the loop nest.
   bool isAffine(const SCEV*) const;
 
   /// TODO: doc
@@ -93,8 +93,8 @@ public:
   static char ID; // Class identification, replacement for typeinfo
   LoopDependenceAnalysis() : LoopPass(&ID) {}
 
-  /// isDependencePair - Check wether two values can possibly give rise to a
-  /// data dependence: that is the case if both are instructions accessing
+  /// isDependencePair - Check whether two values can possibly give rise to
+  /// a data dependence: that is the case if both are instructions accessing
   /// memory and at least one of those accesses is a write.
   bool isDependencePair(const Value*, const Value*) const;
 
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
index 9969d99..7419cdc 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
@@ -568,7 +568,7 @@ public:
 
   /// getUniqueExitBlocks - Return all unique successor blocks of this loop. 
   /// These are the blocks _outside of the current loop_ which are branched to.
-  /// This assumes that loop is in canonical form.
+  /// This assumes that loop exits are in canonical form.
   ///
   void getUniqueExitBlocks(SmallVectorImpl<BasicBlock *> &ExitBlocks) const;
 
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LoopPass.h b/libclamav/c++/llvm/include/llvm/Analysis/LoopPass.h
index 2eb329f..2dceccb 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/LoopPass.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LoopPass.h
@@ -52,7 +52,7 @@ public:
   // LPPassManger as expected.
   void preparePassManager(PMStack &PMS);
 
-  /// Assign pass manager to manager this pass
+  /// Assign pass manager to manage this pass
   virtual void assignPassManager(PMStack &PMS,
                                  PassManagerType PMT = PMT_LoopPassManager);
 
@@ -73,7 +73,7 @@ public:
   /// cloneBasicBlockAnalysis - Clone analysis info associated with basic block.
   virtual void cloneBasicBlockAnalysis(BasicBlock *F, BasicBlock *T, Loop *L) {}
 
-  /// deletekAnalysisValue - Delete analysis info associated with value V.
+  /// deleteAnalysisValue - Delete analysis info associated with value V.
   virtual void deleteAnalysisValue(Value *V, Loop *L) {}
 };
 
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h b/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
index 042c7fc..c04631b 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/MemoryDependenceAnalysis.h
@@ -16,6 +16,7 @@
 
 #include "llvm/BasicBlock.h"
 #include "llvm/Pass.h"
+#include "llvm/Support/ValueHandle.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/OwningPtr.h"
@@ -30,6 +31,8 @@ namespace llvm {
   class TargetData;
   class MemoryDependenceAnalysis;
   class PredIteratorCache;
+  class DominatorTree;
+  class PHITransAddr;
   
   /// MemDepResult - A memory dependence query can return one of three different
   /// answers, described below.
@@ -59,9 +62,9 @@ namespace llvm {
       ///      this case, the load is loading an undef value or a store is the
       ///      first store to (that part of) the allocation.
       ///   3. Dependence queries on calls return Def only when they are
-      ///      readonly calls with identical callees and no intervening
-      ///      clobbers.  No validation is done that the operands to the calls
-      ///      are the same.
+      ///      readonly calls or memory use intrinsics with identical callees
+      ///      and no intervening clobbers.  No validation is done that the
+      ///      operands to the calls are the same.
       Def,
       
       /// NonLocal - This marker indicates that the query has no dependency in
@@ -129,6 +132,45 @@ namespace llvm {
     }
   };
 
+  /// NonLocalDepEntry - This is an entry in the NonLocalDepInfo cache, and an
+  /// entry in the results set for a non-local query.  For each BasicBlock (the
+  /// BB entry) it keeps a MemDepResult and the (potentially phi translated)
+  /// address that was live in the block.
+  class NonLocalDepEntry {
+    BasicBlock *BB;
+    MemDepResult Result;
+    WeakVH Address;
+  public:
+    NonLocalDepEntry(BasicBlock *bb, MemDepResult result, Value *address)
+      : BB(bb), Result(result), Address(address) {}
+
+    // This is used for searches.
+    NonLocalDepEntry(BasicBlock *bb) : BB(bb) {}
+
+    // BB is the sort key, it can't be changed.
+    BasicBlock *getBB() const { return BB; }
+    
+    void setResult(const MemDepResult &R, Value *Addr) {
+      Result = R;
+      Address = Addr;
+    }
+
+    const MemDepResult &getResult() const { return Result; }
+    
+    /// getAddress - Return the address of this pointer in this block.  This can
+    /// be different than the address queried for the non-local result because
+    /// of phi translation.  This returns null if the address was not available
+    /// in a block (i.e. because phi translation failed) or if this is a cached
+    /// result and that address was deleted.
+    ///
+    /// The address is always null for a non-local 'call' dependence.
+    Value *getAddress() const { return Address; }
+
+    bool operator<(const NonLocalDepEntry &RHS) const {
+      return BB < RHS.BB;
+    }
+  };
+  
   /// MemoryDependenceAnalysis - This is an analysis that determines, for a
   /// given memory operation, what preceding memory operations it depends on.
   /// It builds on alias analysis information, and tries to provide a lazy,
@@ -150,7 +192,6 @@ namespace llvm {
     LocalDepMapType LocalDeps;
 
   public:
-    typedef std::pair<BasicBlock*, MemDepResult> NonLocalDepEntry;
     typedef std::vector<NonLocalDepEntry> NonLocalDepInfo;
   private:
     /// ValueIsLoadPair - This is a pair<Value*, bool> where the bool is true if
@@ -244,20 +285,6 @@ namespace llvm {
                                       BasicBlock *BB,
                                      SmallVectorImpl<NonLocalDepEntry> &Result);
     
-    /// PHITranslatePointer - Find an available version of the specified value
-    /// PHI translated across the specified edge.  If MemDep isn't able to
-    /// satisfy this request, it returns null.
-    Value *PHITranslatePointer(Value *V,
-                               BasicBlock *CurBB, BasicBlock *PredBB,
-                               const TargetData *TD) const;
-
-    /// InsertPHITranslatedPointer - Insert a computation of the PHI translated
-    /// version of 'V' for the edge PredBB->CurBB into the end of the PredBB
-    /// block.
-    Value *InsertPHITranslatedPointer(Value *V,
-                                      BasicBlock *CurBB, BasicBlock *PredBB,
-                                      const TargetData *TD) const;
-    
     /// removeInstruction - Remove an instruction from the dependence analysis,
     /// updating the dependence of instructions that previously depended on it.
     void removeInstruction(Instruction *InstToRemove);
@@ -278,7 +305,7 @@ namespace llvm {
     MemDepResult getCallSiteDependencyFrom(CallSite C, bool isReadOnlyCall,
                                            BasicBlock::iterator ScanIt,
                                            BasicBlock *BB);
-    bool getNonLocalPointerDepFromBB(Value *Pointer, uint64_t Size,
+    bool getNonLocalPointerDepFromBB(const PHITransAddr &Pointer, uint64_t Size,
                                      bool isLoad, BasicBlock *BB,
                                      SmallVectorImpl<NonLocalDepEntry> &Result,
                                      DenseMap<BasicBlock*, Value*> &Visited,
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/PHITransAddr.h b/libclamav/c++/llvm/include/llvm/Analysis/PHITransAddr.h
new file mode 100644
index 0000000..b612316
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Analysis/PHITransAddr.h
@@ -0,0 +1,121 @@
+//===- PHITransAddr.h - PHI Translation for Addresses -----------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file declares the PHITransAddr class.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_ANALYSIS_PHITRANSADDR_H
+#define LLVM_ANALYSIS_PHITRANSADDR_H
+
+#include "llvm/Instruction.h"
+#include "llvm/ADT/SmallVector.h"
+
+namespace llvm {
+  class DominatorTree;
+  class TargetData;
+  
+/// PHITransAddr - An address value which tracks and handles phi translation.
+/// As we walk "up" the CFG through predecessors, we need to ensure that the
+/// address we're tracking is kept up to date.  For example, if we're analyzing
+/// an address of "&A[i]" and walk through the definition of 'i' which is a PHI
+/// node, we *must* phi translate i to get "&A[j]" or else we will analyze an
+/// incorrect pointer in the predecessor block.
+///
+/// This is designed to be a relatively small object that lives on the stack and
+/// is copyable.
+///
+class PHITransAddr {
+  /// Addr - The actual address we're analyzing.
+  Value *Addr;
+  
+  /// TD - The target data we are playing with if known, otherwise null.
+  const TargetData *TD;
+  
+  /// InstInputs - The inputs for our symbolic address.
+  SmallVector<Instruction*, 4> InstInputs;
+public:
+  PHITransAddr(Value *addr, const TargetData *td) : Addr(addr), TD(td) {
+    // If the address is an instruction, the whole thing is considered an input.
+    if (Instruction *I = dyn_cast<Instruction>(Addr))
+      InstInputs.push_back(I);
+  }
+  
+  Value *getAddr() const { return Addr; }
+  
+  /// NeedsPHITranslationFromBlock - Return true if moving from the specified
+  /// BasicBlock to its predecessors requires PHI translation.
+  bool NeedsPHITranslationFromBlock(BasicBlock *BB) const {
+    // We do need translation if one of our input instructions is defined in
+    // this block.
+    for (unsigned i = 0, e = InstInputs.size(); i != e; ++i)
+      if (InstInputs[i]->getParent() == BB)
+        return true;
+    return false;
+  }
+  
+  /// IsPotentiallyPHITranslatable - If this needs PHI translation, return true
+  /// if we have some hope of doing it.  This should be used as a filter to
+  /// avoid calling PHITranslateValue in hopeless situations.
+  bool IsPotentiallyPHITranslatable() const;
+  
+  /// PHITranslateValue - PHI translate the current address up the CFG from
+  /// CurBB to Pred, updating our state the reflect any needed changes.  This
+  /// returns true on failure and sets Addr to null.
+  bool PHITranslateValue(BasicBlock *CurBB, BasicBlock *PredBB);
+  
+  /// PHITranslateWithInsertion - PHI translate this value into the specified
+  /// predecessor block, inserting a computation of the value if it is
+  /// unavailable.
+  ///
+  /// All newly created instructions are added to the NewInsts list.  This
+  /// returns null on failure.
+  ///
+  Value *PHITranslateWithInsertion(BasicBlock *CurBB, BasicBlock *PredBB,
+                                   const DominatorTree &DT,
+                                   SmallVectorImpl<Instruction*> &NewInsts);
+  
+  void dump() const;
+  
+  /// Verify - Check internal consistency of this data structure.  If the
+  /// structure is valid, it returns true.  If invalid, it prints errors and
+  /// returns false.
+  bool Verify() const;
+private:
+  Value *PHITranslateSubExpr(Value *V, BasicBlock *CurBB, BasicBlock *PredBB);
+  
+  
+  /// GetAvailablePHITranslatedSubExpr - Return the value computed by
+  /// PHITranslateSubExpr if it dominates PredBB, otherwise return null.
+  Value *GetAvailablePHITranslatedSubExpr(Value *V,
+                                          BasicBlock *CurBB, BasicBlock *PredBB,
+                                          const DominatorTree &DT) const;
+  
+  /// InsertPHITranslatedSubExpr - Insert a computation of the PHI translated
+  /// version of 'V' for the edge PredBB->CurBB into the end of the PredBB
+  /// block.  All newly created instructions are added to the NewInsts list.
+  /// This returns null on failure.
+  ///
+  Value *InsertPHITranslatedSubExpr(Value *InVal, BasicBlock *CurBB,
+                                    BasicBlock *PredBB, const DominatorTree &DT,
+                                    SmallVectorImpl<Instruction*> &NewInsts);
+  
+  /// AddAsInput - If the specified value is an instruction, add it as an input.
+  Value *AddAsInput(Value *V) {
+    // If V is an instruction, it is now an input.
+    if (Instruction *VI = dyn_cast<Instruction>(V))
+      InstInputs.push_back(VI);
+    return V;
+  }
+  
+};
+
+} // end namespace llvm
+
+#endif
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/Passes.h b/libclamav/c++/llvm/include/llvm/Analysis/Passes.h
index b222321..2f39c6a 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/Passes.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/Passes.h
@@ -92,6 +92,7 @@ namespace llvm {
   // file.
   //
   ModulePass *createProfileLoaderPass();
+  extern const PassInfo *ProfileLoaderPassID;
 
   //===--------------------------------------------------------------------===//
   //
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/PostDominators.h b/libclamav/c++/llvm/include/llvm/Analysis/PostDominators.h
index 42a16e7..ea14b2d 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/PostDominators.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/PostDominators.h
@@ -81,7 +81,10 @@ template <> struct GraphTraits<PostDominatorTree*>
   }
 
   static nodes_iterator nodes_begin(PostDominatorTree *N) {
-    return df_begin(getEntryNode(N));
+    if (getEntryNode(N))
+      return df_begin(getEntryNode(N));
+    else
+      return df_end(getEntryNode(N));
   }
 
   static nodes_iterator nodes_end(PostDominatorTree *N) {
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ProfileInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/ProfileInfo.h
index 2a80f3d..80ba6d8 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ProfileInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ProfileInfo.h
@@ -21,116 +21,228 @@
 #ifndef LLVM_ANALYSIS_PROFILEINFO_H
 #define LLVM_ANALYSIS_PROFILEINFO_H
 
-#include "llvm/BasicBlock.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/Format.h"
+#include "llvm/Support/raw_ostream.h"
 #include <cassert>
 #include <string>
 #include <map>
+#include <set>
 
 namespace llvm {
-  class Function;
   class Pass;
   class raw_ostream;
 
+  class BasicBlock;
+  class Function;
+  class MachineBasicBlock;
+  class MachineFunction;
+
+  // Helper for dumping edges to errs().
+  raw_ostream& operator<<(raw_ostream &O, std::pair<const BasicBlock *, const BasicBlock *> E);
+  raw_ostream& operator<<(raw_ostream &O, std::pair<const MachineBasicBlock *, const MachineBasicBlock *> E);
+
+  raw_ostream& operator<<(raw_ostream &O, const BasicBlock *BB);
+  raw_ostream& operator<<(raw_ostream &O, const MachineBasicBlock *MBB);
+
+  raw_ostream& operator<<(raw_ostream &O, const Function *F);
+  raw_ostream& operator<<(raw_ostream &O, const MachineFunction *MF);
+
   /// ProfileInfo Class - This class holds and maintains profiling
   /// information for some unit of code.
-  class ProfileInfo {
+  template<class FType, class BType>
+  class ProfileInfoT {
   public:
     // Types for handling profiling information.
-    typedef std::pair<const BasicBlock*, const BasicBlock*> Edge;
+    typedef std::pair<const BType*, const BType*> Edge;
     typedef std::pair<Edge, double> EdgeWeight;
     typedef std::map<Edge, double> EdgeWeights;
-    typedef std::map<const BasicBlock*, double> BlockCounts;
+    typedef std::map<const BType*, double> BlockCounts;
+    typedef std::map<const BType*, const BType*> Path;
 
   protected:
     // EdgeInformation - Count the number of times a transition between two
     // blocks is executed. As a special case, we also hold an edge from the
     // null BasicBlock to the entry block to indicate how many times the
     // function was entered.
-    std::map<const Function*, EdgeWeights> EdgeInformation;
+    std::map<const FType*, EdgeWeights> EdgeInformation;
 
     // BlockInformation - Count the number of times a block is executed.
-    std::map<const Function*, BlockCounts> BlockInformation;
+    std::map<const FType*, BlockCounts> BlockInformation;
 
     // FunctionInformation - Count the number of times a function is executed.
-    std::map<const Function*, double> FunctionInformation;
+    std::map<const FType*, double> FunctionInformation;
+
+    ProfileInfoT<MachineFunction, MachineBasicBlock> *MachineProfile;
   public:
     static char ID; // Class identification, replacement for typeinfo
-    virtual ~ProfileInfo();  // We want to be subclassed
+    ProfileInfoT();
+    ~ProfileInfoT();  // We want to be subclassed
 
     // MissingValue - The value that is returned for execution counts in case
     // no value is available.
     static const double MissingValue;
 
     // getFunction() - Returns the Function for an Edge, checking for validity.
-    static const Function* getFunction(Edge e) {
+    static const FType* getFunction(Edge e) {
       if (e.first) {
         return e.first->getParent();
       } else if (e.second) {
         return e.second->getParent();
       }
       assert(0 && "Invalid ProfileInfo::Edge");
-      return (const Function*)0;
+      return (const FType*)0;
     }
 
     // getEdge() - Creates an Edge from two BasicBlocks.
-    static Edge getEdge(const BasicBlock *Src, const BasicBlock *Dest) {
+    static Edge getEdge(const BType *Src, const BType *Dest) {
       return std::make_pair(Src, Dest);
     }
 
     //===------------------------------------------------------------------===//
     /// Profile Information Queries
     ///
-    double getExecutionCount(const Function *F);
+    double getExecutionCount(const FType *F);
+
+    double getExecutionCount(const BType *BB);
 
-    double getExecutionCount(const BasicBlock *BB);
+    void setExecutionCount(const BType *BB, double w);
+
+    void addExecutionCount(const BType *BB, double w);
 
     double getEdgeWeight(Edge e) const {
-      std::map<const Function*, EdgeWeights>::const_iterator J =
+      typename std::map<const FType*, EdgeWeights>::const_iterator J =
         EdgeInformation.find(getFunction(e));
       if (J == EdgeInformation.end()) return MissingValue;
 
-      EdgeWeights::const_iterator I = J->second.find(e);
+      typename EdgeWeights::const_iterator I = J->second.find(e);
       if (I == J->second.end()) return MissingValue;
 
       return I->second;
     }
 
-    EdgeWeights &getEdgeWeights (const Function *F) {
+    void setEdgeWeight(Edge e, double w) {
+      DEBUG_WITH_TYPE("profile-info",
+            errs() << "Creating Edge " << e
+                   << " (weight: " << format("%.20g",w) << ")\n");
+      EdgeInformation[getFunction(e)][e] = w;
+    }
+
+    void addEdgeWeight(Edge e, double w);
+
+    EdgeWeights &getEdgeWeights (const FType *F) {
       return EdgeInformation[F];
     }
 
     //===------------------------------------------------------------------===//
     /// Analysis Update Methods
     ///
-    void removeBlock(const BasicBlock *BB) {
-      std::map<const Function*, BlockCounts>::iterator J =
-        BlockInformation.find(BB->getParent());
-      if (J == BlockInformation.end()) return;
-
-      J->second.erase(BB);
+    void removeBlock(const BType *BB);
+
+    void removeEdge(Edge e);
+
+    void replaceEdge(const Edge &, const Edge &);
+
+    enum GetPathMode {
+      GetPathToExit = 1,
+      GetPathToValue = 2,
+      GetPathToDest = 4,
+      GetPathWithNewEdges = 8
+    };
+
+    const BType *GetPath(const BType *Src, const BType *Dest,
+                              Path &P, unsigned Mode);
+
+    void divertFlow(const Edge &, const Edge &);
+
+    void splitEdge(const BType *FirstBB, const BType *SecondBB,
+                   const BType *NewBB, bool MergeIdenticalEdges = false);
+
+    void splitBlock(const BType *Old, const BType* New);
+
+    void splitBlock(const BType *BB, const BType* NewBB,
+                    BType *const *Preds, unsigned NumPreds);
+
+    void replaceAllUses(const BType *RmBB, const BType *DestBB);
+
+    void transfer(const FType *Old, const FType *New);
+
+    void repair(const FType *F);
+
+    void dump(FType *F = 0, bool real = true) {
+      errs() << "**** This is ProfileInfo " << this << " speaking:\n";
+      if (!real) {
+        typename std::set<const FType*> Functions;
+
+        errs() << "Functions: \n";
+        if (F) {
+          errs() << F << "@" << format("%p", F) << ": " << format("%.20g",getExecutionCount(F)) << "\n";
+          Functions.insert(F);
+        } else {
+          for (typename std::map<const FType*, double>::iterator fi = FunctionInformation.begin(),
+               fe = FunctionInformation.end(); fi != fe; ++fi) {
+            errs() << fi->first << "@" << format("%p",fi->first) << ": " << format("%.20g",fi->second) << "\n";
+            Functions.insert(fi->first);
+          }
+        }
+
+        for (typename std::set<const FType*>::iterator FI = Functions.begin(), FE = Functions.end();
+             FI != FE; ++FI) {
+          const FType *F = *FI;
+          typename std::map<const FType*, BlockCounts>::iterator bwi = BlockInformation.find(F);
+          errs() << "BasicBlocks for Function " << F << ":\n";
+          for (typename BlockCounts::const_iterator bi = bwi->second.begin(), be = bwi->second.end(); bi != be; ++bi) {
+            errs() << bi->first << "@" << format("%p", bi->first) << ": " << format("%.20g",bi->second) << "\n";
+          }
+        }
+
+        for (typename std::set<const FType*>::iterator FI = Functions.begin(), FE = Functions.end();
+             FI != FE; ++FI) {
+          typename std::map<const FType*, EdgeWeights>::iterator ei = EdgeInformation.find(*FI);
+          errs() << "Edges for Function " << ei->first << ":\n";
+          for (typename EdgeWeights::iterator ewi = ei->second.begin(), ewe = ei->second.end(); 
+               ewi != ewe; ++ewi) {
+            errs() << ewi->first << ": " << format("%.20g",ewi->second) << "\n";
+          }
+        }
+      } else {
+        assert(F && "No function given, this is not supported!");
+        errs() << "Functions: \n";
+        errs() << F << "@" << format("%p", F) << ": " << format("%.20g",getExecutionCount(F)) << "\n";
+
+        errs() << "BasicBlocks for Function " << F << ":\n";
+        for (typename FType::const_iterator BI = F->begin(), BE = F->end();
+             BI != BE; ++BI) {
+          const BType *BB = &(*BI);
+          errs() << BB << "@" << format("%p", BB) << ": " << format("%.20g",getExecutionCount(BB)) << "\n";
+        }
+      }
+      errs() << "**** ProfileInfo " << this << ", over and out.\n";
     }
 
-    void removeEdge(Edge e) {
-      std::map<const Function*, EdgeWeights>::iterator J =
-        EdgeInformation.find(getFunction(e));
-      if (J == EdgeInformation.end()) return;
+    bool CalculateMissingEdge(const BType *BB, Edge &removed, bool assumeEmptyExit = false);
 
-      J->second.erase(e);
-    }
+    bool EstimateMissingEdges(const BType *BB);
 
-    void splitEdge(const BasicBlock *FirstBB, const BasicBlock *SecondBB,
-                   const BasicBlock *NewBB, bool MergeIdenticalEdges = false);
+    ProfileInfoT<MachineFunction, MachineBasicBlock> *MI() {
+      if (MachineProfile == 0)
+        MachineProfile = new ProfileInfoT<MachineFunction, MachineBasicBlock>();
+      return MachineProfile;
+    }
 
-    void replaceAllUses(const BasicBlock *RmBB, const BasicBlock *DestBB);
+    bool hasMI() const {
+      return (MachineProfile != 0);
+    }
   };
 
+  typedef ProfileInfoT<Function, BasicBlock> ProfileInfo;
+  typedef ProfileInfoT<MachineFunction, MachineBasicBlock> MachineProfileInfo;
+
   /// createProfileLoaderPass - This function returns a Pass that loads the
   /// profiling information for the module from the specified filename, making
   /// it available to the optimizers.
   Pass *createProfileLoaderPass(const std::string &Filename);
 
-  raw_ostream& operator<<(raw_ostream &O, ProfileInfo::Edge E);
-
 } // End llvm namespace
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/Argument.h b/libclamav/c++/llvm/include/llvm/Argument.h
index 3a846c2..ca54f48 100644
--- a/libclamav/c++/llvm/include/llvm/Argument.h
+++ b/libclamav/c++/llvm/include/llvm/Argument.h
@@ -51,6 +51,10 @@ public:
   /// in its containing function.
   bool hasByValAttr() const;
 
+  /// hasNestAttr - Return true if this argument has the nest attribute on
+  /// it in its containing function.
+  bool hasNestAttr() const;
+
   /// hasNoAliasAttr - Return true if this argument has the noalias attribute on
   /// it in its containing function.
   bool hasNoAliasAttr() const;
diff --git a/libclamav/c++/llvm/include/llvm/CallingConv.h b/libclamav/c++/llvm/include/llvm/CallingConv.h
index 318ea28..c54527c 100644
--- a/libclamav/c++/llvm/include/llvm/CallingConv.h
+++ b/libclamav/c++/llvm/include/llvm/CallingConv.h
@@ -68,7 +68,10 @@ namespace CallingConv {
     ARM_AAPCS = 67,
 
     /// ARM_AAPCS_VFP - Same as ARM_AAPCS, but uses hard floating point ABI.
-    ARM_AAPCS_VFP = 68
+    ARM_AAPCS_VFP = 68,
+
+    /// MSP430_INTR - Calling convention used for MSP430 interrupt routines.
+    MSP430_INTR = 69
   };
 } // End CallingConv namespace
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/BreakCriticalMachineEdge.h b/libclamav/c++/llvm/include/llvm/CodeGen/BreakCriticalMachineEdge.h
deleted file mode 100644
index 4861297..0000000
--- a/libclamav/c++/llvm/include/llvm/CodeGen/BreakCriticalMachineEdge.h
+++ /dev/null
@@ -1,108 +0,0 @@
-//===--------- BreakCriticalMachineEdge.h - Break critical edges ---------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===---------------------------------------------------------------------===//
-//
-// Helper function to break a critical machine edge.
-//
-//===---------------------------------------------------------------------===//
-
-#ifndef LLVM_CODEGEN_BREAKCRITICALMACHINEEDGE_H
-#define LLVM_CODEGEN_BREAKCRITICALMACHINEEDGE_H
-
-#include "llvm/CodeGen/MachineJumpTableInfo.h"
-#include "llvm/Target/TargetInstrInfo.h"
-#include "llvm/Target/TargetMachine.h"
-
-namespace llvm {
-
-MachineBasicBlock* SplitCriticalMachineEdge(MachineBasicBlock* src,
-                                            MachineBasicBlock* dst) {
-  MachineFunction &MF = *src->getParent();
-  const BasicBlock* srcBB = src->getBasicBlock();
-
-  MachineBasicBlock* crit_mbb = MF.CreateMachineBasicBlock(srcBB);
-
-  // modify the llvm control flow graph
-  src->removeSuccessor(dst);
-  src->addSuccessor(crit_mbb);
-  crit_mbb->addSuccessor(dst);
-
-  // insert the new block into the machine function.
-  MF.push_back(crit_mbb);
-
-  // insert a unconditional branch linking the new block to dst
-  const TargetMachine& TM = MF.getTarget();
-  const TargetInstrInfo* TII = TM.getInstrInfo();
-  std::vector<MachineOperand> emptyConditions;
-  TII->InsertBranch(*crit_mbb, dst, (MachineBasicBlock*)0, 
-                    emptyConditions);
-
-  // modify every branch in src that points to dst to point to the new
-  // machine basic block instead:
-  MachineBasicBlock::iterator mii = src->end();
-  bool found_branch = false;
-  while (mii != src->begin()) {
-    mii--;
-    // if there are no more branches, finish the loop
-    if (!mii->getDesc().isTerminator()) {
-      break;
-    }
-
-    // Scan the operands of this branch, replacing any uses of dst with
-    // crit_mbb.
-    for (unsigned i = 0, e = mii->getNumOperands(); i != e; ++i) {
-      MachineOperand & mo = mii->getOperand(i);
-      if (mo.isMBB() && mo.getMBB() == dst) {
-        found_branch = true;
-        mo.setMBB(crit_mbb);
-      }
-    }
-  }
-
-  // TODO: This is tentative. It may be necessary to fix this code. Maybe
-  // I am inserting too many gotos, but I am trusting that the asm printer
-  // will optimize the unnecessary gotos.
-  if(!found_branch) {
-    TII->InsertBranch(*src, crit_mbb, (MachineBasicBlock*)0, 
-                      emptyConditions);
-  }
-
-  /// Change all the phi functions in dst, so that the incoming block be
-  /// crit_mbb, instead of src
-  for(mii = dst->begin(); mii != dst->end(); mii++) {
-    /// the first instructions are always phi functions.
-    if(mii->getOpcode() != TargetInstrInfo::PHI)
-      break;
-
-    // Find the operands corresponding to the source block
-    std::vector<unsigned> toRemove;
-    unsigned reg = 0;
-    for (unsigned u = 0; u != mii->getNumOperands(); ++u)
-      if (mii->getOperand(u).isMBB() &&
-          mii->getOperand(u).getMBB() == src) {
-        reg = mii->getOperand(u-1).getReg();
-        toRemove.push_back(u-1);
-      }
-    // Remove all uses of this MBB
-    for (std::vector<unsigned>::reverse_iterator I = toRemove.rbegin(),
-         E = toRemove.rend(); I != E; ++I) {
-      mii->RemoveOperand(*I+1);
-      mii->RemoveOperand(*I);
-    }
-
-    // Add a single use corresponding to the new MBB
-    mii->addOperand(MachineOperand::CreateReg(reg, false));
-    mii->addOperand(MachineOperand::CreateMBB(crit_mbb));
-  }
-
-  return crit_mbb;
-}
-
-}
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h b/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
index 624f18a..6a2b166 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
@@ -93,7 +93,7 @@ void SelectRoot(SelectionDAG &DAG) {
   // a reference to the root node, preventing it from being deleted,
   // and tracking any changes of the root.
   HandleSDNode Dummy(CurDAG->getRoot());
-  ISelPosition = next(SelectionDAG::allnodes_iterator(CurDAG->getRoot().getNode()));
+  ISelPosition = llvm::next(SelectionDAG::allnodes_iterator(CurDAG->getRoot().getNode()));
 
   // The AllNodes list is now topological-sorted. Visit the
   // nodes by starting at the end of the list (the root of the
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/FastISel.h b/libclamav/c++/llvm/include/llvm/CodeGen/FastISel.h
index 1efd1e0..806952a 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/FastISel.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/FastISel.h
@@ -98,14 +98,6 @@ public:
   ///
   bool SelectOperator(User *I, unsigned Opcode);
 
-  /// TargetSelectInstruction - This method is called by target-independent
-  /// code when the normal FastISel process fails to select an instruction.
-  /// This gives targets a chance to emit code for anything that doesn't
-  /// fit into FastISel's framework. It returns true if it was successful.
-  ///
-  virtual bool
-  TargetSelectInstruction(Instruction *I) = 0;
-
   /// getRegForValue - Create a virtual register and arrange for it to
   /// be assigned the value for the given LLVM value.
   unsigned getRegForValue(Value *V);
@@ -134,6 +126,14 @@ protected:
 #endif
            );
 
+  /// TargetSelectInstruction - This method is called by target-independent
+  /// code when the normal FastISel process fails to select an instruction.
+  /// This gives targets a chance to emit code for anything that doesn't
+  /// fit into FastISel's framework. It returns true if it was successful.
+  ///
+  virtual bool
+  TargetSelectInstruction(Instruction *I) = 0;
+
   /// FastEmit_r - This method is called by target-independent code
   /// to request that an instruction with the given type and opcode
   /// be emitted.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LinkAllCodegenComponents.h b/libclamav/c++/llvm/include/llvm/CodeGen/LinkAllCodegenComponents.h
index 4d2d0ee..5608c99 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LinkAllCodegenComponents.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LinkAllCodegenComponents.h
@@ -19,6 +19,7 @@
 #include "llvm/CodeGen/SchedulerRegistry.h"
 #include "llvm/CodeGen/GCs.h"
 #include "llvm/Target/TargetMachine.h"
+#include <cstdlib>
 
 namespace {
   struct ForceCodegenLinking {
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LiveIntervalAnalysis.h b/libclamav/c++/llvm/include/llvm/CodeGen/LiveIntervalAnalysis.h
index 7a02d0f..d7ff8da 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LiveIntervalAnalysis.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LiveIntervalAnalysis.h
@@ -112,10 +112,13 @@ namespace llvm {
       return (unsigned)(IntervalPercentage * indexes_->getFunctionSize());
     }
 
-    /// conflictsWithPhysRegDef - Returns true if the specified register
-    /// is defined during the duration of the specified interval.
-    bool conflictsWithPhysRegDef(const LiveInterval &li, VirtRegMap &vrm,
-                                 unsigned reg);
+    /// conflictsWithPhysReg - Returns true if the specified register is used or
+    /// defined during the duration of the specified interval. Copies to and
+    /// from li.reg are allowed. This method is only able to analyze simple
+    /// ranges that stay within a single basic block. Anything else is
+    /// considered a conflict.
+    bool conflictsWithPhysReg(const LiveInterval &li, VirtRegMap &vrm,
+                              unsigned reg);
 
     /// conflictsWithPhysRegRef - Similar to conflictsWithPhysRegRef except
     /// it can check use as well.
@@ -186,6 +189,10 @@ namespace llvm {
       return indexes_->getMBBFromIndex(index);
     }
 
+    SlotIndex getMBBTerminatorGap(const MachineBasicBlock *mbb) {
+      return indexes_->getTerminatorGap(mbb);
+    }
+
     SlotIndex InsertMachineInstrInMaps(MachineInstr *MI) {
       return indexes_->insertMachineInstrInMaps(MI);
     }
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LiveVariables.h b/libclamav/c++/llvm/include/llvm/CodeGen/LiveVariables.h
index a37abd4..a7bf600 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LiveVariables.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LiveVariables.h
@@ -163,8 +163,13 @@ private:   // Intermediate data structures
                         SmallVector<unsigned, 4> &Defs);
   void UpdatePhysRegDefs(MachineInstr *MI, SmallVector<unsigned, 4> &Defs);
 
-  /// FindLastPartialDef - Return the last partial def of the specified register.
-  /// Also returns the sub-registers that're defined by the instruction.
+  /// FindLastRefOrPartRef - Return the last reference or partial reference of
+  /// the specified register.
+  MachineInstr *FindLastRefOrPartRef(unsigned Reg);
+
+  /// FindLastPartialDef - Return the last partial def of the specified
+  /// register. Also returns the sub-registers that're defined by the
+  /// instruction.
   MachineInstr *FindLastPartialDef(unsigned Reg,
                                    SmallSet<unsigned,4> &PartDefRegs);
 
@@ -278,6 +283,11 @@ public:
     return getVarInfo(Reg).isLiveIn(MBB, Reg, *MRI);
   }
 
+  /// isLiveOut - Determine if Reg is live out from MBB, when not considering
+  /// PHI nodes. This means that Reg is either killed by a successor block or
+  /// passed through one.
+  bool isLiveOut(unsigned Reg, const MachineBasicBlock &MBB);
+
   /// addNewBlock - Add a new basic block BB between DomBB and SuccBB. All
   /// variables that are live out of DomBB and live into SuccBB will be marked
   /// as passing live through BB. This method assumes that the machine code is
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
index 6b4c640..7e3ce6b 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
@@ -327,6 +327,11 @@ public:
   /// 'Old', change the code and CFG so that it branches to 'New' instead.
   void ReplaceUsesOfBlockWith(MachineBasicBlock *Old, MachineBasicBlock *New);
 
+  /// BranchesToLandingPad - The basic block is a landing pad or branches only
+  /// to a landing pad. No other instructions are present other than the
+  /// unconditional branch.
+  bool BranchesToLandingPad(const MachineBasicBlock *MBB) const;
+
   /// CorrectExtraCFGEdges - Various pieces of code can cause excess edges in
   /// the CFG to be inserted.  If we have proven that MBB can only branch to
   /// DestA and DestB, remove any other MBB successors from the CFG. DestA and
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h
index bed82af..968e4ea 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h
@@ -327,7 +327,20 @@ public:
   /// setMaxAlignment - Set the preferred alignment.
   ///
   void setMaxAlignment(unsigned Align) { MaxAlignment = Align; }
-  
+
+  /// calculateMaxStackAlignment() - If there is a local object which requires
+  /// greater alignment than the current max alignment, adjust accordingly.
+  void calculateMaxStackAlignment() {
+    for (int i = getObjectIndexBegin(),
+         e = getObjectIndexEnd(); i != e; ++i) {
+      if (isDeadObjectIndex(i))
+        continue;
+
+      unsigned Align = getObjectAlignment(i);
+      MaxAlignment = std::max(MaxAlignment, Align);
+    }
+  }
+
   /// hasCalls - Return true if the current function has no function calls.
   /// This is only valid during or after prolog/epilog code emission.
   ///
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h
index c620449..87b67d6 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h
@@ -320,6 +320,11 @@ public:
   /// loads the instruction does are invariant (if it does multiple loads).
   bool isInvariantLoad(AliasAnalysis *AA) const;
 
+  /// isConstantValuePHI - If the specified instruction is a PHI that always
+  /// merges together the same virtual register, return the register, otherwise
+  /// return 0.
+  unsigned isConstantValuePHI() const;
+
   //
   // Debugging support
   //
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineSSAUpdater.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineSSAUpdater.h
new file mode 100644
index 0000000..ab663fe
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineSSAUpdater.h
@@ -0,0 +1,115 @@
+//===-- MachineSSAUpdater.h - Unstructured SSA Update Tool ------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file declares the MachineSSAUpdater class.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_CODEGEN_MACHINESSAUPDATER_H
+#define LLVM_CODEGEN_MACHINESSAUPDATER_H
+
+namespace llvm {
+  class MachineBasicBlock;
+  class MachineFunction;
+  class MachineInstr;
+  class MachineOperand;
+  class MachineRegisterInfo;
+  class TargetInstrInfo;
+  class TargetRegisterClass;
+  template<typename T> class SmallVectorImpl;
+
+/// MachineSSAUpdater - This class updates SSA form for a set of virtual
+/// registers defined in multiple blocks.  This is used when code duplication
+/// or another unstructured transformation wants to rewrite a set of uses of one
+/// vreg with uses of a set of vregs.
+class MachineSSAUpdater {
+  /// AvailableVals - This keeps track of which value to use on a per-block
+  /// basis.  When we insert PHI nodes, we keep track of them here.
+  //typedef DenseMap<MachineBasicBlock*, unsigned > AvailableValsTy;
+  void *AV;
+
+  /// IncomingPredInfo - We use this as scratch space when doing our recursive
+  /// walk.  This should only be used in GetValueInBlockInternal, normally it
+  /// should be empty.
+  //std::vector<std::pair<MachineBasicBlock*, unsigned > > IncomingPredInfo;
+  void *IPI;
+
+  /// VR - Current virtual register whose uses are being updated.
+  unsigned VR;
+
+  /// VRC - Register class of the current virtual register.
+  const TargetRegisterClass *VRC;
+
+  /// InsertedPHIs - If this is non-null, the MachineSSAUpdater adds all PHI
+  /// nodes that it creates to the vector.
+  SmallVectorImpl<MachineInstr*> *InsertedPHIs;
+
+  const TargetInstrInfo *TII;
+  MachineRegisterInfo *MRI;
+public:
+  /// MachineSSAUpdater constructor.  If InsertedPHIs is specified, it will be
+  /// filled in with all PHI Nodes created by rewriting.
+  explicit MachineSSAUpdater(MachineFunction &MF,
+                             SmallVectorImpl<MachineInstr*> *InsertedPHIs = 0);
+  ~MachineSSAUpdater();
+
+  /// Initialize - Reset this object to get ready for a new set of SSA
+  /// updates.
+  void Initialize(unsigned V);
+
+  /// AddAvailableValue - Indicate that a rewritten value is available at the
+  /// end of the specified block with the specified value.
+  void AddAvailableValue(MachineBasicBlock *BB, unsigned V);
+
+  /// HasValueForBlock - Return true if the MachineSSAUpdater already has a
+  /// value for the specified block.
+  bool HasValueForBlock(MachineBasicBlock *BB) const;
+
+  /// GetValueAtEndOfBlock - Construct SSA form, materializing a value that is
+  /// live at the end of the specified block.
+  unsigned GetValueAtEndOfBlock(MachineBasicBlock *BB);
+
+  /// GetValueInMiddleOfBlock - Construct SSA form, materializing a value that
+  /// is live in the middle of the specified block.
+  ///
+  /// GetValueInMiddleOfBlock is the same as GetValueAtEndOfBlock except in one
+  /// important case: if there is a definition of the rewritten value after the
+  /// 'use' in BB.  Consider code like this:
+  ///
+  ///      X1 = ...
+  ///   SomeBB:
+  ///      use(X)
+  ///      X2 = ...
+  ///      br Cond, SomeBB, OutBB
+  ///
+  /// In this case, there are two values (X1 and X2) added to the AvailableVals
+  /// set by the client of the rewriter, and those values are both live out of
+  /// their respective blocks.  However, the use of X happens in the *middle* of
+  /// a block.  Because of this, we need to insert a new PHI node in SomeBB to
+  /// merge the appropriate values, and this value isn't live out of the block.
+  ///
+  unsigned GetValueInMiddleOfBlock(MachineBasicBlock *BB);
+
+  /// RewriteUse - Rewrite a use of the symbolic value.  This handles PHI nodes,
+  /// which use their value in the corresponding predecessor.  Note that this
+  /// will not work if the use is supposed to be rewritten to a value defined in
+  /// the same block as the use, but above it.  Any 'AddAvailableValue's added
+  /// for the use's block will be considered to be below it.
+  void RewriteUse(MachineOperand &U);
+
+private:
+  void ReplaceRegWith(unsigned OldReg, unsigned NewReg);
+  unsigned GetValueAtEndOfBlockInternal(MachineBasicBlock *BB);
+  void operator=(const MachineSSAUpdater&); // DO NOT IMPLEMENT
+  MachineSSAUpdater(const MachineSSAUpdater&);     // DO NOT IMPLEMENT
+};
+
+} // End llvm namespace
+
+#endif
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h b/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h
index 8e89702..99f8c34 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h
@@ -131,7 +131,7 @@ namespace llvm {
 
   /// TailDuplicate Pass - Duplicate blocks with unconditional branches
   /// into tails of their predecessors.
-  FunctionPass *createTailDuplicatePass();
+  FunctionPass *createTailDuplicatePass(bool PreRegAlloc = false);
 
   /// IfConverter Pass - This pass performs machine code if conversion.
   FunctionPass *createIfConverterPass();
@@ -191,6 +191,10 @@ namespace llvm {
   /// the GCC-style builtin setjmp/longjmp (sjlj) to handling EH control flow.
   FunctionPass *createSjLjEHPass(const TargetLowering *tli);
 
+  /// createMaxStackAlignmentCalculatorPass() - Determine the maximum required
+  /// alignment for a function.
+  FunctionPass* createMaxStackAlignmentCalculatorPass();
+
 } // End llvm namespace
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
index f194e4e..6e15617 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
@@ -220,7 +220,7 @@ public:
   ///
   /// Note that this is an involved process that may invalidate pointers into
   /// the graph.
-  void Legalize(bool TypesNeedLegalizing, CodeGenOpt::Level OptLevel);
+  void Legalize(CodeGenOpt::Level OptLevel);
 
   /// LegalizeVectors - This transforms the SelectionDAG into a SelectionDAG
   /// that only uses vector math operations supported by the target.  This is
@@ -882,6 +882,24 @@ public:
   /// element of the result of the vector shuffle.
   SDValue getShuffleScalarElt(const ShuffleVectorSDNode *N, unsigned Idx);
 
+  /// UnrollVectorOp - Utility function used by legalize and lowering to
+  /// "unroll" a vector operation by splitting out the scalars and operating
+  /// on each element individually.  If the ResNE is 0, fully unroll the vector
+  /// op. If ResNE is less than the width of the vector op, unroll up to ResNE.
+  /// If the  ResNE is greater than the width of the vector op, unroll the
+  /// vector op and fill the end of the resulting vector with UNDEFS.
+  SDValue UnrollVectorOp(SDNode *N, unsigned ResNE = 0);
+
+  /// isConsecutiveLoad - Return true if LD is loading 'Bytes' bytes from a 
+  /// location that is 'Dist' units away from the location that the 'Base' load 
+  /// is loading from.
+  bool isConsecutiveLoad(LoadSDNode *LD, LoadSDNode *Base,
+                         unsigned Bytes, int Dist) const;
+
+  /// InferPtrAlignment - Infer alignment of a load / store address. Return 0 if
+  /// it cannot be inferred.
+  unsigned InferPtrAlignment(SDValue Ptr) const;
+
 private:
   bool RemoveNodeFromCSEMaps(SDNode *N);
   void AddModifiedNodeToCSEMaps(SDNode *N, DAGUpdateListener *UpdateListener);
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGISel.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGISel.h
index 4130d2c..bfd3492 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGISel.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGISel.h
@@ -113,7 +113,6 @@ protected:
   // Calls to these functions are generated by tblgen.
   SDNode *Select_INLINEASM(SDValue N);
   SDNode *Select_UNDEF(const SDValue &N);
-  SDNode *Select_DBG_LABEL(const SDValue &N);
   SDNode *Select_EH_LABEL(const SDValue &N);
   void CannotYetSelect(SDValue N);
   void CannotYetSelectIntrinsic(SDValue N);
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
index 950fd32..580986a 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
@@ -1095,7 +1095,7 @@ public:
   /// hasOneUse - Return true if there is exactly one use of this node.
   ///
   bool hasOneUse() const {
-    return !use_empty() && next(use_begin()) == use_end();
+    return !use_empty() && llvm::next(use_begin()) == use_end();
   }
 
   /// use_size - Return the number of uses of this node. This method takes
@@ -2397,6 +2397,11 @@ public:
   SDNodeIterator operator++(int) { // Postincrement
     SDNodeIterator tmp = *this; ++*this; return tmp;
   }
+  size_t operator-(SDNodeIterator Other) const {
+    assert(Node == Other.Node &&
+           "Cannot compare iterators of two different nodes!");
+    return Operand - Other.Operand;
+  }
 
   static SDNodeIterator begin(SDNode *N) { return SDNodeIterator(N, 0); }
   static SDNodeIterator end  (SDNode *N) {
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
index 45ef9b9..3106213 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
@@ -166,6 +166,12 @@ namespace llvm {
         return *this;
       }
     }
+
+    /// getScalarType - If this is a vector type, return the element type,
+    /// otherwise return this.
+    MVT getScalarType() const {
+      return isVector() ? getVectorElementType() : *this;
+    }
     
     MVT getVectorElementType() const {
       switch (SimpleTy) {
@@ -524,6 +530,12 @@ namespace llvm {
       return V;
     }
 
+    /// getScalarType - If this is a vector type, return the element type,
+    /// otherwise return this.
+    EVT getScalarType() const {
+      return isVector() ? getVectorElementType() : *this;
+    }
+    
     /// getVectorElementType - Given a vector type, return the type of
     /// each element.
     EVT getVectorElementType() const {
diff --git a/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td b/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
index 79edb02..cfd675b 100644
--- a/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
+++ b/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
@@ -45,6 +45,7 @@ def one_or_more;
 def really_hidden;
 def required;
 def zero_or_one;
+def comma_separated;
 
 // The 'case' construct.
 def case;
@@ -77,6 +78,8 @@ def any_empty;
 def append_cmd;
 def forward;
 def forward_as;
+def forward_value;
+def forward_transformed_value;
 def stop_compilation;
 def unpack_values;
 def warning;
diff --git a/libclamav/c++/llvm/include/llvm/IntrinsicInst.h b/libclamav/c++/llvm/include/llvm/IntrinsicInst.h
index 1e1dca2..a516409 100644
--- a/libclamav/c++/llvm/include/llvm/IntrinsicInst.h
+++ b/libclamav/c++/llvm/include/llvm/IntrinsicInst.h
@@ -70,6 +70,7 @@ namespace llvm {
       case Intrinsic::dbg_region_start:
       case Intrinsic::dbg_region_end:
       case Intrinsic::dbg_declare:
+      case Intrinsic::dbg_value:
         return true;
       default: return false;
       }
@@ -171,6 +172,25 @@ namespace llvm {
     }
   };
 
+  /// DbgValueInst - This represents the llvm.dbg.value instruction.
+  ///
+  struct DbgValueInst : public DbgInfoIntrinsic {
+    Value *getValue()  const {
+      return cast<MDNode>(getOperand(1))->getElement(0);
+    }
+    Value *getOffset() const { return getOperand(2); }
+    MDNode *getVariable() const { return cast<MDNode>(getOperand(3)); }
+
+    // Methods for support type inquiry through isa, cast, and dyn_cast:
+    static inline bool classof(const DbgValueInst *) { return true; }
+    static inline bool classof(const IntrinsicInst *I) {
+      return I->getIntrinsicID() == Intrinsic::dbg_value;
+    }
+    static inline bool classof(const Value *V) {
+      return isa<IntrinsicInst>(V) && classof(cast<IntrinsicInst>(V));
+    }
+  };
+
   /// MemIntrinsic - This is the common base class for memset/memcpy/memmove.
   ///
   struct MemIntrinsic : public IntrinsicInst {
diff --git a/libclamav/c++/llvm/include/llvm/Intrinsics.td b/libclamav/c++/llvm/include/llvm/Intrinsics.td
index c0cf00e..6ff87ba 100644
--- a/libclamav/c++/llvm/include/llvm/Intrinsics.td
+++ b/libclamav/c++/llvm/include/llvm/Intrinsics.td
@@ -290,6 +290,9 @@ let Properties = [IntrNoMem] in {
   def int_dbg_func_start   : Intrinsic<[llvm_void_ty], [llvm_metadata_ty]>;
   def int_dbg_declare      : Intrinsic<[llvm_void_ty],
                                        [llvm_descriptor_ty, llvm_metadata_ty]>;
+  def int_dbg_value  	   : Intrinsic<[llvm_void_ty],
+                                       [llvm_metadata_ty, llvm_i64_ty,
+                                        llvm_metadata_ty]>;
 }
 
 //===------------------ Exception Handling Intrinsics----------------------===//
@@ -341,19 +344,25 @@ def int_init_trampoline : Intrinsic<[llvm_ptr_ty],
 
 // Expose the carry flag from add operations on two integrals.
 def int_sadd_with_overflow : Intrinsic<[llvm_anyint_ty, llvm_i1_ty],
-                                       [LLVMMatchType<0>, LLVMMatchType<0>]>;
+                                       [LLVMMatchType<0>, LLVMMatchType<0>],
+                                       [IntrNoMem]>;
 def int_uadd_with_overflow : Intrinsic<[llvm_anyint_ty, llvm_i1_ty],
-                                       [LLVMMatchType<0>, LLVMMatchType<0>]>;
+                                       [LLVMMatchType<0>, LLVMMatchType<0>],
+                                       [IntrNoMem]>;
 
 def int_ssub_with_overflow : Intrinsic<[llvm_anyint_ty, llvm_i1_ty],
-                                       [LLVMMatchType<0>, LLVMMatchType<0>]>;
+                                       [LLVMMatchType<0>, LLVMMatchType<0>],
+                                       [IntrNoMem]>;
 def int_usub_with_overflow : Intrinsic<[llvm_anyint_ty, llvm_i1_ty],
-                                       [LLVMMatchType<0>, LLVMMatchType<0>]>;
+                                       [LLVMMatchType<0>, LLVMMatchType<0>],
+                                       [IntrNoMem]>;
 
 def int_smul_with_overflow : Intrinsic<[llvm_anyint_ty, llvm_i1_ty],
-                                       [LLVMMatchType<0>, LLVMMatchType<0>]>;
+                                       [LLVMMatchType<0>, LLVMMatchType<0>],
+                                       [IntrNoMem]>;
 def int_umul_with_overflow : Intrinsic<[llvm_anyint_ty, llvm_i1_ty],
-                                       [LLVMMatchType<0>, LLVMMatchType<0>]>;
+                                       [LLVMMatchType<0>, LLVMMatchType<0>],
+                                       [IntrNoMem]>;
 
 //===------------------------- Atomic Intrinsics --------------------------===//
 //
diff --git a/libclamav/c++/llvm/include/llvm/IntrinsicsX86.td b/libclamav/c++/llvm/include/llvm/IntrinsicsX86.td
index 2f75ed5..50ee358 100644
--- a/libclamav/c++/llvm/include/llvm/IntrinsicsX86.td
+++ b/libclamav/c++/llvm/include/llvm/IntrinsicsX86.td
@@ -671,10 +671,10 @@ let TargetPrefix = "x86" in {  // All intrinsics start with "llvm.x86.".
 
 // Align ops
 let TargetPrefix = "x86" in {  // All intrinsics start with "llvm.x86.".
-  def int_x86_ssse3_palign_r        : GCCBuiltin<"__builtin_ia32_palignr">,
+  def int_x86_ssse3_palign_r        :
               Intrinsic<[llvm_v1i64_ty], [llvm_v1i64_ty,
                          llvm_v1i64_ty, llvm_i8_ty], [IntrNoMem]>;
-  def int_x86_ssse3_palign_r_128    : GCCBuiltin<"__builtin_ia32_palignr128">,
+  def int_x86_ssse3_palign_r_128    :
               Intrinsic<[llvm_v2i64_ty], [llvm_v2i64_ty,
                          llvm_v2i64_ty, llvm_i8_ty], [IntrNoMem]>;
 }
diff --git a/libclamav/c++/llvm/include/llvm/LinkAllVMCore.h b/libclamav/c++/llvm/include/llvm/LinkAllVMCore.h
index 0ee18d5..2145bf8 100644
--- a/libclamav/c++/llvm/include/llvm/LinkAllVMCore.h
+++ b/libclamav/c++/llvm/include/llvm/LinkAllVMCore.h
@@ -35,6 +35,7 @@
 #include "llvm/Support/Mangler.h"
 #include "llvm/Support/MathExtras.h"
 #include "llvm/Support/SlowOperationInformer.h"
+#include <cstdlib>
 
 namespace {
   struct ForceVMCoreLinking {
diff --git a/libclamav/c++/llvm/include/llvm/Metadata.h b/libclamav/c++/llvm/include/llvm/Metadata.h
index 1d18eba..c7f2b44 100644
--- a/libclamav/c++/llvm/include/llvm/Metadata.h
+++ b/libclamav/c++/llvm/include/llvm/Metadata.h
@@ -91,7 +91,7 @@ class MDNode : public MetadataBase, public FoldingSetNode {
   MDNode(const MDNode &);                // DO NOT IMPLEMENT
 
   friend class ElementVH;
-  // Use CallbackVH to hold MDNOde elements.
+  // Use CallbackVH to hold MDNode elements.
   struct ElementVH : public CallbackVH {
     MDNode *Parent;
     ElementVH() {}
@@ -264,7 +264,7 @@ public:
   /// the same metadata to In2.
   void copyMD(Instruction *In1, Instruction *In2);
 
-  /// getHandlerNames - Populate client supplied smallvector using custome
+  /// getHandlerNames - Populate client supplied smallvector using custom
   /// metadata name and ID.
   void getHandlerNames(SmallVectorImpl<std::pair<unsigned, StringRef> >&) const;
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/CommandLine.h b/libclamav/c++/llvm/include/llvm/Support/CommandLine.h
index 2e65fdd..7f8b10c 100644
--- a/libclamav/c++/llvm/include/llvm/Support/CommandLine.h
+++ b/libclamav/c++/llvm/include/llvm/Support/CommandLine.h
@@ -986,7 +986,7 @@ template<class DataType>
 class list_storage<DataType, bool> : public std::vector<DataType> {
 public:
   template<class T>
-  void addValue(const T &V) { push_back(V); }
+  void addValue(const T &V) { std::vector<DataType>::push_back(V); }
 };
 
 
@@ -1011,7 +1011,7 @@ class list : public Option, public list_storage<DataType, Storage> {
       typename ParserClass::parser_data_type();
     if (Parser.parse(*this, ArgName, Arg, Val))
       return true;  // Parse Error!
-    addValue(Val);
+    list_storage<DataType, Storage>::addValue(Val);
     setPosition(pos);
     Positions.push_back(pos);
     return false;
diff --git a/libclamav/c++/llvm/include/llvm/Support/DOTGraphTraits.h b/libclamav/c++/llvm/include/llvm/Support/DOTGraphTraits.h
index 080297f..54ced15 100644
--- a/libclamav/c++/llvm/include/llvm/Support/DOTGraphTraits.h
+++ b/libclamav/c++/llvm/include/llvm/Support/DOTGraphTraits.h
@@ -27,6 +27,17 @@ namespace llvm {
 /// implementations.
 ///
 struct DefaultDOTGraphTraits {
+private:
+  bool IsSimple;
+
+protected:
+  bool isSimple() {
+    return IsSimple;
+  }
+
+public:
+  DefaultDOTGraphTraits (bool simple=false) : IsSimple (simple) {}
+
   /// getGraphName - Return the label for the graph as a whole.  Printed at the
   /// top of the graph.
   ///
@@ -51,8 +62,7 @@ struct DefaultDOTGraphTraits {
   /// getNodeLabel - Given a node and a pointer to the top level graph, return
   /// the label to print in the node.
   template<typename GraphType>
-  static std::string getNodeLabel(const void *Node,
-                                  const GraphType& Graph, bool ShortNames) {
+  std::string getNodeLabel(const void *Node, const GraphType& Graph) {
     return "";
   }
 
@@ -135,7 +145,9 @@ struct DefaultDOTGraphTraits {
 /// from DefaultDOTGraphTraits if you don't need to override everything.
 ///
 template <typename Ty>
-struct DOTGraphTraits : public DefaultDOTGraphTraits {};
+struct DOTGraphTraits : public DefaultDOTGraphTraits {
+  DOTGraphTraits (bool simple=false) : DefaultDOTGraphTraits (simple) {}
+};
 
 } // End llvm namespace
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/Debug.h b/libclamav/c++/llvm/include/llvm/Support/Debug.h
index afa828c..e8bc0ce 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Debug.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Debug.h
@@ -63,7 +63,8 @@ void SetCurrentDebugType(const char *Type);
 /// This will emit the debug information if -debug is present, and -debug-only
 /// is not specified, or is specified as "bitset".
 #define DEBUG_WITH_TYPE(TYPE, X)                                        \
-  do { if (DebugFlag && isCurrentDebugType(TYPE)) { X; } } while (0)
+  do { if (::llvm::DebugFlag && ::llvm::isCurrentDebugType(TYPE)) { X; } \
+  } while (0)
 
 #else
 #define isCurrentDebugType(X) (false)
diff --git a/libclamav/c++/llvm/include/llvm/Support/ErrorHandling.h b/libclamav/c++/llvm/include/llvm/Support/ErrorHandling.h
index 6067795..4d24ada 100644
--- a/libclamav/c++/llvm/include/llvm/Support/ErrorHandling.h
+++ b/libclamav/c++/llvm/include/llvm/Support/ErrorHandling.h
@@ -79,9 +79,10 @@ namespace llvm {
 /// Use this instead of assert(0), so that the compiler knows this path
 /// is not reachable even for NDEBUG builds.
 #ifndef NDEBUG
-#define llvm_unreachable(msg) llvm_unreachable_internal(msg, __FILE__, __LINE__)
+#define llvm_unreachable(msg) \
+  ::llvm::llvm_unreachable_internal(msg, __FILE__, __LINE__)
 #else
-#define llvm_unreachable(msg) llvm_unreachable_internal()
+#define llvm_unreachable(msg) ::llvm::llvm_unreachable_internal()
 #endif
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/Support/GetElementPtrTypeIterator.h b/libclamav/c++/llvm/include/llvm/Support/GetElementPtrTypeIterator.h
index f5915c9..e5e7fc7 100644
--- a/libclamav/c++/llvm/include/llvm/Support/GetElementPtrTypeIterator.h
+++ b/libclamav/c++/llvm/include/llvm/Support/GetElementPtrTypeIterator.h
@@ -84,7 +84,7 @@ namespace llvm {
 
   inline gep_type_iterator gep_type_begin(const User *GEP) {
     return gep_type_iterator::begin(GEP->getOperand(0)->getType(),
-                                      GEP->op_begin()+1);
+                                    GEP->op_begin()+1);
   }
   inline gep_type_iterator gep_type_end(const User *GEP) {
     return gep_type_iterator::end(GEP->op_end());
diff --git a/libclamav/c++/llvm/include/llvm/Support/GraphWriter.h b/libclamav/c++/llvm/include/llvm/Support/GraphWriter.h
index bd3fcea..28fa92f 100644
--- a/libclamav/c++/llvm/include/llvm/Support/GraphWriter.h
+++ b/libclamav/c++/llvm/include/llvm/Support/GraphWriter.h
@@ -52,19 +52,48 @@ template<typename GraphType>
 class GraphWriter {
   raw_ostream &O;
   const GraphType &G;
-  bool ShortNames;
 
   typedef DOTGraphTraits<GraphType>           DOTTraits;
   typedef GraphTraits<GraphType>              GTraits;
   typedef typename GTraits::NodeType          NodeType;
   typedef typename GTraits::nodes_iterator    node_iterator;
   typedef typename GTraits::ChildIteratorType child_iterator;
+  DOTTraits DTraits;
+
+  // Writes the edge labels of the node to O and returns true if there are any
+  // edge labels not equal to the empty string "".
+  bool getEdgeSourceLabels(raw_ostream &O, NodeType *Node) {
+    child_iterator EI = GTraits::child_begin(Node);
+    child_iterator EE = GTraits::child_end(Node);
+    bool hasEdgeSourceLabels = false;
+
+    for (unsigned i = 0; EI != EE && i != 64; ++EI, ++i) {
+      std::string label = DTraits.getEdgeSourceLabel(Node, EI);
+
+      if (label == "")
+        continue;
+
+      hasEdgeSourceLabels = true;
+
+      if (i)
+        O << "|";
+
+      O << "<s" << i << ">" << DTraits.getEdgeSourceLabel(Node, EI);
+    }
+
+    if (EI != EE && hasEdgeSourceLabels)
+      O << "|<s64>truncated...";
+
+    return hasEdgeSourceLabels;
+  }
+
 public:
-  GraphWriter(raw_ostream &o, const GraphType &g, bool SN) :
-    O(o), G(g), ShortNames(SN) {}
+  GraphWriter(raw_ostream &o, const GraphType &g, bool SN) : O(o), G(g) {
+  DTraits = DOTTraits(SN); 
+}
 
   void writeHeader(const std::string &Name) {
-    std::string GraphName = DOTTraits::getGraphName(G);
+    std::string GraphName = DTraits.getGraphName(G);
 
     if (!Name.empty())
       O << "digraph \"" << DOT::EscapeString(Name) << "\" {\n";
@@ -73,14 +102,14 @@ public:
     else
       O << "digraph unnamed {\n";
 
-    if (DOTTraits::renderGraphFromBottomUp())
+    if (DTraits.renderGraphFromBottomUp())
       O << "\trankdir=\"BT\";\n";
 
     if (!Name.empty())
       O << "\tlabel=\"" << DOT::EscapeString(Name) << "\";\n";
     else if (!GraphName.empty())
       O << "\tlabel=\"" << DOT::EscapeString(GraphName) << "\";\n";
-    O << DOTTraits::getGraphProperties(G);
+    O << DTraits.getGraphProperties(G);
     O << "\n";
   }
 
@@ -105,53 +134,47 @@ public:
   }
 
   void writeNode(NodeType *Node) {
-    std::string NodeAttributes = DOTTraits::getNodeAttributes(Node, G);
+    std::string NodeAttributes = DTraits.getNodeAttributes(Node, G);
 
     O << "\tNode" << static_cast<const void*>(Node) << " [shape=record,";
     if (!NodeAttributes.empty()) O << NodeAttributes << ",";
     O << "label=\"{";
 
-    if (!DOTTraits::renderGraphFromBottomUp()) {
-      O << DOT::EscapeString(DOTTraits::getNodeLabel(Node, G, ShortNames));
+    if (!DTraits.renderGraphFromBottomUp()) {
+      O << DOT::EscapeString(DTraits.getNodeLabel(Node, G));
 
       // If we should include the address of the node in the label, do so now.
-      if (DOTTraits::hasNodeAddressLabel(Node, G))
+      if (DTraits.hasNodeAddressLabel(Node, G))
         O << "|" << (void*)Node;
     }
 
-    // Print out the fields of the current node...
-    child_iterator EI = GTraits::child_begin(Node);
-    child_iterator EE = GTraits::child_end(Node);
-    if (EI != EE) {
-      if (!DOTTraits::renderGraphFromBottomUp()) O << "|";
-      O << "{";
+    std::string edgeSourceLabels;
+    raw_string_ostream EdgeSourceLabels(edgeSourceLabels);
+    bool hasEdgeSourceLabels = getEdgeSourceLabels(EdgeSourceLabels, Node);
 
-      for (unsigned i = 0; EI != EE && i != 64; ++EI, ++i) {
-        if (i) O << "|";
-        O << "<s" << i << ">" << DOTTraits::getEdgeSourceLabel(Node, EI);
-      }
+    if (hasEdgeSourceLabels) {
+      if (!DTraits.renderGraphFromBottomUp()) O << "|";
 
-      if (EI != EE)
-        O << "|<s64>truncated...";
-      O << "}";
-      if (DOTTraits::renderGraphFromBottomUp()) O << "|";
+      O << "{" << EdgeSourceLabels.str() << "}";
+
+      if (DTraits.renderGraphFromBottomUp()) O << "|";
     }
 
-    if (DOTTraits::renderGraphFromBottomUp()) {
-      O << DOT::EscapeString(DOTTraits::getNodeLabel(Node, G, ShortNames));
+    if (DTraits.renderGraphFromBottomUp()) {
+      O << DOT::EscapeString(DTraits.getNodeLabel(Node, G));
 
       // If we should include the address of the node in the label, do so now.
-      if (DOTTraits::hasNodeAddressLabel(Node, G))
+      if (DTraits.hasNodeAddressLabel(Node, G))
         O << "|" << (void*)Node;
     }
 
-    if (DOTTraits::hasEdgeDestLabels()) {
+    if (DTraits.hasEdgeDestLabels()) {
       O << "|{";
 
-      unsigned i = 0, e = DOTTraits::numEdgeDestLabels(Node);
+      unsigned i = 0, e = DTraits.numEdgeDestLabels(Node);
       for (; i != e && i != 64; ++i) {
         if (i) O << "|";
-        O << "<d" << i << ">" << DOTTraits::getEdgeDestLabel(Node, i);
+        O << "<d" << i << ">" << DTraits.getEdgeDestLabel(Node, i);
       }
 
       if (i != e)
@@ -162,7 +185,8 @@ public:
     O << "}\"];\n";   // Finish printing the "node" line
 
     // Output all of the edges now
-    EI = GTraits::child_begin(Node);
+    child_iterator EI = GTraits::child_begin(Node);
+    child_iterator EE = GTraits::child_end(Node);
     for (unsigned i = 0; EI != EE && i != 64; ++EI, ++i)
       writeEdge(Node, i, EI);
     for (; EI != EE; ++EI)
@@ -172,8 +196,8 @@ public:
   void writeEdge(NodeType *Node, unsigned edgeidx, child_iterator EI) {
     if (NodeType *TargetNode = *EI) {
       int DestPort = -1;
-      if (DOTTraits::edgeTargetsEdgeSource(Node, EI)) {
-        child_iterator TargetIt = DOTTraits::getEdgeTarget(Node, EI);
+      if (DTraits.edgeTargetsEdgeSource(Node, EI)) {
+        child_iterator TargetIt = DTraits.getEdgeTarget(Node, EI);
 
         // Figure out which edge this targets...
         unsigned Offset =
@@ -181,9 +205,12 @@ public:
         DestPort = static_cast<int>(Offset);
       }
 
+      if (DTraits.getEdgeSourceLabel(Node, EI).empty())
+        edgeidx = -1;
+
       emitEdge(static_cast<const void*>(Node), edgeidx,
                static_cast<const void*>(TargetNode), DestPort,
-               DOTTraits::getEdgeAttributes(Node, EI));
+               DTraits.getEdgeAttributes(Node, EI));
     }
   }
 
@@ -221,12 +248,8 @@ public:
     if (SrcNodePort >= 0)
       O << ":s" << SrcNodePort;
     O << " -> Node" << DestNodeID;
-    if (DestNodePort >= 0) {
-      if (DOTTraits::hasEdgeDestLabels())
-        O << ":d" << DestNodePort;
-      else
-        O << ":s" << DestNodePort;
-    }
+    if (DestNodePort >= 0 && DTraits.hasEdgeDestLabels())
+      O << ":d" << DestNodePort;
 
     if (!Attrs.empty())
       O << "[" << Attrs << "]";
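The getEdgeSourceLabels hunk above buffers up to 64 "<sN>" record ports, skips empty labels, and appends a truncation marker when edges remain. A standalone sketch of that loop over plain strings (illustrative only, not the LLVM GraphWriter API):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Mirror of the port-emission loop: at most 64 "<sN>label" ports,
// empty labels skipped, plus an "<s64>truncated..." port when more
// edges exist than ports.
std::string edgeSourceLabels(const std::vector<std::string> &labels) {
  std::ostringstream os;
  bool hasLabels = false;
  size_t i = 0;
  for (; i != labels.size() && i != 64; ++i) {
    if (labels[i].empty())
      continue;
    hasLabels = true;
    if (i)
      os << "|";
    os << "<s" << i << ">" << labels[i];
  }
  if (i != labels.size() && hasLabels)
    os << "|<s64>truncated...";
  return os.str();
}
```

As in the patch, a node whose edge labels are all empty yields an empty string, which is what lets writeNode omit the "{...}" group entirely.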
diff --git a/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h b/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
index 2db2477..1310d70 100644
--- a/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
@@ -269,6 +269,27 @@ public:
     return Insert(IndirectBrInst::Create(Addr, NumDests));
   }
 
+  InvokeInst *CreateInvoke(Value *Callee, BasicBlock *NormalDest,
+                           BasicBlock *UnwindDest, const Twine &Name = "") {
+    Value *Args[] = { 0 };
+    return Insert(InvokeInst::Create(Callee, NormalDest, UnwindDest, Args,
+                                     Args), Name);
+  }
+  InvokeInst *CreateInvoke(Value *Callee, BasicBlock *NormalDest,
+                           BasicBlock *UnwindDest, Value *Arg1,
+                           const Twine &Name = "") {
+    Value *Args[] = { Arg1 };
+    return Insert(InvokeInst::Create(Callee, NormalDest, UnwindDest, Args,
+                                     Args+1), Name);
+  }
+  InvokeInst *CreateInvoke3(Value *Callee, BasicBlock *NormalDest,
+                            BasicBlock *UnwindDest, Value *Arg1,
+                            Value *Arg2, Value *Arg3,
+                            const Twine &Name = "") {
+    Value *Args[] = { Arg1, Arg2, Arg3 };
+    return Insert(InvokeInst::Create(Callee, NormalDest, UnwindDest, Args,
+                                     Args+3), Name);
+  }
   /// CreateInvoke - Create an invoke instruction.
   template<typename InputIterator>
   InvokeInst *CreateInvoke(Value *Callee, BasicBlock *NormalDest,
@@ -386,18 +407,39 @@ public:
         return Folder.CreateShl(LC, RC);
     return Insert(BinaryOperator::CreateShl(LHS, RHS), Name);
   }
+  Value *CreateShl(Value *LHS, uint64_t RHS, const Twine &Name = "") {
+    Constant *RHSC = ConstantInt::get(LHS->getType(), RHS);
+    if (Constant *LC = dyn_cast<Constant>(LHS))
+      return Folder.CreateShl(LC, RHSC);
+    return Insert(BinaryOperator::CreateShl(LHS, RHSC), Name);
+  }
+
   Value *CreateLShr(Value *LHS, Value *RHS, const Twine &Name = "") {
     if (Constant *LC = dyn_cast<Constant>(LHS))
       if (Constant *RC = dyn_cast<Constant>(RHS))
         return Folder.CreateLShr(LC, RC);
     return Insert(BinaryOperator::CreateLShr(LHS, RHS), Name);
   }
+  Value *CreateLShr(Value *LHS, uint64_t RHS, const Twine &Name = "") {
+    Constant *RHSC = ConstantInt::get(LHS->getType(), RHS);
+    if (Constant *LC = dyn_cast<Constant>(LHS))
+      return Folder.CreateLShr(LC, RHSC);
+    return Insert(BinaryOperator::CreateLShr(LHS, RHSC), Name);
+  }
+  
   Value *CreateAShr(Value *LHS, Value *RHS, const Twine &Name = "") {
     if (Constant *LC = dyn_cast<Constant>(LHS))
       if (Constant *RC = dyn_cast<Constant>(RHS))
         return Folder.CreateAShr(LC, RC);
     return Insert(BinaryOperator::CreateAShr(LHS, RHS), Name);
   }
+  Value *CreateAShr(Value *LHS, uint64_t RHS, const Twine &Name = "") {
+    Constant *RHSC = ConstantInt::get(LHS->getType(), RHS);
+    if (Constant *LC = dyn_cast<Constant>(LHS))
+      return Folder.CreateAShr(LC, RHSC);
+    return Insert(BinaryOperator::CreateAShr(LHS, RHSC), Name);
+  }
+
   Value *CreateAnd(Value *LHS, Value *RHS, const Twine &Name = "") {
     if (Constant *RC = dyn_cast<Constant>(RHS)) {
       if (isa<ConstantInt>(RC) && cast<ConstantInt>(RC)->isAllOnesValue())
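The new uint64_t-taking CreateShl/CreateLShr/CreateAShr overloads above build the shift amount as a constant of the LHS type before folding. The semantic distinction the two right-shift forms preserve can be sketched in portable C++ outside of LLVM (helper names here are illustrative):

```cpp
#include <cstdint>

// lshr zero-fills the vacated high bits; ashr replicates the sign bit.
uint32_t lshr32(uint32_t v, unsigned amt) {
  return v >> amt; // shifting an unsigned value is always logical
}

int32_t ashr32(int32_t v, unsigned amt) {
  // Right-shifting a negative signed value is implementation-defined
  // before C++20, so compute the arithmetic shift via the complement.
  if (v >= 0)
    return (int32_t)((uint32_t)v >> amt);
  return (int32_t)~(~(uint32_t)v >> amt);
}
```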
diff --git a/libclamav/c++/llvm/include/llvm/System/DataTypes.h.cmake b/libclamav/c++/llvm/include/llvm/System/DataTypes.h.cmake
index 180c86c..d9ca273 100644
--- a/libclamav/c++/llvm/include/llvm/System/DataTypes.h.cmake
+++ b/libclamav/c++/llvm/include/llvm/System/DataTypes.h.cmake
@@ -118,14 +118,33 @@ typedef signed int ssize_t;
 #define INT32_MAX 2147483647
 #define INT32_MIN -2147483648
 #define UINT32_MAX 4294967295U
-#define INT8_C(C)   C
-#define UINT8_C(C)  C
-#define INT16_C(C)  C
-#define UINT16_C(C) C
-#define INT32_C(C)  C
-#define UINT32_C(C) C ## U
-#define INT64_C(C)  ((int64_t) C ## LL)
-#define UINT64_C(C) ((uint64_t) C ## ULL)
+/* Certain compatibility updates to VC++ introduce the `cstdint'
+ * header, which defines the INT*_C macros. On default installs they
+ * are absent. */
+#ifndef INT8_C
+# define INT8_C(C)   C
+#endif
+#ifndef UINT8_C
+# define UINT8_C(C)  C
+#endif
+#ifndef INT16_C
+# define INT16_C(C)  C
+#endif
+#ifndef UINT16_C
+# define UINT16_C(C) C
+#endif
+#ifndef INT32_C
+# define INT32_C(C)  C
+#endif
+#ifndef UINT32_C
+# define UINT32_C(C) C ## U
+#endif
+#ifndef INT64_C
+# define INT64_C(C)  ((int64_t) C ## LL)
+#endif
+#ifndef UINT64_C
+# define UINT64_C(C) ((uint64_t) C ## ULL)
+#endif
 #endif /* _MSC_VER */
 
 /* Set defaults for constants which we cannot find. */
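The guards added above make each fallback definition yield to one already supplied by the toolchain. The same #ifndef-before-#define idiom, standalone:

```cpp
#include <cstdint> // modern toolchains already provide INT64_C here

// Only supply the fallback when the system headers did not.
#ifndef INT64_C
# define INT64_C(C) ((int64_t) C ## LL)
#endif

int64_t maxSigned64() { return INT64_C(9223372036854775807); }
```

Either way the macro resolves, the literal gets a 64-bit type, which is the whole point of the INT*_C family.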
diff --git a/libclamav/c++/llvm/include/llvm/System/DataTypes.h.in b/libclamav/c++/llvm/include/llvm/System/DataTypes.h.in
index d574910..1f8ce79 100644
--- a/libclamav/c++/llvm/include/llvm/System/DataTypes.h.in
+++ b/libclamav/c++/llvm/include/llvm/System/DataTypes.h.in
@@ -36,8 +36,6 @@
 #include <math.h>
 #endif
 
-#ifndef _MSC_VER
-
 /* Note that this header's correct operation depends on __STDC_LIMIT_MACROS
    being defined.  We would define it here, but in order to prevent Bad Things
    happening when system headers or C++ STL headers include stdint.h before we
@@ -89,40 +87,6 @@ typedef u_int64_t uint64_t;
 #define UINT32_MAX 4294967295U
 #endif
 
-#else /* _MSC_VER */
-/* Visual C++ doesn't provide standard integer headers, but it does provide
-   built-in data types. */
-#include <stdlib.h>
-#include <stddef.h>
-#include <sys/types.h>
-typedef __int64 int64_t;
-typedef unsigned __int64 uint64_t;
-typedef signed int int32_t;
-typedef unsigned int uint32_t;
-typedef short int16_t;
-typedef unsigned short uint16_t;
-typedef signed char int8_t;
-typedef unsigned char uint8_t;
-typedef signed int ssize_t;
-#define INT8_MAX 127
-#define INT8_MIN -128
-#define UINT8_MAX 255
-#define INT16_MAX 32767
-#define INT16_MIN -32768
-#define UINT16_MAX 65535
-#define INT32_MAX 2147483647
-#define INT32_MIN -2147483648
-#define UINT32_MAX 4294967295U
-#define INT8_C(C)   C
-#define UINT8_C(C)  C
-#define INT16_C(C)  C
-#define UINT16_C(C) C
-#define INT32_C(C)  C
-#define UINT32_C(C) C ## U
-#define INT64_C(C)  ((int64_t) C ## LL)
-#define UINT64_C(C) ((uint64_t) C ## ULL)
-#endif /* _MSC_VER */
-
 /* Set defaults for constants which we cannot find. */
 #if !defined(INT64_MAX)
 # define INT64_MAX 9223372036854775807LL
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetData.h b/libclamav/c++/llvm/include/llvm/Target/TargetData.h
index e1d052e..2e63188 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetData.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetData.h
@@ -30,7 +30,6 @@ class Type;
 class IntegerType;
 class StructType;
 class StructLayout;
-class StructLayoutMap;
 class GlobalVariable;
 class LLVMContext;
 
@@ -60,8 +59,6 @@ struct TargetAlignElem {
                              unsigned char pref_align, uint32_t bit_width);
   /// Equality predicate
   bool operator==(const TargetAlignElem &rhs) const;
-  /// output stream operator
-  std::ostream &dump(std::ostream &os) const;
 };
 
 class TargetData : public ImmutablePass {
@@ -86,7 +83,7 @@ private:
   static const TargetAlignElem InvalidAlignmentElem;
 
   // The StructType -> StructLayout map.
-  mutable StructLayoutMap *LayoutMap;
+  mutable void *LayoutMap;
 
   //! Set/initialize target alignments
   void setAlignment(AlignTypeEnum align_type, unsigned char abi_align,
@@ -153,7 +150,7 @@ public:
   /// The width is specified in bits.
   ///
   bool isLegalInteger(unsigned Width) const {
-    for (unsigned i = 0, e = LegalIntWidths.size(); i != e; ++i)
+    for (unsigned i = 0, e = (unsigned)LegalIntWidths.size(); i != e; ++i)
       if (LegalIntWidths[i] == Width)
         return true;
     return false;
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
index 8070d45..91ee923 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
@@ -26,6 +26,7 @@ class LiveVariables;
 class CalleeSavedInfo;
 class SDNode;
 class SelectionDAG;
+class MachineMemOperand;
 
 template<class T> class SmallVectorImpl;
 
@@ -182,11 +183,13 @@ public:
 
   /// hasLoadFromStackSlot - If the specified machine instruction has
   /// a load from a stack slot, return true along with the FrameIndex
-  /// of the loaded stack slot.  If not, return false.  Unlike
+  /// of the loaded stack slot and the machine mem operand containing
+  /// the reference.  If not, return false.  Unlike
   /// isLoadFromStackSlot, this returns true for any instructions that
   /// loads from the stack.  This is just a hint, as some cases may be
   /// missed.
   virtual bool hasLoadFromStackSlot(const MachineInstr *MI,
+                                    const MachineMemOperand *&MMO,
                                     int &FrameIndex) const {
     return 0;
   }
@@ -205,17 +208,18 @@ public:
   /// stack locations as well.  This uses a heuristic so it isn't
   /// reliable for correctness.
   virtual unsigned isStoreToStackSlotPostFE(const MachineInstr *MI,
-                                      int &FrameIndex) const {
+                                            int &FrameIndex) const {
     return 0;
   }
 
   /// hasStoreToStackSlot - If the specified machine instruction has a
   /// store to a stack slot, return true along with the FrameIndex of
-  /// the loaded stack slot.  If not, return false.  Unlike
-  /// isStoreToStackSlot, this returns true for any instructions that
-  /// loads from the stack.  This is just a hint, as some cases may be
-  /// missed.
+  /// the stored stack slot and the machine mem operand containing the
+  /// reference.  If not, return false.  Unlike isStoreToStackSlot,
+  /// this returns true for any instruction that stores to the
+  /// stack.  This is just a hint, as some cases may be missed.
   virtual bool hasStoreToStackSlot(const MachineInstr *MI,
+                                   const MachineMemOperand *&MMO,
                                    int &FrameIndex) const {
     return 0;
   }
@@ -461,14 +465,6 @@ public:
     return 0;
   }
   
-  /// BlockHasNoFallThrough - Return true if the specified block does not
-  /// fall-through into its successor block.  This is primarily used when a
-  /// branch is unanalyzable.  It is useful for things like unconditional
-  /// indirect branches (jump tables).
-  virtual bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
-    return false;
-  }
-  
   /// ReverseBranchCondition - Reverses the branch condition of the specified
   /// condition list, returning false on success and true if it cannot be
   /// reversed.
@@ -543,10 +539,6 @@ public:
   /// length.
   virtual unsigned getInlineAsmLength(const char *Str,
                                       const MCAsmInfo &MAI) const;
-
-  /// isProfitableToDuplicateIndirectBranch - Returns true if tail duplication
-  /// is especially profitable for indirect branches.
-  virtual bool isProfitableToDuplicateIndirectBranch() const { return false; }
 };
 
 /// TargetInstrInfoImpl - This is the default implementation of
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
index ca51102..e4ea5a5 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
@@ -857,12 +857,6 @@ public:
   virtual bool
   isGAPlusOffset(SDNode *N, GlobalValue* &GA, int64_t &Offset) const;
 
-  /// isConsecutiveLoad - Return true if LD is loading 'Bytes' bytes from a 
-  /// location that is 'Dist' units away from the location that the 'Base' load 
-  /// is loading from.
-  bool isConsecutiveLoad(LoadSDNode *LD, LoadSDNode *Base, unsigned Bytes,
-                         int Dist, const MachineFrameInfo *MFI) const;
-
   /// PerformDAGCombine - This method will be invoked for all target nodes and
   /// for any target-independent nodes that the target has registered with
   /// invoke it for.
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h
index cb29c73..dec0b1d 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h
@@ -299,7 +299,7 @@ public:
     /// FirstVirtualRegister - This is the first register number that is
     /// considered to be a 'virtual' register, which is part of the SSA
     /// namespace.  This must be the same for all targets, which means that each
-    /// target is limited to 1024 registers.
+    /// target is limited to this fixed number of registers.
     FirstVirtualRegister = 1024
   };
 
diff --git a/libclamav/c++/llvm/lib/Analysis/CMakeLists.txt b/libclamav/c++/llvm/lib/Analysis/CMakeLists.txt
index 0a83c3d..17c9b86 100644
--- a/libclamav/c++/llvm/lib/Analysis/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Analysis/CMakeLists.txt
@@ -27,6 +27,7 @@ add_llvm_library(LLVMAnalysis
   LoopPass.cpp
   MemoryBuiltins.cpp
   MemoryDependenceAnalysis.cpp
+  PHITransAddr.cpp
   PointerTracking.cpp
   PostDominators.cpp
   ProfileEstimatorPass.cpp
diff --git a/libclamav/c++/llvm/lib/Analysis/CaptureTracking.cpp b/libclamav/c++/llvm/lib/Analysis/CaptureTracking.cpp
index a276c64..10a8b11 100644
--- a/libclamav/c++/llvm/lib/Analysis/CaptureTracking.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/CaptureTracking.cpp
@@ -25,6 +25,16 @@
 #include "llvm/Support/CallSite.h"
 using namespace llvm;
 
+/// As its comment mentions, PointerMayBeCaptured can be expensive.
+/// However, it's not easy for BasicAA to cache the result, because
+/// it's an ImmutablePass. To work around this, bound queries at a
+/// fixed number of uses.
+///
+/// TODO: Write a new FunctionPass AliasAnalysis so that it can keep
+/// a cache. Then we can move the code from BasicAliasAnalysis into
+/// that path, and remove this threshold.
+static int const Threshold = 20;
+
 /// PointerMayBeCaptured - Return true if this pointer value may be captured
 /// by the enclosing function (which is required to exist).  This routine can
 /// be expensive, so consider caching the results.  The boolean ReturnCaptures
@@ -35,11 +45,17 @@ using namespace llvm;
 bool llvm::PointerMayBeCaptured(const Value *V,
                                 bool ReturnCaptures, bool StoreCaptures) {
   assert(isa<PointerType>(V->getType()) && "Capture is for pointers only!");
-  SmallVector<Use*, 16> Worklist;
-  SmallSet<Use*, 16> Visited;
+  SmallVector<Use*, Threshold> Worklist;
+  SmallSet<Use*, Threshold> Visited;
+  int Count = 0;
 
   for (Value::use_const_iterator UI = V->use_begin(), UE = V->use_end();
        UI != UE; ++UI) {
+    // If there are lots of uses, conservatively say that the value
+    // is captured to avoid taking too much compile time.
+    if (Count++ >= Threshold)
+      return true;
+
     Use *U = &UI.getUse();
     Visited.insert(U);
     Worklist.push_back(U);
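The Threshold bound above trades precision for compile time: once more uses have been seen than the cap, the query conservatively reports the pointer as captured. The control flow, sketched over a toy use list (not the LLVM interface):

```cpp
#include <vector>

static const int Threshold = 20;

// Conservatively answer "captured" once the use list exceeds the bound;
// otherwise treat every scanned use as harmless for this sketch.
bool mayBeCaptured(const std::vector<int> &uses) {
  int count = 0;
  for (int use : uses) {
    if (count++ >= Threshold)
      return true; // too many uses: give up and stay conservative
    (void)use;     // a real analysis would classify the use here
  }
  return false;    // every use scanned, none captured the pointer
}
```

Returning true on overflow is safe for an alias analysis: it can only make clients less aggressive, never wrong.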
diff --git a/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp b/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp
index 8d60907..eaf90d0 100644
--- a/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp
@@ -432,7 +432,7 @@ Constant *llvm::ConstantFoldLoadFromConstPtr(Constant *C,
   // Instead of loading constant c string, use corresponding integer value
   // directly if string length is small enough.
   std::string Str;
-  if (TD && GetConstantStringInfo(CE->getOperand(0), Str) && !Str.empty()) {
+  if (TD && GetConstantStringInfo(CE, Str) && !Str.empty()) {
     unsigned StrLen = Str.length();
     const Type *Ty = cast<PointerType>(CE->getType())->getElementType();
     unsigned NumBits = Ty->getPrimitiveSizeInBits();
@@ -564,13 +564,21 @@ static Constant *SymbolicallyEvaluateGEP(Constant *const *Ops, unsigned NumOps,
   // we eliminate over-indexing of the notional static type array bounds.
   // This makes it easy to determine if the getelementptr is "inbounds".
   // Also, this helps GlobalOpt do SROA on GlobalVariables.
+  Ptr = cast<Constant>(Ptr->stripPointerCasts());
   const Type *Ty = Ptr->getType();
   SmallVector<Constant*, 32> NewIdxs;
   do {
     if (const SequentialType *ATy = dyn_cast<SequentialType>(Ty)) {
-      // The only pointer indexing we'll do is on the first index of the GEP.
-      if (isa<PointerType>(ATy) && !NewIdxs.empty())
-        break;
+      if (isa<PointerType>(ATy)) {
+        // The only pointer indexing we'll do is on the first index of the GEP.
+        if (!NewIdxs.empty())
+          break;
+       
+        // Only handle pointers to sized types, not pointers to functions.
+        if (!ATy->getElementType()->isSized())
+          return 0;
+      }
+        
       // Determine which element of the array the offset points into.
       APInt ElemSize(BitWidth, TD->getTypeAllocSize(ATy->getElementType()));
       if (ElemSize == 0)
diff --git a/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp b/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
index 41d803c..1c9f500 100644
--- a/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
@@ -866,7 +866,9 @@ DISubprogram DIFactory::CreateSubprogram(DIDescriptor Context,
                                          DICompileUnit CompileUnit,
                                          unsigned LineNo, DIType Type,
                                          bool isLocalToUnit,
-                                         bool isDefinition) {
+                                         bool isDefinition,
+                                         unsigned VK, unsigned VIndex,
+                                         DIType ContainingType) {
 
   Value *Elts[] = {
     GetTagConstant(dwarf::DW_TAG_subprogram),
@@ -879,9 +881,38 @@ DISubprogram DIFactory::CreateSubprogram(DIDescriptor Context,
     ConstantInt::get(Type::getInt32Ty(VMContext), LineNo),
     Type.getNode(),
     ConstantInt::get(Type::getInt1Ty(VMContext), isLocalToUnit),
-    ConstantInt::get(Type::getInt1Ty(VMContext), isDefinition)
+    ConstantInt::get(Type::getInt1Ty(VMContext), isDefinition),
+    ConstantInt::get(Type::getInt32Ty(VMContext), (unsigned)VK),
+    ConstantInt::get(Type::getInt32Ty(VMContext), VIndex),
+    ContainingType.getNode()
   };
-  return DISubprogram(MDNode::get(VMContext, &Elts[0], 11));
+  return DISubprogram(MDNode::get(VMContext, &Elts[0], 14));
+}
+
+/// CreateSubprogramDefinition - Create new subprogram descriptor for the
+/// given declaration. 
+DISubprogram DIFactory::CreateSubprogramDefinition(DISubprogram &SPDeclaration) {
+  if (SPDeclaration.isDefinition())
+    return DISubprogram(SPDeclaration.getNode());
+
+  MDNode *DeclNode = SPDeclaration.getNode();
+  Value *Elts[] = {
+    GetTagConstant(dwarf::DW_TAG_subprogram),
+    llvm::Constant::getNullValue(Type::getInt32Ty(VMContext)),
+    DeclNode->getElement(2), // Context
+    DeclNode->getElement(3), // Name
+    DeclNode->getElement(4), // DisplayName
+    DeclNode->getElement(5), // LinkageName
+    DeclNode->getElement(6), // CompileUnit
+    DeclNode->getElement(7), // LineNo
+    DeclNode->getElement(8), // Type
+    DeclNode->getElement(9), // isLocalToUnit
+    ConstantInt::get(Type::getInt1Ty(VMContext), true),
+    DeclNode->getElement(11), // Virtuality
+    DeclNode->getElement(12), // VIndex
+    DeclNode->getElement(13)  // Containing Type
+  };
+  return DISubprogram(MDNode::get(VMContext, &Elts[0], 14));
 }
 
 /// CreateGlobalVariable - Create a new descriptor for the specified global.
@@ -1019,6 +1050,37 @@ Instruction *DIFactory::InsertDeclare(Value *Storage, DIVariable D,
   return CallInst::Create(DeclareFn, Args, Args+2, "", InsertAtEnd);
 }
 
+/// InsertDbgValueIntrinsic - Insert a new llvm.dbg.value intrinsic call.
+Instruction *DIFactory::InsertDbgValueIntrinsic(Value *V, Value *Offset,
+                                                DIVariable D,
+                                                Instruction *InsertBefore) {
+  assert(V && "no value passed to dbg.value");
+  assert(Offset->getType() == Type::getInt64Ty(V->getContext()) &&
+         "offset must be i64");
+  if (!ValueFn)
+    ValueFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_value);
+
+  Value *Elts[] = { V };
+  Value *Args[] = { MDNode::get(V->getContext(), Elts, 1), Offset,
+                    D.getNode() };
+  return CallInst::Create(ValueFn, Args, Args+3, "", InsertBefore);
+}
+
+/// InsertDbgValueIntrinsic - Insert a new llvm.dbg.value intrinsic call.
+Instruction *DIFactory::InsertDbgValueIntrinsic(Value *V, Value *Offset,
+                                                DIVariable D,
+                                                BasicBlock *InsertAtEnd) {
+  assert(V && "no value passed to dbg.value");
+  assert(Offset->getType() == Type::getInt64Ty(V->getContext()) &&
+         "offset must be i64");
+  if (!ValueFn)
+    ValueFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_value);
+
+  Value *Elts[] = { V };
+  Value *Args[] = { MDNode::get(V->getContext(), Elts, 1), Offset,
+                    D.getNode() };
+  return CallInst::Create(ValueFn, Args, Args+3, "", InsertAtEnd);
+}
 
 //===----------------------------------------------------------------------===//
 // DebugInfoFinder implementations.
diff --git a/libclamav/c++/llvm/lib/Analysis/DomPrinter.cpp b/libclamav/c++/llvm/lib/Analysis/DomPrinter.cpp
index f1b44d0..32b8994 100644
--- a/libclamav/c++/llvm/lib/Analysis/DomPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/DomPrinter.cpp
@@ -30,46 +30,55 @@ using namespace llvm;
 namespace llvm {
 template<>
 struct DOTGraphTraits<DomTreeNode*> : public DefaultDOTGraphTraits {
-  static std::string getNodeLabel(DomTreeNode *Node, DomTreeNode *Graph,
-                                  bool ShortNames) {
+
+  DOTGraphTraits (bool isSimple=false)
+    : DefaultDOTGraphTraits(isSimple) {}
+
+  std::string getNodeLabel(DomTreeNode *Node, DomTreeNode *Graph) {
 
     BasicBlock *BB = Node->getBlock();
 
     if (!BB)
       return "Post dominance root node";
 
-    return DOTGraphTraits<const Function*>::getNodeLabel(BB, BB->getParent(),
-                                                         ShortNames);
+
+    if (isSimple())
+      return DOTGraphTraits<const Function*>
+               ::getSimpleNodeLabel(BB, BB->getParent());
+    else
+      return DOTGraphTraits<const Function*>
+               ::getCompleteNodeLabel(BB, BB->getParent());
   }
 };
 
 template<>
 struct DOTGraphTraits<DominatorTree*> : public DOTGraphTraits<DomTreeNode*> {
 
+  DOTGraphTraits (bool isSimple=false)
+    : DOTGraphTraits<DomTreeNode*>(isSimple) {}
+
   static std::string getGraphName(DominatorTree *DT) {
     return "Dominator tree";
   }
 
-  static std::string getNodeLabel(DomTreeNode *Node,
-                                  DominatorTree *G,
-                                  bool ShortNames) {
-    return DOTGraphTraits<DomTreeNode*>::getNodeLabel(Node, G->getRootNode(),
-                                                      ShortNames);
+  std::string getNodeLabel(DomTreeNode *Node, DominatorTree *G) {
+    return DOTGraphTraits<DomTreeNode*>::getNodeLabel(Node, G->getRootNode());
   }
 };
 
 template<>
 struct DOTGraphTraits<PostDominatorTree*>
   : public DOTGraphTraits<DomTreeNode*> {
+
+  DOTGraphTraits (bool isSimple=false)
+    : DOTGraphTraits<DomTreeNode*>(isSimple) {}
+
   static std::string getGraphName(PostDominatorTree *DT) {
     return "Post dominator tree";
   }
-  static std::string getNodeLabel(DomTreeNode *Node,
-                                  PostDominatorTree *G,
-                                  bool ShortNames) {
-    return DOTGraphTraits<DomTreeNode*>::getNodeLabel(Node,
-                                                      G->getRootNode(),
-                                                      ShortNames);
+
+  std::string getNodeLabel(DomTreeNode *Node, PostDominatorTree *G) {
+    return DOTGraphTraits<DomTreeNode*>::getNodeLabel(Node, G->getRootNode());
   }
 };
 }
@@ -85,9 +94,11 @@ struct GenericGraphViewer : public FunctionPass {
 
   virtual bool runOnFunction(Function &F) {
     Analysis *Graph;
-
+    std::string Title, GraphName;
     Graph = &getAnalysis<Analysis>();
-    ViewGraph(Graph, Name, OnlyBBS);
+    GraphName = DOTGraphTraits<Analysis*>::getGraphName(Graph);
+    Title = GraphName + " for '" + F.getNameStr() + "' function";
+    ViewGraph(Graph, Name, OnlyBBS, Title);
 
     return false;
   }
@@ -163,8 +174,12 @@ struct GenericGraphPrinter : public FunctionPass {
     raw_fd_ostream File(Filename.c_str(), ErrorInfo);
     Graph = &getAnalysis<Analysis>();
 
+    std::string Title, GraphName;
+    GraphName = DOTGraphTraits<Analysis*>::getGraphName(Graph);
+    Title = GraphName + " for '" + F.getNameStr() + "' function";
+
     if (ErrorInfo.empty())
-      WriteGraph(File, Graph, OnlyBBS);
+      WriteGraph(File, Graph, OnlyBBS, Name, Title);
     else
       errs() << "  error opening file for writing!";
     errs() << "\n";
diff --git a/libclamav/c++/llvm/lib/Analysis/InstructionSimplify.cpp b/libclamav/c++/llvm/lib/Analysis/InstructionSimplify.cpp
index 7a7eb6b..b53ac13 100644
--- a/libclamav/c++/llvm/lib/Analysis/InstructionSimplify.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/InstructionSimplify.cpp
@@ -21,13 +21,41 @@
 using namespace llvm;
 using namespace llvm::PatternMatch;
 
-/// SimplifyAndInst - Given operands for an And, see if we can
+/// SimplifyAddInst - Given operands for an Add, see if we can
 /// fold the result.  If not, this returns null.
-Value *llvm::SimplifyAndInst(Value *Op0, Value *Op1,
+Value *llvm::SimplifyAddInst(Value *Op0, Value *Op1, bool isNSW, bool isNUW,
                              const TargetData *TD) {
   if (Constant *CLHS = dyn_cast<Constant>(Op0)) {
     if (Constant *CRHS = dyn_cast<Constant>(Op1)) {
       Constant *Ops[] = { CLHS, CRHS };
+      return ConstantFoldInstOperands(Instruction::Add, CLHS->getType(),
+                                      Ops, 2, TD);
+    }
+    
+    // Canonicalize the constant to the RHS.
+    std::swap(Op0, Op1);
+  }
+  
+  if (Constant *Op1C = dyn_cast<Constant>(Op1)) {
+    // X + undef -> undef
+    if (isa<UndefValue>(Op1C))
+      return Op1C;
+    
+    // X + 0 --> X
+    if (Op1C->isNullValue())
+      return Op0;
+  }
+  
+  // FIXME: Could pull several more out of instcombine.
+  return 0;
+}
+
+/// SimplifyAndInst - Given operands for an And, see if we can
+/// fold the result.  If not, this returns null.
+Value *llvm::SimplifyAndInst(Value *Op0, Value *Op1, const TargetData *TD) {
+  if (Constant *CLHS = dyn_cast<Constant>(Op0)) {
+    if (Constant *CRHS = dyn_cast<Constant>(Op1)) {
+      Constant *Ops[] = { CLHS, CRHS };
       return ConstantFoldInstOperands(Instruction::And, CLHS->getType(),
                                       Ops, 2, TD);
     }
@@ -83,8 +111,7 @@ Value *llvm::SimplifyAndInst(Value *Op0, Value *Op1,
 
 /// SimplifyOrInst - Given operands for an Or, see if we can
 /// fold the result.  If not, this returns null.
-Value *llvm::SimplifyOrInst(Value *Op0, Value *Op1,
-                            const TargetData *TD) {
+Value *llvm::SimplifyOrInst(Value *Op0, Value *Op1, const TargetData *TD) {
   if (Constant *CLHS = dyn_cast<Constant>(Op0)) {
     if (Constant *CRHS = dyn_cast<Constant>(Op1)) {
       Constant *Ops[] = { CLHS, CRHS };
@@ -142,8 +169,6 @@ Value *llvm::SimplifyOrInst(Value *Op0, Value *Op1,
 }
 
 
-
-
 static const Type *GetCompareTy(Value *Op) {
   return CmpInst::makeCmpResultType(Op->getType());
 }
@@ -327,6 +352,10 @@ Value *llvm::SimplifyInstruction(Instruction *I, const TargetData *TD) {
   switch (I->getOpcode()) {
   default:
     return ConstantFoldInstruction(I, TD);
+  case Instruction::Add:
+    return SimplifyAddInst(I->getOperand(0), I->getOperand(1),
+                           cast<BinaryOperator>(I)->hasNoSignedWrap(),
+                           cast<BinaryOperator>(I)->hasNoUnsignedWrap(), TD);
   case Instruction::And:
     return SimplifyAndInst(I->getOperand(0), I->getOperand(1), TD);
   case Instruction::Or:
diff --git a/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp b/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp
index 4de756c..34089ee 100644
--- a/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/LoopInfo.cpp
@@ -316,12 +316,12 @@ bool Loop::hasDedicatedExits() const {
 
 /// getUniqueExitBlocks - Return all unique successor blocks of this loop.
 /// These are the blocks _outside of the current loop_ which are branched to.
-/// This assumes that loop is in canonical form.
+/// This assumes that loop exits are in canonical form.
 ///
 void
 Loop::getUniqueExitBlocks(SmallVectorImpl<BasicBlock *> &ExitBlocks) const {
-  assert(isLoopSimplifyForm() &&
-         "getUniqueExitBlocks assumes the loop is in canonical form!");
+  assert(hasDedicatedExits() &&
+         "getUniqueExitBlocks assumes the loop has canonical form exits!");
 
   // Sort the blocks vector so that we can use binary search to do quick
   // lookups.
diff --git a/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp b/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
index f958e75..a0c7706 100644
--- a/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
@@ -20,8 +20,10 @@
 #include "llvm/IntrinsicInst.h"
 #include "llvm/Function.h"
 #include "llvm/Analysis/AliasAnalysis.h"
+#include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/InstructionSimplify.h"
 #include "llvm/Analysis/MemoryBuiltins.h"
+#include "llvm/Analysis/PHITransAddr.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/Support/PredIteratorCache.h"
@@ -171,7 +173,7 @@ MemDepResult MemoryDependenceAnalysis::
 getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad, 
                          BasicBlock::iterator ScanIt, BasicBlock *BB) {
 
-  Value *invariantTag = 0;
+  Value *InvariantTag = 0;
 
   // Walk backwards through the basic block, looking for dependencies.
   while (ScanIt != BB->begin()) {
@@ -179,34 +181,36 @@ getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad,
 
     // If we're in an invariant region, no dependencies can be found before
     // we pass an invariant-begin marker.
-    if (invariantTag == Inst) {
-      invariantTag = 0;
+    if (InvariantTag == Inst) {
+      InvariantTag = 0;
       continue;
-    } else if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(Inst)) {
+    }
+    
+    if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(Inst)) {
+      // Debug intrinsics don't cause dependences.
+      if (isa<DbgInfoIntrinsic>(Inst)) continue;
+      
       // If we pass an invariant-end marker, then we've just entered an
       // invariant region and can start ignoring dependencies.
       if (II->getIntrinsicID() == Intrinsic::invariant_end) {
-        uint64_t invariantSize = ~0ULL;
-        if (ConstantInt *CI = dyn_cast<ConstantInt>(II->getOperand(2)))
-          invariantSize = CI->getZExtValue();
-        
-        AliasAnalysis::AliasResult R =
-          AA->alias(II->getOperand(3), invariantSize, MemPtr, MemSize);
+        // FIXME: This only considers queries directly on the invariant-tagged
+        // pointer, not on query pointers that are indexed off of them.  It'd
+        // be nice to handle that at some point.
+        AliasAnalysis::AliasResult R = 
+          AA->alias(II->getOperand(3), ~0U, MemPtr, ~0U);
         if (R == AliasAnalysis::MustAlias) {
-          invariantTag = II->getOperand(1);
+          InvariantTag = II->getOperand(1);
           continue;
         }
       
       // If we reach a lifetime begin or end marker, then the query ends here
       // because the value is undefined.
-      } else if (II->getIntrinsicID() == Intrinsic::lifetime_start ||
-                   II->getIntrinsicID() == Intrinsic::lifetime_end) {
-        uint64_t invariantSize = ~0ULL;
-        if (ConstantInt *CI = dyn_cast<ConstantInt>(II->getOperand(1)))
-          invariantSize = CI->getZExtValue();
-
+      } else if (II->getIntrinsicID() == Intrinsic::lifetime_start) {
+        // FIXME: This only considers queries directly on the invariant-tagged
+        // pointer, not on query pointers that are indexed off of them.  It'd
+        // be nice to handle that at some point.
         AliasAnalysis::AliasResult R =
-          AA->alias(II->getOperand(2), invariantSize, MemPtr, MemSize);
+          AA->alias(II->getOperand(2), ~0U, MemPtr, ~0U);
         if (R == AliasAnalysis::MustAlias)
           return MemDepResult::getDef(II);
       }
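The intrinsic handling above runs inside a backward scan of the block: encountering `invariant_end` (first, since the walk is in reverse) sets `InvariantTag`, and reaching the matching start instruction clears it, so dependencies inside the invariant region are suppressed. A toy backward scan over string-named "instructions" capturing that shape (the marker names are illustrative, not real LLVM intrinsics):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Returns the index of the nearest preceding "store" when scanning the
// block in reverse, skipping stores that sit inside an invariant region,
// i.e. between "inv_end" and "inv_start" in reverse order.
int nearest_store_dep(const std::vector<std::string> &block) {
    bool in_invariant = false;  // analogue of InvariantTag being set
    for (int i = (int)block.size() - 1; i >= 0; --i) {
        const std::string &inst = block[i];
        if (inst == "inv_end")   { in_invariant = true;  continue; }
        if (inst == "inv_start") { in_invariant = false; continue; }
        if (inst == "store" && !in_invariant)
            return i;  // found the dependency
    }
    return -1;  // no local dependency: analogue of a non-local result
}
```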
@@ -214,10 +218,7 @@ getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad,
 
     // If we're querying on a load and we're in an invariant region, we're done
     // at this point. Nothing a load depends on can live in an invariant region.
-    if (isLoad && invariantTag) continue;
-
-    // Debug intrinsics don't cause dependences.
-    if (isa<DbgInfoIntrinsic>(Inst)) continue;
+    if (isLoad && InvariantTag) continue;
 
     // Values depend on loads if the pointers are must aliased.  This means that
     // a load depends on another must aliased load from the same value.
@@ -242,7 +243,7 @@ getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad,
     if (StoreInst *SI = dyn_cast<StoreInst>(Inst)) {
       // There can't be stores to the value we care about inside an 
       // invariant region.
-      if (invariantTag) continue;
+      if (InvariantTag) continue;
       
       // If alias analysis can tell that this store is guaranteed to not modify
       // the query pointer, ignore it.  Use getModRefInfo to handle cases where
@@ -291,7 +292,7 @@ getPointerDependencyFrom(Value *MemPtr, uint64_t MemSize, bool isLoad,
     case AliasAnalysis::Mod:
       // If we're in an invariant region, we can ignore calls that ONLY
       // modify the pointer.
-      if (invariantTag) continue;
+      if (InvariantTag) continue;
       return MemDepResult::getClobber(Inst);
     case AliasAnalysis::Ref:
       // If the call is known to never store to the pointer, and if this is a
@@ -368,20 +369,42 @@ MemDepResult MemoryDependenceAnalysis::getDependency(Instruction *QueryInst) {
     // calls to free() erase the entire structure, not just a field.
     MemSize = ~0UL;
   } else if (isa<CallInst>(QueryInst) || isa<InvokeInst>(QueryInst)) {
-    CallSite QueryCS = CallSite::get(QueryInst);
-    bool isReadOnly = AA->onlyReadsMemory(QueryCS);
-    LocalCache = getCallSiteDependencyFrom(QueryCS, isReadOnly, ScanPos,
-                                           QueryParent);
+    int IntrinsicID = 0;  // Intrinsic IDs start at 1.
+    if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(QueryInst))
+      IntrinsicID = II->getIntrinsicID();
+
+    switch (IntrinsicID) {
+    case Intrinsic::lifetime_start:
+    case Intrinsic::lifetime_end:
+    case Intrinsic::invariant_start:
+      MemPtr = QueryInst->getOperand(2);
+      MemSize = cast<ConstantInt>(QueryInst->getOperand(1))->getZExtValue();
+      break;
+    case Intrinsic::invariant_end:
+      MemPtr = QueryInst->getOperand(3);
+      MemSize = cast<ConstantInt>(QueryInst->getOperand(2))->getZExtValue();
+      break;
+    default:
+      CallSite QueryCS = CallSite::get(QueryInst);
+      bool isReadOnly = AA->onlyReadsMemory(QueryCS);
+      LocalCache = getCallSiteDependencyFrom(QueryCS, isReadOnly, ScanPos,
+                                             QueryParent);
+      break;
+    }
   } else {
     // Non-memory instruction.
     LocalCache = MemDepResult::getClobber(--BasicBlock::iterator(ScanPos));
   }
   
   // If we need to do a pointer scan, make it happen.
-  if (MemPtr)
-    LocalCache = getPointerDependencyFrom(MemPtr, MemSize, 
-                                          isa<LoadInst>(QueryInst),
-                                          ScanPos, QueryParent);
+  if (MemPtr) {
+    bool isLoad = !QueryInst->mayWriteToMemory();
+    if (IntrinsicInst *II = dyn_cast<MemoryUseIntrinsic>(QueryInst)) {
+      isLoad |= II->getIntrinsicID() == Intrinsic::lifetime_end;
+    }
+    LocalCache = getPointerDependencyFrom(MemPtr, MemSize, isLoad, ScanPos,
+                                          QueryParent);
+  }
   
   // Remember the result!
   if (Instruction *I = LocalCache.getInst())
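The getDependency hunk special-cases the memory-marker intrinsics by pulling the pointer and size straight out of the call's operands (different slots for `invariant_end` than for the others) instead of treating the call as an opaque call site. A toy version of that dispatch; the enum values and operand layouts are illustrative only, not the real LLVM intrinsic numbering:

```cpp
#include <cassert>
#include <utility>

enum Intrinsic { None = 0, LifetimeStart, LifetimeEnd, InvariantStart, InvariantEnd };

// Returns {ptr_operand_index, size_operand_index}, or {-1, -1} for a
// plain call that must go through generic call-site dependency analysis.
std::pair<int, int> mem_operands(Intrinsic id) {
    switch (id) {
    case LifetimeStart:
    case LifetimeEnd:
    case InvariantStart:
        return {2, 1};  // MemPtr = operand 2, MemSize = operand 1
    case InvariantEnd:
        return {3, 2};  // MemPtr = operand 3, MemSize = operand 2
    default:
        return {-1, -1};  // fall back to getCallSiteDependencyFrom
    }
}
```

Routing these intrinsics through the pointer-scan path (rather than the call-site path) is what lets the analysis answer "what does this lifetime marker depend on" with ordinary memory-dependence machinery.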
@@ -399,7 +422,7 @@ static void AssertSorted(MemoryDependenceAnalysis::NonLocalDepInfo &Cache,
   if (Count == 0) return;
 
   for (unsigned i = 1; i != unsigned(Count); ++i)
-    assert(Cache[i-1] <= Cache[i] && "Cache isn't sorted!");
+    assert(!(Cache[i] < Cache[i-1]) && "Cache isn't sorted!");
 }
 #endif
 
@@ -440,8 +463,8 @@ MemoryDependenceAnalysis::getNonLocalCallDependency(CallSite QueryCS) {
     // determine what is dirty, seeding our initial DirtyBlocks worklist.
     for (NonLocalDepInfo::iterator I = Cache.begin(), E = Cache.end();
        I != E; ++I)
-      if (I->second.isDirty())
-        DirtyBlocks.push_back(I->first);
+      if (I->getResult().isDirty())
+        DirtyBlocks.push_back(I->getBB());
     
     // Sort the cache so that we can do fast binary search lookups below.
     std::sort(Cache.begin(), Cache.end());
@@ -479,27 +502,27 @@ MemoryDependenceAnalysis::getNonLocalCallDependency(CallSite QueryCS) {
     DEBUG(AssertSorted(Cache, NumSortedEntries));
     NonLocalDepInfo::iterator Entry = 
       std::upper_bound(Cache.begin(), Cache.begin()+NumSortedEntries,
-                       std::make_pair(DirtyBB, MemDepResult()));
-    if (Entry != Cache.begin() && prior(Entry)->first == DirtyBB)
+                       NonLocalDepEntry(DirtyBB));
+    if (Entry != Cache.begin() && prior(Entry)->getBB() == DirtyBB)
       --Entry;
     
-    MemDepResult *ExistingResult = 0;
+    NonLocalDepEntry *ExistingResult = 0;
     if (Entry != Cache.begin()+NumSortedEntries && 
-        Entry->first == DirtyBB) {
+        Entry->getBB() == DirtyBB) {
       // If we already have an entry, and if it isn't already dirty, the block
       // is done.
-      if (!Entry->second.isDirty())
+      if (!Entry->getResult().isDirty())
         continue;
       
       // Otherwise, remember this slot so we can update the value.
-      ExistingResult = &Entry->second;
+      ExistingResult = &*Entry;
     }
     
     // If the dirty entry has a pointer, start scanning from it so we don't have
     // to rescan the entire block.
     BasicBlock::iterator ScanPos = DirtyBB->end();
     if (ExistingResult) {
-      if (Instruction *Inst = ExistingResult->getInst()) {
+      if (Instruction *Inst = ExistingResult->getResult().getInst()) {
         ScanPos = Inst;
         // We're removing QueryInst's use of Inst.
         RemoveFromReverseMap(ReverseNonLocalDeps, Inst,
@@ -523,9 +546,9 @@ MemoryDependenceAnalysis::getNonLocalCallDependency(CallSite QueryCS) {
     // If we had a dirty entry for the block, update it.  Otherwise, just add
     // a new entry.
     if (ExistingResult)
-      *ExistingResult = Dep;
+      ExistingResult->setResult(Dep, 0);
     else
-      Cache.push_back(std::make_pair(DirtyBB, Dep));
+      Cache.push_back(NonLocalDepEntry(DirtyBB, Dep, 0));
     
     // If the block has a dependency (i.e. it isn't completely transparent to
     // the value), remember the association!
@@ -565,17 +588,20 @@ getNonLocalPointerDependency(Value *Pointer, bool isLoad, BasicBlock *FromBB,
   const Type *EltTy = cast<PointerType>(Pointer->getType())->getElementType();
   uint64_t PointeeSize = AA->getTypeStoreSize(EltTy);
   
+  PHITransAddr Address(Pointer, TD);
+  
   // This is the set of blocks we've inspected, and the pointer we consider in
   // each block.  Because of critical edges, we currently bail out if querying
   // a block with multiple different pointers.  This can happen during PHI
   // translation.
   DenseMap<BasicBlock*, Value*> Visited;
-  if (!getNonLocalPointerDepFromBB(Pointer, PointeeSize, isLoad, FromBB,
+  if (!getNonLocalPointerDepFromBB(Address, PointeeSize, isLoad, FromBB,
                                    Result, Visited, true))
     return;
   Result.clear();
-  Result.push_back(std::make_pair(FromBB,
-                                  MemDepResult::getClobber(FromBB->begin())));
+  Result.push_back(NonLocalDepEntry(FromBB,
+                                    MemDepResult::getClobber(FromBB->begin()),
+                                    Pointer));
 }
 
 /// GetNonLocalInfoForBlock - Compute the memdep value for BB with
@@ -591,30 +617,30 @@ GetNonLocalInfoForBlock(Value *Pointer, uint64_t PointeeSize,
   // the cache set.  If so, find it.
   NonLocalDepInfo::iterator Entry =
     std::upper_bound(Cache->begin(), Cache->begin()+NumSortedEntries,
-                     std::make_pair(BB, MemDepResult()));
-  if (Entry != Cache->begin() && prior(Entry)->first == BB)
+                     NonLocalDepEntry(BB));
+  if (Entry != Cache->begin() && (Entry-1)->getBB() == BB)
     --Entry;
   
-  MemDepResult *ExistingResult = 0;
-  if (Entry != Cache->begin()+NumSortedEntries && Entry->first == BB)
-    ExistingResult = &Entry->second;
+  NonLocalDepEntry *ExistingResult = 0;
+  if (Entry != Cache->begin()+NumSortedEntries && Entry->getBB() == BB)
+    ExistingResult = &*Entry;
   
   // If we have a cached entry, and it is non-dirty, use it as the value for
   // this dependency.
-  if (ExistingResult && !ExistingResult->isDirty()) {
+  if (ExistingResult && !ExistingResult->getResult().isDirty()) {
     ++NumCacheNonLocalPtr;
-    return *ExistingResult;
+    return ExistingResult->getResult();
   }    
   
   // Otherwise, we have to scan for the value.  If we have a dirty cache
   // entry, start scanning from its position, otherwise we scan from the end
   // of the block.
   BasicBlock::iterator ScanPos = BB->end();
-  if (ExistingResult && ExistingResult->getInst()) {
-    assert(ExistingResult->getInst()->getParent() == BB &&
+  if (ExistingResult && ExistingResult->getResult().getInst()) {
+    assert(ExistingResult->getResult().getInst()->getParent() == BB &&
            "Instruction invalidated?");
     ++NumCacheDirtyNonLocalPtr;
-    ScanPos = ExistingResult->getInst();
+    ScanPos = ExistingResult->getResult().getInst();
     
     // Eliminating the dirty entry from 'Cache', so update the reverse info.
     ValueIsLoadPair CacheKey(Pointer, isLoad);
@@ -630,9 +656,9 @@ GetNonLocalInfoForBlock(Value *Pointer, uint64_t PointeeSize,
   // If we had a dirty entry for the block, update it.  Otherwise, just add
   // a new entry.
   if (ExistingResult)
-    *ExistingResult = Dep;
+    ExistingResult->setResult(Dep, Pointer);
   else
-    Cache->push_back(std::make_pair(BB, Dep));
+    Cache->push_back(NonLocalDepEntry(BB, Dep, Pointer));
   
   // If the block has a dependency (i.e. it isn't completely transparent to
   // the value), remember the reverse association because we just added it
@@ -661,7 +687,7 @@ SortNonLocalDepInfoCache(MemoryDependenceAnalysis::NonLocalDepInfo &Cache,
     break;
   case 2: {
     // Two new entries, insert the last one into place.
-    MemoryDependenceAnalysis::NonLocalDepEntry Val = Cache.back();
+    NonLocalDepEntry Val = Cache.back();
     Cache.pop_back();
     MemoryDependenceAnalysis::NonLocalDepInfo::iterator Entry =
       std::upper_bound(Cache.begin(), Cache.end()-1, Val);
@@ -671,7 +697,7 @@ SortNonLocalDepInfoCache(MemoryDependenceAnalysis::NonLocalDepInfo &Cache,
   case 1:
     // One new entry, Just insert the new value at the appropriate position.
     if (Cache.size() != 1) {
-      MemoryDependenceAnalysis::NonLocalDepEntry Val = Cache.back();
+      NonLocalDepEntry Val = Cache.back();
       Cache.pop_back();
       MemoryDependenceAnalysis::NonLocalDepInfo::iterator Entry =
         std::upper_bound(Cache.begin(), Cache.end(), Val);
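SortNonLocalDepInfoCache keeps the cache sorted cheaply: when only one or two entries were appended past the sorted prefix, it re-inserts them with `std::upper_bound` rather than paying for a full `std::sort`. A sketch of the single-append case on plain ints (the real code operates on `NonLocalDepEntry`):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Restore sortedness after exactly one element was appended to an
// otherwise-sorted vector: pop it, binary-search its slot, insert it.
void restore_sorted_after_append(std::vector<int> &cache) {
    if (cache.size() < 2) return;
    int val = cache.back();
    cache.pop_back();
    auto pos = std::upper_bound(cache.begin(), cache.end(), val);
    cache.insert(pos, val);  // one O(n) shift instead of an O(n log n) sort
}
```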
@@ -685,171 +711,6 @@ SortNonLocalDepInfoCache(MemoryDependenceAnalysis::NonLocalDepInfo &Cache,
   }
 }
 
-/// isPHITranslatable - Return true if the specified computation is derived from
-/// a PHI node in the current block and if it is simple enough for us to handle.
-static bool isPHITranslatable(Instruction *Inst) {
-  if (isa<PHINode>(Inst))
-    return true;
-  
-  // We can handle bitcast of a PHI, but the PHI needs to be in the same block
-  // as the bitcast.
-  if (BitCastInst *BC = dyn_cast<BitCastInst>(Inst))
-    if (PHINode *PN = dyn_cast<PHINode>(BC->getOperand(0)))
-      if (PN->getParent() == BC->getParent())
-        return true;
-  
-  // We can translate a GEP that uses a PHI in the current block for at least
-  // one of its operands.
-  if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
-    for (unsigned i = 0, e = GEP->getNumOperands(); i != e; ++i)
-      if (PHINode *PN = dyn_cast<PHINode>(GEP->getOperand(i)))
-        if (PN->getParent() == GEP->getParent())
-          return true;
-  }
-
-  //   cerr << "MEMDEP: Could not PHI translate: " << *Pointer;
-  //   if (isa<BitCastInst>(PtrInst) || isa<GetElementPtrInst>(PtrInst))
-  //     cerr << "OP:\t\t\t\t" << *PtrInst->getOperand(0);
-  
-  return false;
-}
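The removed `isPHITranslatable`/`PHITranslatePointer` pair (now subsumed by the `PHITransAddr` class) implemented the core idea that, for a value defined by a PHI in the current block, the version visible in predecessor `P` is the PHI's incoming value for `P`. A hypothetical string-keyed sketch of that base case, not LLVM API:

```cpp
#include <cassert>
#include <map>
#include <string>

// A toy PHI: maps predecessor block name -> incoming value name.
using Phi = std::map<std::string, std::string>;

// Translate the PHI-defined value into predecessor `pred`.  Returns the
// empty string on failure, the analogue of the removed code's null return.
std::string phi_translate(const Phi &phi, const std::string &pred) {
    auto it = phi.find(pred);
    return it == phi.end() ? std::string() : it->second;
}
```

The harder cases the removed code handled (bitcasts and GEPs whose operands are PHIs) reduce to applying this substitution to each operand and then looking for, or re-simplifying, the resulting expression in the predecessor.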
-
-/// PHITranslateForPred - Given a computation that satisfied the
-/// isPHITranslatable predicate, see if we can translate the computation into
-/// the specified predecessor block.  If so, return that value.
-Value *MemoryDependenceAnalysis::
-PHITranslatePointer(Value *InVal, BasicBlock *CurBB, BasicBlock *Pred,
-                    const TargetData *TD) const {  
-  // If the input value is not an instruction, or if it is not defined in CurBB,
-  // then we don't need to phi translate it.
-  Instruction *Inst = dyn_cast<Instruction>(InVal);
-  if (Inst == 0 || Inst->getParent() != CurBB)
-    return InVal;
-  
-  if (PHINode *PN = dyn_cast<PHINode>(Inst))
-    return PN->getIncomingValueForBlock(Pred);
-  
-  // Handle bitcast of PHI.
-  if (BitCastInst *BC = dyn_cast<BitCastInst>(Inst)) {
-    PHINode *BCPN = cast<PHINode>(BC->getOperand(0));
-    Value *PHIIn = BCPN->getIncomingValueForBlock(Pred);
-    
-    // Constants are trivial to phi translate.
-    if (Constant *C = dyn_cast<Constant>(PHIIn))
-      return ConstantExpr::getBitCast(C, BC->getType());
-    
-    // Otherwise we have to see if a bitcasted version of the incoming pointer
-    // is available.  If so, we can use it, otherwise we have to fail.
-    for (Value::use_iterator UI = PHIIn->use_begin(), E = PHIIn->use_end();
-         UI != E; ++UI) {
-      if (BitCastInst *BCI = dyn_cast<BitCastInst>(*UI))
-        if (BCI->getType() == BC->getType())
-          return BCI;
-    }
-    return 0;
-  }
-
-  // Handle getelementptr with at least one PHI operand.
-  if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
-    SmallVector<Value*, 8> GEPOps;
-    BasicBlock *CurBB = GEP->getParent();
-    for (unsigned i = 0, e = GEP->getNumOperands(); i != e; ++i) {
-      Value *GEPOp = GEP->getOperand(i);
-      // No PHI translation is needed of operands whose values are live in to
-      // the predecessor block.
-      if (!isa<Instruction>(GEPOp) ||
-          cast<Instruction>(GEPOp)->getParent() != CurBB) {
-        GEPOps.push_back(GEPOp);
-        continue;
-      }
-      
-      // If the operand is a phi node, do phi translation.
-      if (PHINode *PN = dyn_cast<PHINode>(GEPOp)) {
-        GEPOps.push_back(PN->getIncomingValueForBlock(Pred));
-        continue;
-      }
-      
-      // Otherwise, we can't PHI translate this random value defined in this
-      // block.
-      return 0;
-    }
-    
-    // Simplify the GEP to handle 'gep x, 0' -> x etc.
-    if (Value *V = SimplifyGEPInst(&GEPOps[0], GEPOps.size(), TD))
-      return V;
-
-
-    // Scan to see if we have this GEP available.
-    Value *APHIOp = GEPOps[0];
-    for (Value::use_iterator UI = APHIOp->use_begin(), E = APHIOp->use_end();
-         UI != E; ++UI) {
-      if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(*UI))
-        if (GEPI->getType() == GEP->getType() &&
-            GEPI->getNumOperands() == GEPOps.size() &&
-            GEPI->getParent()->getParent() == CurBB->getParent()) {
-          bool Mismatch = false;
-          for (unsigned i = 0, e = GEPOps.size(); i != e; ++i)
-            if (GEPI->getOperand(i) != GEPOps[i]) {
-              Mismatch = true;
-              break;
-            }
-          if (!Mismatch)
-            return GEPI;
-        }
-    }
-    return 0;
-  }
-  
-  return 0;
-}
-
-/// InsertPHITranslatedPointer - Insert a computation of the PHI translated
-/// version of 'V' for the edge PredBB->CurBB into the end of the PredBB
-/// block.
-///
-/// This is only called when PHITranslatePointer returns a value that doesn't
-/// dominate the block, so we don't need to handle the trivial cases here.
-Value *MemoryDependenceAnalysis::
-InsertPHITranslatedPointer(Value *InVal, BasicBlock *CurBB,
-                           BasicBlock *PredBB, const TargetData *TD) const {
-  // If the input value isn't an instruction in CurBB, it doesn't need phi
-  // translation.
-  Instruction *Inst = cast<Instruction>(InVal);
-  assert(Inst->getParent() == CurBB && "Doesn't need phi trans");
-
-  // Handle bitcast of PHI.
-  if (BitCastInst *BC = dyn_cast<BitCastInst>(Inst)) {
-    PHINode *BCPN = cast<PHINode>(BC->getOperand(0));
-    Value *PHIIn = BCPN->getIncomingValueForBlock(PredBB);
-    
-    // Otherwise insert a bitcast at the end of PredBB.
-    return new BitCastInst(PHIIn, InVal->getType(),
-                           InVal->getName()+".phi.trans.insert",
-                           PredBB->getTerminator());
-  }
-  
-  // Handle getelementptr with at least one PHI operand.
-  if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
-    SmallVector<Value*, 8> GEPOps;
-    Value *APHIOp = 0;
-    BasicBlock *CurBB = GEP->getParent();
-    for (unsigned i = 0, e = GEP->getNumOperands(); i != e; ++i) {
-      GEPOps.push_back(GEP->getOperand(i)->DoPHITranslation(CurBB, PredBB));
-      if (!isa<Constant>(GEPOps.back()))
-        APHIOp = GEPOps.back();
-    }
-    
-    GetElementPtrInst *Result = 
-      GetElementPtrInst::Create(GEPOps[0], GEPOps.begin()+1, GEPOps.end(),
-                                InVal->getName()+".phi.trans.insert",
-                                PredBB->getTerminator());
-    Result->setIsInBounds(GEP->isInBounds());
-    return Result;
-  }
-  
-  return 0;
-}
-
 /// getNonLocalPointerDepFromBB - Perform a dependency query based on
 /// pointer/pointeesize starting at the end of StartBB.  Add any clobber/def
 /// results to the results vector and keep track of which blocks are visited in
@@ -863,14 +724,14 @@ InsertPHITranslatedPointer(Value *InVal, BasicBlock *CurBB,
 /// not compute dependence information for some reason.  This should be treated
 /// as a clobber dependence on the first instruction in the predecessor block.
 bool MemoryDependenceAnalysis::
-getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
+getNonLocalPointerDepFromBB(const PHITransAddr &Pointer, uint64_t PointeeSize,
                             bool isLoad, BasicBlock *StartBB,
                             SmallVectorImpl<NonLocalDepEntry> &Result,
                             DenseMap<BasicBlock*, Value*> &Visited,
                             bool SkipFirstBlock) {
   
   // Look up the cached info for Pointer.
-  ValueIsLoadPair CacheKey(Pointer, isLoad);
+  ValueIsLoadPair CacheKey(Pointer.getAddr(), isLoad);
   
   std::pair<BBSkipFirstBlockPair, NonLocalDepInfo> *CacheInfo =
     &NonLocalPointerDeps[CacheKey];
@@ -887,8 +748,9 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
     if (!Visited.empty()) {
       for (NonLocalDepInfo::iterator I = Cache->begin(), E = Cache->end();
            I != E; ++I) {
-        DenseMap<BasicBlock*, Value*>::iterator VI = Visited.find(I->first);
-        if (VI == Visited.end() || VI->second == Pointer) continue;
+        DenseMap<BasicBlock*, Value*>::iterator VI = Visited.find(I->getBB());
+        if (VI == Visited.end() || VI->second == Pointer.getAddr())
+          continue;
         
         // We have a pointer mismatch in a block.  Just return clobber, saying
         // that something was clobbered in this result.  We could also do a
@@ -899,8 +761,8 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
     
     for (NonLocalDepInfo::iterator I = Cache->begin(), E = Cache->end();
          I != E; ++I) {
-      Visited.insert(std::make_pair(I->first, Pointer));
-      if (!I->second.isNonLocal())
+      Visited.insert(std::make_pair(I->getBB(), Pointer.getAddr()));
+      if (!I->getResult().isNonLocal())
         Result.push_back(*I);
     }
     ++NumCacheCompleteNonLocalPtr;
@@ -939,30 +801,27 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
       // Get the dependency info for Pointer in BB.  If we have cached
       // information, we will use it, otherwise we compute it.
       DEBUG(AssertSorted(*Cache, NumSortedEntries));
-      MemDepResult Dep = GetNonLocalInfoForBlock(Pointer, PointeeSize, isLoad,
-                                                 BB, Cache, NumSortedEntries);
+      MemDepResult Dep = GetNonLocalInfoForBlock(Pointer.getAddr(), PointeeSize,
+                                                 isLoad, BB, Cache,
+                                                 NumSortedEntries);
       
       // If we got a Def or Clobber, add this to the list of results.
       if (!Dep.isNonLocal()) {
-        Result.push_back(NonLocalDepEntry(BB, Dep));
+        Result.push_back(NonLocalDepEntry(BB, Dep, Pointer.getAddr()));
         continue;
       }
     }
     
     // If 'Pointer' is an instruction defined in this block, then we need to do
     // phi translation to change it into a value live in the predecessor block.
-    // If phi translation fails, then we can't continue dependence analysis.
-    Instruction *PtrInst = dyn_cast<Instruction>(Pointer);
-    bool NeedsPHITranslation = PtrInst && PtrInst->getParent() == BB;
-    
-    // If no PHI translation is needed, just add all the predecessors of this
-    // block to scan them as well.
-    if (!NeedsPHITranslation) {
+    // If not, we just add the predecessors to the worklist and scan them with
+    // the same Pointer.
+    if (!Pointer.NeedsPHITranslationFromBlock(BB)) {
       SkipFirstBlock = false;
       for (BasicBlock **PI = PredCache->GetPreds(BB); *PI; ++PI) {
         // Verify that we haven't looked at this block yet.
         std::pair<DenseMap<BasicBlock*,Value*>::iterator, bool>
-          InsertRes = Visited.insert(std::make_pair(*PI, Pointer));
+          InsertRes = Visited.insert(std::make_pair(*PI, Pointer.getAddr()));
         if (InsertRes.second) {
           // First time we've looked at *PI.
           Worklist.push_back(*PI);
@@ -972,16 +831,17 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
         // If we have seen this block before, but it was with a different
         // pointer then we have a phi translation failure and we have to treat
         // this as a clobber.
-        if (InsertRes.first->second != Pointer)
+        if (InsertRes.first->second != Pointer.getAddr())
           goto PredTranslationFailure;
       }
       continue;
     }
     
-    // If we do need to do phi translation, then there are a bunch of different
-    // cases, because we have to find a Value* live in the predecessor block. We
-    // know that PtrInst is defined in this block at least.
-
+    // We do need to do phi translation.  If we know ahead of time that we
+    // can't phi translate this value, don't even try.
+    if (!Pointer.IsPotentiallyPHITranslatable())
+      goto PredTranslationFailure;
+    
     // We may have added values to the cache list before this PHI translation.
     // If so, we haven't done anything to ensure that the cache remains sorted.
     // Sort it now (if needed) so that recursive invocations of
@@ -991,25 +851,17 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
       SortNonLocalDepInfoCache(*Cache, NumSortedEntries);
       NumSortedEntries = Cache->size();
     }
-    
-    // If this is a computation derived from a PHI node, use the suitably
-    // translated incoming values for each pred as the phi translated version.
-    if (!isPHITranslatable(PtrInst))
-      goto PredTranslationFailure;
-
     Cache = 0;
-      
+    
     for (BasicBlock **PI = PredCache->GetPreds(BB); *PI; ++PI) {
       BasicBlock *Pred = *PI;
-      Value *PredPtr = PHITranslatePointer(PtrInst, BB, Pred, TD);
       
-      // If PHI translation fails, bail out.
-      if (PredPtr == 0) {
-        // FIXME: Instead of modelling this as a phi trans failure, we should
-        // model this as a clobber in the one predecessor.  This will allow
-        // us to PRE values that are only available in some preds but not all.
-        goto PredTranslationFailure;
-      }
+      // Get the PHI translated pointer in this predecessor.  This can fail
+      // if the value is not translatable, in which case getAddr() returns null.
+      PHITransAddr PredPointer(Pointer);
+      PredPointer.PHITranslateValue(BB, Pred);
+
+      Value *PredPtrVal = PredPointer.getAddr();
       
       // Check to see if we have already visited this pred block with another
       // pointer.  If so, we can't do this lookup.  This failure can occur
@@ -1017,12 +869,12 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
       // the successor translates to a pointer value different than the
       // pointer the block was first analyzed with.
       std::pair<DenseMap<BasicBlock*,Value*>::iterator, bool>
-        InsertRes = Visited.insert(std::make_pair(Pred, PredPtr));
+        InsertRes = Visited.insert(std::make_pair(Pred, PredPtrVal));
 
       if (!InsertRes.second) {
         // If the predecessor was visited with PredPtr, then we already did
         // the analysis and can ignore it.
-        if (InsertRes.first->second == PredPtr)
+        if (InsertRes.first->second == PredPtrVal)
           continue;
         
         // Otherwise, the block was previously analyzed with a different
@@ -1030,6 +882,50 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
         // treat this as a phi translation failure.
         goto PredTranslationFailure;
       }
+      
+      // If PHI translation was unable to find an available pointer in this
+      // predecessor, then we have to assume that the pointer is clobbered in
+      // that predecessor.  We can still do PRE of the load, which would insert
+      // a computation of the pointer in this predecessor.
+      if (PredPtrVal == 0) {
+        // Add the entry to the Result list.
+        NonLocalDepEntry Entry(Pred,
+                               MemDepResult::getClobber(Pred->getTerminator()),
+                               PredPtrVal);
+        Result.push_back(Entry);
+
+        // Add it to the cache for this CacheKey so that subsequent queries get
+        // this result.
+        Cache = &NonLocalPointerDeps[CacheKey].second;
+        MemoryDependenceAnalysis::NonLocalDepInfo::iterator It =
+          std::upper_bound(Cache->begin(), Cache->end(), Entry);
+        
+        if (It != Cache->begin() && (It-1)->getBB() == Pred)
+          --It;
+
+        if (It == Cache->end() || It->getBB() != Pred) {
+          Cache->insert(It, Entry);
+          // Add it to the reverse map.
+          ReverseNonLocalPtrDeps[Pred->getTerminator()].insert(CacheKey);
+        } else if (!It->getResult().isDirty()) {
+          // noop
+        } else if (It->getResult().getInst() == Pred->getTerminator()) {
+          // Same instruction, clear the dirty marker.
+          It->setResult(Entry.getResult(), PredPtrVal);
+        } else if (It->getResult().getInst() == 0) {
+          // Dirty, with no instruction, just add this.
+          It->setResult(Entry.getResult(), PredPtrVal);
+          ReverseNonLocalPtrDeps[Pred->getTerminator()].insert(CacheKey);
+        } else {
+          // Otherwise, dirty with a different instruction.
+          RemoveFromReverseMap(ReverseNonLocalPtrDeps,
+                               It->getResult().getInst(), CacheKey);
+          It->setResult(Entry.getResult(),PredPtrVal);
+          ReverseNonLocalPtrDeps[Pred->getTerminator()].insert(CacheKey);
+        }
+        Cache = 0;
+        continue;
+      }
 
       // FIXME: it is entirely possible that PHI translating will end up with
       // the same value.  Consider PHI translating something like:
@@ -1038,7 +934,7 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
       
       // If we have a problem phi translating, fall through to the code below
       // to handle the failure condition.
-      if (getNonLocalPointerDepFromBB(PredPtr, PointeeSize, isLoad, Pred,
+      if (getNonLocalPointerDepFromBB(PredPointer, PointeeSize, isLoad, Pred,
                                       Result, Visited))
         goto PredTranslationFailure;
     }
@@ -1082,12 +978,12 @@ getNonLocalPointerDepFromBB(Value *Pointer, uint64_t PointeeSize,
     
     for (NonLocalDepInfo::reverse_iterator I = Cache->rbegin(); ; ++I) {
       assert(I != Cache->rend() && "Didn't find current block??");
-      if (I->first != BB)
+      if (I->getBB() != BB)
         continue;
       
-      assert(I->second.isNonLocal() &&
+      assert(I->getResult().isNonLocal() &&
              "Should only be here with transparent block");
-      I->second = MemDepResult::getClobber(BB->begin());
+      I->setResult(MemDepResult::getClobber(BB->begin()), Pointer.getAddr());
       ReverseNonLocalPtrDeps[BB->begin()].insert(CacheKey);
       Result.push_back(*I);
       break;
@@ -1113,9 +1009,9 @@ RemoveCachedNonLocalPointerDependencies(ValueIsLoadPair P) {
   NonLocalDepInfo &PInfo = It->second.second;
   
   for (unsigned i = 0, e = PInfo.size(); i != e; ++i) {
-    Instruction *Target = PInfo[i].second.getInst();
+    Instruction *Target = PInfo[i].getResult().getInst();
     if (Target == 0) continue;  // Ignore non-local dep results.
-    assert(Target->getParent() == PInfo[i].first);
+    assert(Target->getParent() == PInfo[i].getBB());
     
     // Eliminating the dirty entry from 'Cache', so update the reverse info.
     RemoveFromReverseMap(ReverseNonLocalPtrDeps, Target, P);
@@ -1152,7 +1048,7 @@ void MemoryDependenceAnalysis::removeInstruction(Instruction *RemInst) {
     NonLocalDepInfo &BlockMap = NLDI->second.first;
     for (NonLocalDepInfo::iterator DI = BlockMap.begin(), DE = BlockMap.end();
          DI != DE; ++DI)
-      if (Instruction *Inst = DI->second.getInst())
+      if (Instruction *Inst = DI->getResult().getInst())
         RemoveFromReverseMap(ReverseNonLocalDeps, Inst, RemInst);
     NonLocalDeps.erase(NLDI);
   }
@@ -1240,10 +1136,10 @@ void MemoryDependenceAnalysis::removeInstruction(Instruction *RemInst) {
       
       for (NonLocalDepInfo::iterator DI = INLD.first.begin(), 
            DE = INLD.first.end(); DI != DE; ++DI) {
-        if (DI->second.getInst() != RemInst) continue;
+        if (DI->getResult().getInst() != RemInst) continue;
         
         // Convert to a dirty entry for the subsequent instruction.
-        DI->second = NewDirtyVal;
+        DI->setResult(NewDirtyVal, DI->getAddress());
         
         if (Instruction *NextI = NewDirtyVal.getInst())
           ReverseDepsToAdd.push_back(std::make_pair(NextI, *I));
@@ -1282,10 +1178,10 @@ void MemoryDependenceAnalysis::removeInstruction(Instruction *RemInst) {
       // Update any entries for RemInst to use the instruction after it.
       for (NonLocalDepInfo::iterator DI = NLPDI.begin(), DE = NLPDI.end();
            DI != DE; ++DI) {
-        if (DI->second.getInst() != RemInst) continue;
+        if (DI->getResult().getInst() != RemInst) continue;
         
         // Convert to a dirty entry for the subsequent instruction.
-        DI->second = NewDirtyVal;
+        DI->setResult(NewDirtyVal, DI->getAddress());
         
         if (Instruction *NewDirtyInst = NewDirtyVal.getInst())
           ReversePtrDepsToAdd.push_back(std::make_pair(NewDirtyInst, P));
@@ -1326,7 +1222,7 @@ void MemoryDependenceAnalysis::verifyRemoved(Instruction *D) const {
     const NonLocalDepInfo &Val = I->second.second;
     for (NonLocalDepInfo::const_iterator II = Val.begin(), E = Val.end();
          II != E; ++II)
-      assert(II->second.getInst() != D && "Inst occurs as NLPD value");
+      assert(II->getResult().getInst() != D && "Inst occurs as NLPD value");
   }
   
   for (NonLocalDepMapType::const_iterator I = NonLocalDeps.begin(),
@@ -1335,7 +1231,7 @@ void MemoryDependenceAnalysis::verifyRemoved(Instruction *D) const {
     const PerInstNLInfo &INLD = I->second;
     for (NonLocalDepInfo::const_iterator II = INLD.first.begin(),
          EE = INLD.first.end(); II  != EE; ++II)
-      assert(II->second.getInst() != D && "Inst occurs in data structures");
+      assert(II->getResult().getInst() != D && "Inst occurs in data structures");
   }
   
   for (ReverseDepMapType::const_iterator I = ReverseLocalDeps.begin(),
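The caching hunks above insert a freshly built NonLocalDepEntry into the per-key cache with std::upper_bound so the vector stays sorted by basic block. A minimal sketch of that sorted-vector idiom, using hypothetical `Entry`/`insertOrUpdate` names rather than the LLVM types:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical stand-in for the sorted NonLocalDepInfo cache: entries are
// kept ordered by key, mirroring the upper_bound / step-back-one pattern
// used in the hunks above.
struct Entry {
  int Key;     // stands in for the BasicBlock* ordering key
  int Result;  // stands in for the MemDepResult payload
  bool operator<(const Entry &RHS) const { return Key < RHS.Key; }
};

// Insert a new entry in sorted position, or update an existing one in place.
void insertOrUpdate(std::vector<Entry> &Cache, Entry E) {
  std::vector<Entry>::iterator It =
      std::upper_bound(Cache.begin(), Cache.end(), E);
  if (It != Cache.begin() && (It - 1)->Key == E.Key)
    --It;                   // step back onto an existing entry for this key
  if (It == Cache.end() || It->Key != E.Key)
    Cache.insert(It, E);    // new key: insert while keeping the vector sorted
  else
    It->Result = E.Result;  // existing key: update the payload in place
}
```

The real code additionally distinguishes dirty entries and maintains the reverse map, but the lookup/insert shape is the same.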
diff --git a/libclamav/c++/llvm/lib/Analysis/PHITransAddr.cpp b/libclamav/c++/llvm/lib/Analysis/PHITransAddr.cpp
new file mode 100644
index 0000000..07e2919
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Analysis/PHITransAddr.cpp
@@ -0,0 +1,432 @@
+//===- PHITransAddr.cpp - PHI Translation for Addresses -------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the PHITransAddr class.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/Analysis/PHITransAddr.h"
+#include "llvm/Analysis/Dominators.h"
+#include "llvm/Analysis/InstructionSimplify.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+static bool CanPHITrans(Instruction *Inst) {
+  if (isa<PHINode>(Inst) ||
+      isa<BitCastInst>(Inst) ||
+      isa<GetElementPtrInst>(Inst))
+    return true;
+  
+  if (Inst->getOpcode() == Instruction::Add &&
+      isa<ConstantInt>(Inst->getOperand(1)))
+    return true;
+  
+  //   cerr << "MEMDEP: Could not PHI translate: " << *Pointer;
+  //   if (isa<BitCastInst>(PtrInst) || isa<GetElementPtrInst>(PtrInst))
+  //     cerr << "OP:\t\t\t\t" << *PtrInst->getOperand(0);
+  return false;
+}
+
+void PHITransAddr::dump() const {
+  if (Addr == 0) {
+    errs() << "PHITransAddr: null\n";
+    return;
+  }
+  errs() << "PHITransAddr: " << *Addr << "\n";
+  for (unsigned i = 0, e = InstInputs.size(); i != e; ++i)
+    errs() << "  Input #" << i << " is " << *InstInputs[i] << "\n";
+}
+
+
+static bool VerifySubExpr(Value *Expr,
+                          SmallVectorImpl<Instruction*> &InstInputs) {
+  // If this is a non-instruction value, there is nothing to do.
+  Instruction *I = dyn_cast<Instruction>(Expr);
+  if (I == 0) return true;
+  
+  // If it's an instruction, either it is in the InstInputs list or its
+  // operands recursively are.
+  SmallVectorImpl<Instruction*>::iterator Entry =
+    std::find(InstInputs.begin(), InstInputs.end(), I);
+  if (Entry != InstInputs.end()) {
+    InstInputs.erase(Entry);
+    return true;
+  }
+  
+  // If it isn't in the InstInputs list it is a subexpr incorporated into the
+  // address.  Sanity check that it is phi translatable.
+  if (!CanPHITrans(I)) {
+    errs() << "Non phi translatable instruction found in PHITransAddr, either "
+              "something is missing from InstInputs or CanPHITrans is wrong:\n";
+    errs() << *I << '\n';
+    return false;
+  }
+  
+  // Validate the operands of the instruction.
+  for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i)
+    if (!VerifySubExpr(I->getOperand(i), InstInputs))
+      return false;
+
+  return true;
+}
+
+/// Verify - Check internal consistency of this data structure.  If the
+/// structure is valid, it returns true.  If invalid, it prints errors and
+/// returns false.
+bool PHITransAddr::Verify() const {
+  if (Addr == 0) return true;
+  
+  SmallVector<Instruction*, 8> Tmp(InstInputs.begin(), InstInputs.end());  
+  
+  if (!VerifySubExpr(Addr, Tmp))
+    return false;
+  
+  if (!Tmp.empty()) {
+    errs() << "PHITransAddr inconsistent, contains extra instructions:\n";
+    for (unsigned i = 0, e = Tmp.size(); i != e; ++i)
+      errs() << "  ExtraInst #" << i << " is " << *Tmp[i] << "\n";
+    return false;
+  }
+  
+  // a-ok.
+  return true;
+}
+
+
+/// IsPotentiallyPHITranslatable - If this needs PHI translation, return true
+/// if we have some hope of doing it.  This should be used as a filter to
+/// avoid calling PHITranslateValue in hopeless situations.
+bool PHITransAddr::IsPotentiallyPHITranslatable() const {
+  // If the input value is not an instruction, or if it is not defined in CurBB,
+  // then we don't need to phi translate it.
+  Instruction *Inst = dyn_cast<Instruction>(Addr);
+  return Inst == 0 || CanPHITrans(Inst);
+}
+
+
+static void RemoveInstInputs(Value *V, 
+                             SmallVectorImpl<Instruction*> &InstInputs) {
+  Instruction *I = dyn_cast<Instruction>(V);
+  if (I == 0) return;
+  
+  // If the instruction is in the InstInputs list, remove it.
+  SmallVectorImpl<Instruction*>::iterator Entry =
+    std::find(InstInputs.begin(), InstInputs.end(), I);
+  if (Entry != InstInputs.end()) {
+    InstInputs.erase(Entry);
+    return;
+  }
+  
+  assert(!isa<PHINode>(I) && "Error, removing something that isn't an input");
+  
+  // Otherwise, it must have instruction inputs itself.  Zap them recursively.
+  for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i) {
+    if (Instruction *Op = dyn_cast<Instruction>(I->getOperand(i)))
+      RemoveInstInputs(Op, InstInputs);
+  }
+}
+
+Value *PHITransAddr::PHITranslateSubExpr(Value *V, BasicBlock *CurBB,
+                                         BasicBlock *PredBB) {
+  // If this is a non-instruction value, it can't require PHI translation.
+  Instruction *Inst = dyn_cast<Instruction>(V);
+  if (Inst == 0) return V;
+  
+  // Determine whether 'Inst' is an input to our PHI translatable expression.
+  bool isInput = std::count(InstInputs.begin(), InstInputs.end(), Inst);
+
+  // Handle input instructions if needed.
+  if (isInput) {
+    if (Inst->getParent() != CurBB) {
+      // If it is an input defined in a different block, then it remains an
+      // input.
+      return Inst;
+    }
+
+    // If 'Inst' is defined in this block and is an input that needs to be phi
+    // translated, we need to incorporate the value into the expression or fail.
+
+    // In either case, the instruction itself isn't an input any longer.
+    InstInputs.erase(std::find(InstInputs.begin(), InstInputs.end(), Inst));
+    
+    // If this is a PHI, go ahead and translate it.
+    if (PHINode *PN = dyn_cast<PHINode>(Inst))
+      return AddAsInput(PN->getIncomingValueForBlock(PredBB));
+    
+    // If this is a non-phi value, and it is analyzable, we can incorporate it
+    // into the expression by making all instruction operands be inputs.
+    if (!CanPHITrans(Inst))
+      return 0;
+   
+    // All instruction operands are now inputs (and of course, they may also be
+    // defined in this block, so they may need to be phi translated themselves).
+    for (unsigned i = 0, e = Inst->getNumOperands(); i != e; ++i)
+      if (Instruction *Op = dyn_cast<Instruction>(Inst->getOperand(i)))
+        InstInputs.push_back(Op);
+  }
+
+  // Ok, it must be an intermediate result (either because it started that way
+  // or because we just incorporated it into the expression).  See if its
+  // operands need to be phi translated, and if so, reconstruct it.
+  
+  if (BitCastInst *BC = dyn_cast<BitCastInst>(Inst)) {
+    Value *PHIIn = PHITranslateSubExpr(BC->getOperand(0), CurBB, PredBB);
+    if (PHIIn == 0) return 0;
+    if (PHIIn == BC->getOperand(0))
+      return BC;
+    
+    // Find an available version of this cast.
+    
+    // Constants are trivial to find.
+    if (Constant *C = dyn_cast<Constant>(PHIIn))
+      return AddAsInput(ConstantExpr::getBitCast(C, BC->getType()));
+    
+    // Otherwise we have to see if a bitcasted version of the incoming pointer
+    // is available.  If so, we can use it, otherwise we have to fail.
+    for (Value::use_iterator UI = PHIIn->use_begin(), E = PHIIn->use_end();
+         UI != E; ++UI) {
+      if (BitCastInst *BCI = dyn_cast<BitCastInst>(*UI))
+        if (BCI->getType() == BC->getType())
+          return BCI;
+    }
+    return 0;
+  }
+  
+  // Handle getelementptr with at least one PHI translatable operand.
+  if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
+    SmallVector<Value*, 8> GEPOps;
+    bool AnyChanged = false;
+    for (unsigned i = 0, e = GEP->getNumOperands(); i != e; ++i) {
+      Value *GEPOp = PHITranslateSubExpr(GEP->getOperand(i), CurBB, PredBB);
+      if (GEPOp == 0) return 0;
+      
+      AnyChanged |= GEPOp != GEP->getOperand(i);
+      GEPOps.push_back(GEPOp);
+    }
+    
+    if (!AnyChanged)
+      return GEP;
+    
+    // Simplify the GEP to handle 'gep x, 0' -> x etc.
+    if (Value *V = SimplifyGEPInst(&GEPOps[0], GEPOps.size(), TD)) {
+      for (unsigned i = 0, e = GEPOps.size(); i != e; ++i)
+        RemoveInstInputs(GEPOps[i], InstInputs);
+      
+      return AddAsInput(V);
+    }
+    
+    // Scan to see if we have this GEP available.
+    Value *APHIOp = GEPOps[0];
+    for (Value::use_iterator UI = APHIOp->use_begin(), E = APHIOp->use_end();
+         UI != E; ++UI) {
+      if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(*UI))
+        if (GEPI->getType() == GEP->getType() &&
+            GEPI->getNumOperands() == GEPOps.size() &&
+            GEPI->getParent()->getParent() == CurBB->getParent()) {
+          bool Mismatch = false;
+          for (unsigned i = 0, e = GEPOps.size(); i != e; ++i)
+            if (GEPI->getOperand(i) != GEPOps[i]) {
+              Mismatch = true;
+              break;
+            }
+          if (!Mismatch)
+            return GEPI;
+        }
+    }
+    return 0;
+  }
+  
+  // Handle add with a constant RHS.
+  if (Inst->getOpcode() == Instruction::Add &&
+      isa<ConstantInt>(Inst->getOperand(1))) {
+    // PHI translate the LHS.
+    Constant *RHS = cast<ConstantInt>(Inst->getOperand(1));
+    bool isNSW = cast<BinaryOperator>(Inst)->hasNoSignedWrap();
+    bool isNUW = cast<BinaryOperator>(Inst)->hasNoUnsignedWrap();
+    
+    Value *LHS = PHITranslateSubExpr(Inst->getOperand(0), CurBB, PredBB);
+    if (LHS == 0) return 0;
+    
+    // If the PHI translated LHS is an add of a constant, fold the immediates.
+    if (BinaryOperator *BOp = dyn_cast<BinaryOperator>(LHS))
+      if (BOp->getOpcode() == Instruction::Add)
+        if (ConstantInt *CI = dyn_cast<ConstantInt>(BOp->getOperand(1))) {
+          LHS = BOp->getOperand(0);
+          RHS = ConstantExpr::getAdd(RHS, CI);
+          isNSW = isNUW = false;
+          
+          // If the old 'LHS' was an input, add the new 'LHS' as an input.
+          if (std::count(InstInputs.begin(), InstInputs.end(), BOp)) {
+            RemoveInstInputs(BOp, InstInputs);
+            AddAsInput(LHS);
+          }
+        }
+    
+    // See if the add simplifies away.
+    if (Value *Res = SimplifyAddInst(LHS, RHS, isNSW, isNUW, TD)) {
+      // If we simplified the operands, the LHS is no longer an input, but Res
+      // is.
+      RemoveInstInputs(LHS, InstInputs);
+      return AddAsInput(Res);
+    }
+
+    // If we didn't modify the add, just return it.
+    if (LHS == Inst->getOperand(0) && RHS == Inst->getOperand(1))
+      return Inst;
+    
+    // Otherwise, see if we have this add available somewhere.
+    for (Value::use_iterator UI = LHS->use_begin(), E = LHS->use_end();
+         UI != E; ++UI) {
+      if (BinaryOperator *BO = dyn_cast<BinaryOperator>(*UI))
+        if (BO->getOpcode() == Instruction::Add &&
+            BO->getOperand(0) == LHS && BO->getOperand(1) == RHS &&
+            BO->getParent()->getParent() == CurBB->getParent())
+          return BO;
+    }
+    
+    return 0;
+  }
+  
+  // Otherwise, we failed.
+  return 0;
+}
+
+
+/// PHITranslateValue - PHI translate the current address up the CFG from
+/// CurBB to Pred, updating our state to reflect any needed changes.  This
+/// returns true on failure and sets Addr to null.
+bool PHITransAddr::PHITranslateValue(BasicBlock *CurBB, BasicBlock *PredBB) {
+  assert(Verify() && "Invalid PHITransAddr!");
+  Addr = PHITranslateSubExpr(Addr, CurBB, PredBB);
+  assert(Verify() && "Invalid PHITransAddr!");
+  return Addr == 0;
+}
+
+/// GetAvailablePHITranslatedSubExpr - Return the value computed by
+/// PHITranslateSubExpr if it dominates PredBB, otherwise return null.
+Value *PHITransAddr::
+GetAvailablePHITranslatedSubExpr(Value *V, BasicBlock *CurBB,BasicBlock *PredBB,
+                                 const DominatorTree &DT) const {
+  PHITransAddr Tmp(V, TD);
+  Tmp.PHITranslateValue(CurBB, PredBB);
+  
+  // See if PHI translation succeeds.
+  V = Tmp.getAddr();
+  
+  // Make sure the value is live in the predecessor.
+  if (Instruction *Inst = dyn_cast_or_null<Instruction>(V))
+    if (!DT.dominates(Inst->getParent(), PredBB))
+      return 0;
+  return V;
+}
+
+
+/// PHITranslateWithInsertion - PHI translate this value into the specified
+/// predecessor block, inserting a computation of the value if it is
+/// unavailable.
+///
+/// All newly created instructions are added to the NewInsts list.  This
+/// returns null on failure.
+///
+Value *PHITransAddr::
+PHITranslateWithInsertion(BasicBlock *CurBB, BasicBlock *PredBB,
+                          const DominatorTree &DT,
+                          SmallVectorImpl<Instruction*> &NewInsts) {
+  unsigned NISize = NewInsts.size();
+  
+  // Attempt to PHI translate with insertion.
+  Addr = InsertPHITranslatedSubExpr(Addr, CurBB, PredBB, DT, NewInsts);
+  
+  // If successful, return the new value.
+  if (Addr) return Addr;
+  
+  // If not, destroy any intermediate instructions inserted.
+  while (NewInsts.size() != NISize)
+    NewInsts.pop_back_val()->eraseFromParent();
+  return 0;
+}
+
+
+/// InsertPHITranslatedSubExpr - Insert a computation of the PHI translated
+/// version of 'V' for the edge PredBB->CurBB into the end of the PredBB
+/// block.  All newly created instructions are added to the NewInsts list.
+/// This returns null on failure.
+///
+Value *PHITransAddr::
+InsertPHITranslatedSubExpr(Value *InVal, BasicBlock *CurBB,
+                           BasicBlock *PredBB, const DominatorTree &DT,
+                           SmallVectorImpl<Instruction*> &NewInsts) {
+  // See if we have a version of this value already available and dominating
+  // PredBB.  If so, there is no need to insert a new instance of it.
+  if (Value *Res = GetAvailablePHITranslatedSubExpr(InVal, CurBB, PredBB, DT))
+    return Res;
+
+  // If we don't have an available version of this value, it must be an
+  // instruction.
+  Instruction *Inst = cast<Instruction>(InVal);
+  
+  // Handle bitcast of PHI translatable value.
+  if (BitCastInst *BC = dyn_cast<BitCastInst>(Inst)) {
+    Value *OpVal = InsertPHITranslatedSubExpr(BC->getOperand(0),
+                                              CurBB, PredBB, DT, NewInsts);
+    if (OpVal == 0) return 0;
+    
+    // Otherwise insert a bitcast at the end of PredBB.
+    BitCastInst *New = new BitCastInst(OpVal, InVal->getType(),
+                                       InVal->getName()+".phi.trans.insert",
+                                       PredBB->getTerminator());
+    NewInsts.push_back(New);
+    return New;
+  }
+  
+  // Handle getelementptr with at least one PHI operand.
+  if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
+    SmallVector<Value*, 8> GEPOps;
+    BasicBlock *CurBB = GEP->getParent();
+    for (unsigned i = 0, e = GEP->getNumOperands(); i != e; ++i) {
+      Value *OpVal = InsertPHITranslatedSubExpr(GEP->getOperand(i),
+                                                CurBB, PredBB, DT, NewInsts);
+      if (OpVal == 0) return 0;
+      GEPOps.push_back(OpVal);
+    }
+    
+    GetElementPtrInst *Result = 
+    GetElementPtrInst::Create(GEPOps[0], GEPOps.begin()+1, GEPOps.end(),
+                              InVal->getName()+".phi.trans.insert",
+                              PredBB->getTerminator());
+    Result->setIsInBounds(GEP->isInBounds());
+    NewInsts.push_back(Result);
+    return Result;
+  }
+  
+#if 0
+  // FIXME: This code works, but it is unclear that we actually want to insert
+  // a big chain of computation in order to make a value available in a block.
+  // This needs to be evaluated carefully to consider its cost trade offs.
+  
+  // Handle add with a constant RHS.
+  if (Inst->getOpcode() == Instruction::Add &&
+      isa<ConstantInt>(Inst->getOperand(1))) {
+    // PHI translate the LHS.
+    Value *OpVal = InsertPHITranslatedSubExpr(Inst->getOperand(0),
+                                              CurBB, PredBB, DT, NewInsts);
+    if (OpVal == 0) return 0;
+    
+    BinaryOperator *Res = BinaryOperator::CreateAdd(OpVal, Inst->getOperand(1),
+                                           InVal->getName()+".phi.trans.insert",
+                                                    PredBB->getTerminator());
+    Res->setHasNoSignedWrap(cast<BinaryOperator>(Inst)->hasNoSignedWrap());
+    Res->setHasNoUnsignedWrap(cast<BinaryOperator>(Inst)->hasNoUnsignedWrap());
+    NewInsts.push_back(Res);
+    return Res;
+  }
+#endif
+  
+  return 0;
+}
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp
index e767891..8148429 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileEstimatorPass.cpp
@@ -35,6 +35,7 @@ namespace {
     LoopInfo *LI;
     std::set<BasicBlock*>  BBToVisit;
     std::map<Loop*,double> LoopExitWeights;
+    std::map<Edge,double>  MinimalWeight;
   public:
     static char ID; // Class identification, replacement for typeinfo
     explicit ProfileEstimatorPass(const double execcount = 0)
@@ -91,7 +92,7 @@ static void inline printEdgeError(ProfileInfo::Edge e, const char *M) {
 
 void inline ProfileEstimatorPass::printEdgeWeight(Edge E) {
   DEBUG(errs() << "-- Weight of Edge " << E << ":"
-               << format("%g", getEdgeWeight(E)) << "\n");
+               << format("%20.20g", getEdgeWeight(E)) << "\n");
 }
 
 // recurseBasicBlock() - This calculates the ProfileInfo estimation for a
@@ -174,6 +175,12 @@ void ProfileEstimatorPass::recurseBasicBlock(BasicBlock *BB) {
         double w = getEdgeWeight(*ei);
         if (w == MissingValue) {
           Edges.push_back(*ei);
+          // Check if there is a necessary minimal weight; if so, subtract it
+          // from the weight.
+          if (MinimalWeight.find(*ei) != MinimalWeight.end()) {
+            incoming -= MinimalWeight[*ei];
+            DEBUG(errs() << "Reserving " << format("%.20g",MinimalWeight[*ei]) << " at " << (*ei) << "\n");
+          }
         } else {
           incoming -= w;
         }
@@ -191,11 +198,43 @@ void ProfileEstimatorPass::recurseBasicBlock(BasicBlock *BB) {
         printEdgeWeight(edge);
       }
     }
-    // Distribute remaining weight onto the exit edges.
+
+    // Distribute the remaining weight to the exiting edges. To keep fractions
+    // from building up and provoking precision problems, the weight to be
+    // distributed is split and rounded; the last edge gets a somewhat bigger
+    // value, but this is close enough for an estimation.
+    double fraction = floor(incoming/Edges.size());
     for (SmallVector<Edge, 8>::iterator ei = Edges.begin(), ee = Edges.end();
          ei != ee; ++ei) {
-      EdgeInformation[BB->getParent()][*ei] += incoming/Edges.size();
+      double w = 0;
+      if (ei != (ee-1)) {
+        w = fraction;
+        incoming -= fraction;
+      } else {
+        w = incoming;
+      }
+      EdgeInformation[BB->getParent()][*ei] += w;
+      // Read necessary minimal weight.
+      if (MinimalWeight.find(*ei) != MinimalWeight.end()) {
+        EdgeInformation[BB->getParent()][*ei] += MinimalWeight[*ei];
+        DEBUG(errs() << "Additionally " << format("%.20g",MinimalWeight[*ei]) << " at " << (*ei) << "\n");
+      }
       printEdgeWeight(*ei);
+      
+      // Add a minimal weight to the paths to all exit edges; this ensures
+      // that enough flow reaches these edges.
+      Path p;
+      const BasicBlock *Dest = GetPath(BB, (*ei).first, p, GetPathToDest);
+      while (Dest != BB) {
+        const BasicBlock *Parent = p.find(Dest)->second;
+        Edge e = getEdge(Parent, Dest);
+        if (MinimalWeight.find(e) == MinimalWeight.end()) {
+          MinimalWeight[e] = 0;
+        }
+        MinimalWeight[e] += w;
+        DEBUG(errs() << "Minimal Weight for " << e << ": " << format("%.20g",MinimalWeight[e]) << "\n");
+        Dest = Parent;
+      }
     }
     // Increase flow into the loop.
     BBWeight *= (ExecCount+1);
@@ -203,7 +242,7 @@ void ProfileEstimatorPass::recurseBasicBlock(BasicBlock *BB) {
 
   BlockInformation[BB->getParent()][BB] = BBWeight;
   // Up until now we considered only the loop exiting edges, now we have a
-  // definite block weight and must ditribute this onto the outgoing edges.
+  // definite block weight and must distribute this onto the outgoing edges.
  // Since there may already be flow attached to some of the edges, read this
  // flow first and remember the edges that still have no flow attached.
   Edges.clear();
@@ -225,15 +264,32 @@ void ProfileEstimatorPass::recurseBasicBlock(BasicBlock *BB) {
         BBWeight -= getEdgeWeight(edge);
       } else {
         Edges.push_back(edge);
+        // If a minimal weight is necessary, reserve it by subtracting it from
+        // the block weight; it is re-added later on.
+        if (MinimalWeight.find(edge) != MinimalWeight.end()) {
+          BBWeight -= MinimalWeight[edge];
+          DEBUG(errs() << "Reserving " << format("%.20g",MinimalWeight[edge]) << " at " << edge << "\n");
+        }
       }
     }
   }
 
+  double fraction = floor(BBWeight/Edges.size());
   // Finally we know what flow is still not leaving the block, distribute this
   // flow onto the empty edges.
   for (SmallVector<Edge, 8>::iterator ei = Edges.begin(), ee = Edges.end();
        ei != ee; ++ei) {
-    EdgeInformation[BB->getParent()][*ei] += BBWeight/Edges.size();
+    if (ei != (ee-1)) {
+      EdgeInformation[BB->getParent()][*ei] += fraction;
+      BBWeight -= fraction;
+    } else {
+      EdgeInformation[BB->getParent()][*ei] += BBWeight;
+    }
+    // Re-add the minimal necessary weight.
+    if (MinimalWeight.find(*ei) != MinimalWeight.end()) {
+      EdgeInformation[BB->getParent()][*ei] += MinimalWeight[*ei];
+      DEBUG(errs() << "Additionally " << format("%.20g",MinimalWeight[*ei]) << " at " << (*ei) << "\n");
+    }
     printEdgeWeight(*ei);
   }
 
@@ -260,20 +316,24 @@ bool ProfileEstimatorPass::runOnFunction(Function &F) {
   for (Function::iterator bi = F.begin(), be = F.end(); bi != be; ++bi)
     BBToVisit.insert(bi);
 
+  // Clear Minimal Edges.
+  MinimalWeight.clear();
+
   DEBUG(errs() << "Working on function " << F.getNameStr() << "\n");
 
   // Since the entry block is the first one and has no predecessors, the edge
   // (0,entry) is inserted with the starting weight of 1.
   BasicBlock *entry = &F.getEntryBlock();
-  BlockInformation[&F][entry] = 1;
+  BlockInformation[&F][entry] = pow(2.0, 32.0);
   Edge edge = getEdge(0,entry);
-  EdgeInformation[&F][edge] = 1;
+  EdgeInformation[&F][edge] = BlockInformation[&F][entry];
   printEdgeWeight(edge);
 
  // Since recurseBasicBlock() may return with a block which was not fully
-  // estimated, use recurseBasicBlock() until everything is calculated. 
+  // estimated, use recurseBasicBlock() until everything is calculated.
+  bool cleanup = false;
   recurseBasicBlock(entry);
-  while (BBToVisit.size() > 0) {
+  while (BBToVisit.size() > 0 && !cleanup) {
     // Remember number of open blocks, this is later used to check if progress
     // was made.
     unsigned size = BBToVisit.size();
@@ -287,21 +347,65 @@ bool ProfileEstimatorPass::runOnFunction(Function &F) {
       if (BBToVisit.size() < size) break;
     }
 
-    // If there was not a single block resovled, make some assumptions.
+    // If there was not a single block resolved, make some assumptions.
     if (BBToVisit.size() == size) {
-      BasicBlock *BB = *(BBToVisit.begin());
-      // Since this BB was not calculated because of missing incoming edges,
-      // set these edges to zero.
-      for (pred_iterator bbi = pred_begin(BB), bbe = pred_end(BB);
-           bbi != bbe; ++bbi) {
-        Edge e = getEdge(*bbi,BB);
-        double w = getEdgeWeight(e);
-        if (w == MissingValue) {
-          EdgeInformation[&F][e] = 0;
-          DEBUG(errs() << "Assuming edge weight: ");
-          printEdgeWeight(e);
+      bool found = false;
+      for (std::set<BasicBlock*>::iterator BBI = BBToVisit.begin(), BBE = BBToVisit.end(); 
+           (BBI != BBE) && (!found); ++BBI) {
+        BasicBlock *BB = *BBI;
+        // Try each predecessor to see if its edge weight can be assumed.
+        for (pred_iterator bbi = pred_begin(BB), bbe = pred_end(BB);
+             (bbi != bbe) && (!found); ++bbi) {
+          Edge e = getEdge(*bbi,BB);
+          double w = getEdgeWeight(e);
+          // Check that edge from predecessor is still free.
+          if (w == MissingValue) {
+            // Check if there is a cycle from this block to the predecessor.
+            Path P;
+            const BasicBlock *Dest = GetPath(BB, *bbi, P, GetPathToDest);
+            if (Dest != *bbi) {
+              // If there is no cycle, just set the edge weight to 0.
+              EdgeInformation[&F][e] = 0;
+              DEBUG(errs() << "Assuming edge weight: ");
+              printEdgeWeight(e);
+              found = true;
+            }
+          }
         }
       }
+      if (!found) {
+        cleanup = true;
+        DEBUG(errs() << "No assumption possible in Function " << F.getName() << ", setting all to zero\n");
+      }
+    }
+  }
+  // In case there was no safe way to assume edges, as a last measure set
+  // _everything_ to zero.
+  if (cleanup) {
+    FunctionInformation[&F] = 0;
+    BlockInformation[&F].clear();
+    EdgeInformation[&F].clear();
+    for (Function::const_iterator FI = F.begin(), FE = F.end(); FI != FE; ++FI) {
+      const BasicBlock *BB = &(*FI);
+      BlockInformation[&F][BB] = 0;
+      pred_const_iterator predi = pred_begin(BB), prede = pred_end(BB);
+      if (predi == prede) {
+        Edge e = getEdge(0,BB);
+        setEdgeWeight(e,0);
+      }
+      for (;predi != prede; ++predi) {
+        Edge e = getEdge(*predi,BB);
+        setEdgeWeight(e,0);
+      }
+      succ_const_iterator succi = succ_begin(BB), succe = succ_end(BB);
+      if (succi == succe) {
+        Edge e = getEdge(BB,0);
+        setEdgeWeight(e,0);
+      }
+      for (;succi != succe; ++succi) {
+        Edge e = getEdge(*succi,BB);
+        setEdgeWeight(e,0);
+      }
     }
   }
 
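The distribution scheme introduced in the ProfileEstimatorPass changes above hands each of the first N-1 edges floor(total/N) and gives the last edge whatever remains, so the shares always sum exactly to the original weight and fractional residue cannot accumulate. A small sketch of the arithmetic (`splitWeight` is a hypothetical helper, not part of the pass):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Split Total over NumEdges shares: the first NumEdges-1 shares are
// floor(Total/NumEdges); the last share absorbs the remainder, so the
// shares sum exactly to Total, mirroring the loop in the diff above.
std::vector<double> splitWeight(double Total, unsigned NumEdges) {
  std::vector<double> Shares;
  double Fraction = std::floor(Total / NumEdges);
  for (unsigned i = 0; i + 1 < NumEdges; ++i) {
    Shares.push_back(Fraction);
    Total -= Fraction;
  }
  Shares.push_back(Total);  // last edge takes the slightly larger remainder
  return Shares;
}
```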
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
index 7f24f5a..5a7f691 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
@@ -11,25 +11,51 @@
 // "no profile" implementation.
 //
 //===----------------------------------------------------------------------===//
-
+#define DEBUG_TYPE "profile-info"
 #include "llvm/Analysis/Passes.h"
 #include "llvm/Analysis/ProfileInfo.h"
+#include "llvm/CodeGen/MachineBasicBlock.h"
+#include "llvm/CodeGen/MachineFunction.h"
 #include "llvm/Pass.h"
 #include "llvm/Support/CFG.h"
-#include "llvm/Support/Debug.h"
-#include "llvm/Support/raw_ostream.h"
-#include "llvm/Support/Format.h"
+#include "llvm/ADT/SmallSet.h"
 #include <set>
+#include <queue>
+#include <limits>
 using namespace llvm;
 
 // Register the ProfileInfo interface, providing a nice name to refer to.
 static RegisterAnalysisGroup<ProfileInfo> Z("Profile Information");
+
+namespace llvm {
+
+template <>
+ProfileInfoT<MachineFunction, MachineBasicBlock>::ProfileInfoT() {}
+template <>
+ProfileInfoT<MachineFunction, MachineBasicBlock>::~ProfileInfoT() {}
+
+template <>
+ProfileInfoT<Function, BasicBlock>::ProfileInfoT() {
+  MachineProfile = 0;
+}
+template <>
+ProfileInfoT<Function, BasicBlock>::~ProfileInfoT() {
+  if (MachineProfile) delete MachineProfile;
+}
+
+template<>
 char ProfileInfo::ID = 0;
 
-ProfileInfo::~ProfileInfo() {}
+template<>
+char MachineProfileInfo::ID = 0;
 
+template<>
 const double ProfileInfo::MissingValue = -1;
 
+template<>
+const double MachineProfileInfo::MissingValue = -1;
+
+template<>
 double ProfileInfo::getExecutionCount(const BasicBlock *BB) {
   std::map<const Function*, BlockCounts>::iterator J =
     BlockInformation.find(BB->getParent());
@@ -39,35 +65,72 @@ double ProfileInfo::getExecutionCount(const BasicBlock *BB) {
       return I->second;
   }
 
+  double Count = MissingValue;
+
   pred_const_iterator PI = pred_begin(BB), PE = pred_end(BB);
 
   // Are there zero predecessors of this block?
   if (PI == PE) {
-    // If this is the entry block, look for the Null -> Entry edge.
-    if (BB == &BB->getParent()->getEntryBlock())
-      return getEdgeWeight(getEdge(0, BB));
-    else
-      return 0;   // Otherwise, this is a dead block.
+    Edge e = getEdge(0,BB);
+    Count = getEdgeWeight(e);
+  } else {
+    // Otherwise, if there are predecessors, the execution count of this block is
+    // the sum of the edge frequencies from the incoming edges.
+    std::set<const BasicBlock*> ProcessedPreds;
+    Count = 0;
+    for (; PI != PE; ++PI)
+      if (ProcessedPreds.insert(*PI).second) {
+        double w = getEdgeWeight(getEdge(*PI, BB));
+        if (w == MissingValue) {
+          Count = MissingValue;
+          break;
+        }
+        Count += w;
+      }
   }
 
-  // Otherwise, if there are predecessors, the execution count of this block is
-  // the sum of the edge frequencies from the incoming edges.
-  std::set<const BasicBlock*> ProcessedPreds;
-  double Count = 0;
-  for (; PI != PE; ++PI)
-    if (ProcessedPreds.insert(*PI).second) {
-      double w = getEdgeWeight(getEdge(*PI, BB));
-      if (w == MissingValue) {
-        Count = MissingValue;
-        break;
-      }
-      Count += w;
+  // If the predecessors did not suffice to determine the block weight, try successors.
+  if (Count == MissingValue) {
+
+    succ_const_iterator SI = succ_begin(BB), SE = succ_end(BB);
+
+    // Are there zero successors of this block?
+    if (SI == SE) {
+      Edge e = getEdge(BB,0);
+      Count = getEdgeWeight(e);
+    } else {
+      std::set<const BasicBlock*> ProcessedSuccs;
+      Count = 0;
+      for (; SI != SE; ++SI)
+        if (ProcessedSuccs.insert(*SI).second) {
+          double w = getEdgeWeight(getEdge(BB, *SI));
+          if (w == MissingValue) {
+            Count = MissingValue;
+            break;
+          }
+          Count += w;
+        }
     }
+  }
 
   if (Count != MissingValue) BlockInformation[BB->getParent()][BB] = Count;
   return Count;
 }
 
+template<>
+double MachineProfileInfo::getExecutionCount(const MachineBasicBlock *MBB) {
+  std::map<const MachineFunction*, BlockCounts>::iterator J =
+    BlockInformation.find(MBB->getParent());
+  if (J != BlockInformation.end()) {
+    BlockCounts::iterator I = J->second.find(MBB);
+    if (I != J->second.end())
+      return I->second;
+  }
+
+  return MissingValue;
+}
+
+template<>
 double ProfileInfo::getExecutionCount(const Function *F) {
   std::map<const Function*, double>::iterator J =
     FunctionInformation.find(F);
@@ -83,35 +146,204 @@ double ProfileInfo::getExecutionCount(const Function *F) {
   return Count;
 }
 
+template<>
+double MachineProfileInfo::getExecutionCount(const MachineFunction *MF) {
+  std::map<const MachineFunction*, double>::iterator J =
+    FunctionInformation.find(MF);
+  if (J != FunctionInformation.end())
+    return J->second;
+
+  double Count = getExecutionCount(&MF->front());
+  if (Count != MissingValue) FunctionInformation[MF] = Count;
+  return Count;
+}
+
+template<>
+void ProfileInfo::setExecutionCount(const BasicBlock *BB, double w) {
+  DEBUG(errs() << "Creating Block " << BB->getName() 
+               << " (weight: " << format("%.20g",w) << ")\n");
+  BlockInformation[BB->getParent()][BB] = w;
+}
+
+template<>
+void MachineProfileInfo::setExecutionCount(const MachineBasicBlock *MBB, double w) {
+  DEBUG(errs() << "Creating Block " << MBB->getBasicBlock()->getName()
+               << " (weight: " << format("%.20g",w) << ")\n");
+  BlockInformation[MBB->getParent()][MBB] = w;
+}
+
+template<>
+void ProfileInfo::addEdgeWeight(Edge e, double w) {
+  double oldw = getEdgeWeight(e);
+  assert (oldw != MissingValue && "Adding weight to Edge with no previous weight");
+  DEBUG(errs() << "Adding to Edge " << e
+               << " (new weight: " << format("%.20g",oldw + w) << ")\n");
+  EdgeInformation[getFunction(e)][e] = oldw + w;
+}
+
+template<>
+void ProfileInfo::addExecutionCount(const BasicBlock *BB, double w) {
+  double oldw = getExecutionCount(BB);
+  assert (oldw != MissingValue && "Adding weight to Block with no previous weight");
+  DEBUG(errs() << "Adding to Block " << BB->getName()
+               << " (new weight: " << format("%.20g",oldw + w) << ")\n");
+  BlockInformation[BB->getParent()][BB] = oldw + w;
+}
+
+template<>
+void ProfileInfo::removeBlock(const BasicBlock *BB) {
+  std::map<const Function*, BlockCounts>::iterator J =
+    BlockInformation.find(BB->getParent());
+  if (J == BlockInformation.end()) return;
+
+  DEBUG(errs() << "Deleting " << BB->getName() << "\n");
+  J->second.erase(BB);
+}
+
+template<>
+void ProfileInfo::removeEdge(Edge e) {
+  std::map<const Function*, EdgeWeights>::iterator J =
+    EdgeInformation.find(getFunction(e));
+  if (J == EdgeInformation.end()) return;
+
+  DEBUG(errs() << "Deleting " << e << "\n");
+  J->second.erase(e);
+}
+
+template<>
+void ProfileInfo::replaceEdge(const Edge &oldedge, const Edge &newedge) {
+  double w;
+  if ((w = getEdgeWeight(newedge)) == MissingValue) {
+    w = getEdgeWeight(oldedge);
+    DEBUG(errs() << "Replacing " << oldedge << " with " << newedge  << "\n");
+  } else {
+    w += getEdgeWeight(oldedge);
+    DEBUG(errs() << "Adding " << oldedge << " to " << newedge  << "\n");
+  }
+  setEdgeWeight(newedge,w);
+  removeEdge(oldedge);
+}
+
+template<>
+const BasicBlock *ProfileInfo::GetPath(const BasicBlock *Src, const BasicBlock *Dest,
+                                       Path &P, unsigned Mode) {
+  const BasicBlock *BB = 0;
+  bool hasFoundPath = false;
+
+  std::queue<const BasicBlock *> BFS;
+  BFS.push(Src);
+
+  while(BFS.size() && !hasFoundPath) {
+    BB = BFS.front();
+    BFS.pop();
+
+    succ_const_iterator Succ = succ_begin(BB), End = succ_end(BB);
+    if (Succ == End) {
+      P[0] = BB;
+      if (Mode & GetPathToExit) {
+        hasFoundPath = true;
+        BB = 0;
+      }
+    }
+    for(;Succ != End; ++Succ) {
+      if (P.find(*Succ) != P.end()) continue;
+      Edge e = getEdge(BB,*Succ);
+      if ((Mode & GetPathWithNewEdges) && (getEdgeWeight(e) != MissingValue)) continue;
+      P[*Succ] = BB;
+      BFS.push(*Succ);
+      if ((Mode & GetPathToDest) && *Succ == Dest) {
+        hasFoundPath = true;
+        BB = *Succ;
+        break;
+      }
+      if ((Mode & GetPathToValue) && (getExecutionCount(*Succ) != MissingValue)) {
+        hasFoundPath = true;
+        BB = *Succ;
+        break;
+      }
+    }
+  }
+
+  return BB;
+}
+
+template<>
+void ProfileInfo::divertFlow(const Edge &oldedge, const Edge &newedge) {
+  DEBUG(errs() << "Diverting " << oldedge << " via " << newedge );
+
+  // First check if the old edge was taken, if not, just delete it...
+  if (getEdgeWeight(oldedge) == 0) {
+    removeEdge(oldedge);
+    return;
+  }
+
+  Path P;
+  P[newedge.first] = 0;
+  P[newedge.second] = newedge.first;
+  const BasicBlock *BB = GetPath(newedge.second,oldedge.second,P,GetPathToExit | GetPathToDest);
+
+  double w = getEdgeWeight (oldedge);
+  DEBUG(errs() << ", Weight: " << format("%.20g",w) << "\n");
+  do {
+    const BasicBlock *Parent = P.find(BB)->second;
+    Edge e = getEdge(Parent,BB);
+    double oldw = getEdgeWeight(e);
+    double oldc = getExecutionCount(e.first);
+    setEdgeWeight(e, w+oldw);
+    if (Parent != oldedge.first) {
+      setExecutionCount(e.first, w+oldc);
+    }
+    BB = Parent;
+  } while (BB != newedge.first);
+  removeEdge(oldedge);
+}
+
 /// Replaces all occurrences of RmBB in the ProfilingInfo with DestBB.
 /// This checks all edges of the function the blocks reside in and replaces the
 /// occurrences of RmBB with DestBB.
+template<>
 void ProfileInfo::replaceAllUses(const BasicBlock *RmBB, 
                                  const BasicBlock *DestBB) {
-  DEBUG(errs() << "Replacing " << RmBB->getNameStr()
-               << " with " << DestBB->getNameStr() << "\n");
+  DEBUG(errs() << "Replacing " << RmBB->getName()
+               << " with " << DestBB->getName() << "\n");
   const Function *F = DestBB->getParent();
   std::map<const Function*, EdgeWeights>::iterator J =
     EdgeInformation.find(F);
   if (J == EdgeInformation.end()) return;
 
-  for (EdgeWeights::iterator I = J->second.begin(), E = J->second.end();
-       I != E; ++I) {
-    Edge e = I->first;
-    Edge newedge; bool foundedge = false;
+  Edge e, newedge;
+  bool erasededge = false;
+  EdgeWeights::iterator I = J->second.begin(), E = J->second.end();
+  while(I != E) {
+    e = (I++)->first;
+    bool foundedge = false; bool eraseedge = false;
     if (e.first == RmBB) {
-      newedge = getEdge(DestBB, e.second);
-      foundedge = true;
+      if (e.second == DestBB) {
+        eraseedge = true;
+      } else {
+        newedge = getEdge(DestBB, e.second);
+        foundedge = true;
+      }
     }
     if (e.second == RmBB) {
-      newedge = getEdge(e.first, DestBB);
-      foundedge = true;
+      if (e.first == DestBB) {
+        eraseedge = true;
+      } else {
+        newedge = getEdge(e.first, DestBB);
+        foundedge = true;
+      }
     }
     if (foundedge) {
-      double w = getEdgeWeight(e);
-      EdgeInformation[F][newedge] = w;
-      DEBUG(errs() << "Replacing " << e << " with " << newedge  << "\n");
-      J->second.erase(e);
+      replaceEdge(e, newedge);
+    }
+    if (eraseedge) {
+      if (erasededge) {
+        Edge newedge = getEdge(DestBB, DestBB);
+        replaceEdge(e, newedge);
+      } else {
+        removeEdge(e);
+        erasededge = true;
+      }
     }
   }
 }
@@ -119,6 +351,7 @@ void ProfileInfo::replaceAllUses(const BasicBlock *RmBB,
 /// Splits an edge in the ProfileInfo and redirects flow over NewBB.
 /// Since it's possible that there is more than one edge in the CFG from FirstBB
 /// to SecondBB, it's necessary to redirect the flow proportionally.
+template<>
 void ProfileInfo::splitEdge(const BasicBlock *FirstBB,
                             const BasicBlock *SecondBB,
                             const BasicBlock *NewBB,
@@ -153,7 +386,7 @@ void ProfileInfo::splitEdge(const BasicBlock *FirstBB,
 
   // We know now how many edges there are from FirstBB to SecondBB, reroute a
   // proportional part of the edge weight over NewBB.
-  double neww = w / succ_count;
+  double neww = floor(w / succ_count);
   ECs[n1] += neww;
   ECs[n2] += neww;
   BlockInformation[F][NewBB] += neww;
@@ -164,14 +397,666 @@ void ProfileInfo::splitEdge(const BasicBlock *FirstBB,
   }
 }
 
-raw_ostream& llvm::operator<<(raw_ostream &O, ProfileInfo::Edge E) {
+template<>
+void ProfileInfo::splitBlock(const BasicBlock *Old, const BasicBlock* New) {
+  const Function *F = Old->getParent();
+  std::map<const Function*, EdgeWeights>::iterator J =
+    EdgeInformation.find(F);
+  if (J == EdgeInformation.end()) return;
+
+  DEBUG(errs() << "Splitting " << Old->getName() << " to " << New->getName() << "\n");
+
+  std::set<Edge> Edges;
+  for (EdgeWeights::iterator ewi = J->second.begin(), ewe = J->second.end(); 
+       ewi != ewe; ++ewi) {
+    Edge old = ewi->first;
+    if (old.first == Old) {
+      Edges.insert(old);
+    }
+  }
+  for (std::set<Edge>::iterator EI = Edges.begin(), EE = Edges.end(); 
+       EI != EE; ++EI) {
+    Edge newedge = getEdge(New, EI->second);
+    replaceEdge(*EI, newedge);
+  }
+
+  double w = getExecutionCount(Old);
+  setEdgeWeight(getEdge(Old, New), w);
+  setExecutionCount(New, w);
+}
+
+template<>
+void ProfileInfo::splitBlock(const BasicBlock *BB, const BasicBlock* NewBB,
+                            BasicBlock *const *Preds, unsigned NumPreds) {
+  const Function *F = BB->getParent();
+  std::map<const Function*, EdgeWeights>::iterator J =
+    EdgeInformation.find(F);
+  if (J == EdgeInformation.end()) return;
+
+  DEBUG(errs() << "Splitting " << NumPreds << " Edges from " << BB->getName() 
+               << " to " << NewBB->getName() << "\n");
+
+  // Collect weight that was redirected over NewBB.
+  double newweight = 0;
+  
+  std::set<const BasicBlock *> ProcessedPreds;
+  // For all requested predecessors.
+  for (unsigned pred = 0; pred < NumPreds; ++pred) {
+    const BasicBlock * Pred = Preds[pred];
+    if (ProcessedPreds.insert(Pred).second) {
+      // Create edges and read old weight.
+      Edge oldedge = getEdge(Pred, BB);
+      Edge newedge = getEdge(Pred, NewBB);
+
+      // Remember how much weight was redirected.
+      newweight += getEdgeWeight(oldedge);
+    
+      replaceEdge(oldedge,newedge);
+    }
+  }
+
+  Edge newedge = getEdge(NewBB,BB);
+  setEdgeWeight(newedge, newweight);
+  setExecutionCount(NewBB, newweight);
+}
+
+template<>
+void ProfileInfo::transfer(const Function *Old, const Function *New) {
+  DEBUG(errs() << "Replacing Function " << Old->getName() << " with "
+               << New->getName() << "\n");
+  std::map<const Function*, EdgeWeights>::iterator J =
+    EdgeInformation.find(Old);
+  if(J != EdgeInformation.end()) {
+    EdgeInformation[New] = J->second;
+  }
+  EdgeInformation.erase(Old);
+  BlockInformation.erase(Old);
+  FunctionInformation.erase(Old);
+}
+
+static double readEdgeOrRemember(ProfileInfo::Edge edge, double w, ProfileInfo::Edge &tocalc,
+                                 unsigned &uncalc) {
+  if (w == ProfileInfo::MissingValue) {
+    tocalc = edge;
+    uncalc++;
+    return 0;
+  } else {
+    return w;
+  }
+}
+
+template<>
+bool ProfileInfo::CalculateMissingEdge(const BasicBlock *BB, Edge &removed, bool assumeEmptySelf) {
+  Edge edgetocalc;
+  unsigned uncalculated = 0;
+
+  // Collect the weights of all incoming and outgoing edges; remember edges
+  // that have no value.
+  double incount = 0;
+  SmallSet<const BasicBlock*,8> pred_visited;
+  pred_const_iterator bbi = pred_begin(BB), bbe = pred_end(BB);
+  if (bbi==bbe) {
+    Edge e = getEdge(0,BB);
+    incount += readEdgeOrRemember(e, getEdgeWeight(e) ,edgetocalc,uncalculated);
+  }
+  for (;bbi != bbe; ++bbi) {
+    if (pred_visited.insert(*bbi)) {
+      Edge e = getEdge(*bbi,BB);
+      incount += readEdgeOrRemember(e, getEdgeWeight(e) ,edgetocalc,uncalculated);
+    }
+  }
+
+  double outcount = 0;
+  SmallSet<const BasicBlock*,8> succ_visited;
+  succ_const_iterator sbbi = succ_begin(BB), sbbe = succ_end(BB);
+  if (sbbi==sbbe) {
+    Edge e = getEdge(BB,0);
+    if (getEdgeWeight(e) == MissingValue) {
+      double w = getExecutionCount(BB);
+      if (w != MissingValue) {
+        setEdgeWeight(e,w);
+        removed = e;
+      }
+    }
+    outcount += readEdgeOrRemember(e, getEdgeWeight(e), edgetocalc, uncalculated);
+  }
+  for (;sbbi != sbbe; ++sbbi) {
+    if (succ_visited.insert(*sbbi)) {
+      Edge e = getEdge(BB,*sbbi);
+      outcount += readEdgeOrRemember(e, getEdgeWeight(e), edgetocalc, uncalculated);
+    }
+  }
+
+  // If exactly one edge weight was missing, calculate it and remove it from
+  // the spanning tree.
+  if (uncalculated == 0 ) {
+    return true;
+  } else
+  if (uncalculated == 1) {
+    if (incount < outcount) {
+      EdgeInformation[BB->getParent()][edgetocalc] = outcount-incount;
+    } else {
+      EdgeInformation[BB->getParent()][edgetocalc] = incount-outcount;
+    }
+    DEBUG(errs() << "--Calc Edge Counter for " << edgetocalc << ": "
+                 << format("%.20g", getEdgeWeight(edgetocalc)) << "\n");
+    removed = edgetocalc;
+    return true;
+  } else 
+  if (uncalculated == 2 && assumeEmptySelf && edgetocalc.first == edgetocalc.second && incount == outcount) {
+    setEdgeWeight(edgetocalc, incount * 10);
+    removed = edgetocalc;
+    return true;
+  } else {
+    return false;
+  }
+}
+
+static void readEdge(ProfileInfo *PI, ProfileInfo::Edge e, double &calcw, std::set<ProfileInfo::Edge> &misscount) {
+  double w = PI->getEdgeWeight(e);
+  if (w != ProfileInfo::MissingValue) {
+    calcw += w;
+  } else {
+    misscount.insert(e);
+  }
+}
+
+template<>
+bool ProfileInfo::EstimateMissingEdges(const BasicBlock *BB) {
+  bool hasNoSuccessors = false;
+
+  double inWeight = 0;
+  std::set<Edge> inMissing;
+  std::set<const BasicBlock*> ProcessedPreds;
+  pred_const_iterator bbi = pred_begin(BB), bbe = pred_end(BB);
+  if (bbi == bbe) {
+    readEdge(this,getEdge(0,BB),inWeight,inMissing);
+  }
+  for( ; bbi != bbe; ++bbi ) {
+    if (ProcessedPreds.insert(*bbi).second) {
+      readEdge(this,getEdge(*bbi,BB),inWeight,inMissing);
+    }
+  }
+
+  double outWeight = 0;
+  std::set<Edge> outMissing;
+  std::set<const BasicBlock*> ProcessedSuccs;
+  succ_const_iterator sbbi = succ_begin(BB), sbbe = succ_end(BB);
+  if (sbbi == sbbe) {
+    readEdge(this,getEdge(BB,0),outWeight,outMissing);
+    hasNoSuccessors = true;
+  }
+  for ( ; sbbi != sbbe; ++sbbi ) {
+    if (ProcessedSuccs.insert(*sbbi).second) {
+      readEdge(this,getEdge(BB,*sbbi),outWeight,outMissing);
+    }
+  }
+
+  double share;
+  std::set<Edge>::iterator ei,ee;
+  if (inMissing.size() == 0 && outMissing.size() > 0) {
+    ei = outMissing.begin();
+    ee = outMissing.end();
+    share = inWeight/outMissing.size();
+    setExecutionCount(BB,inWeight);
+  } else
+  if (inMissing.size() > 0 && outMissing.size() == 0 && outWeight == 0) {
+    ei = inMissing.begin();
+    ee = inMissing.end();
+    share = 0;
+    setExecutionCount(BB,0);
+  } else
+  if (inMissing.size() == 0 && outMissing.size() == 0) {
+    setExecutionCount(BB,outWeight);
+    return true;
+  } else {
+    return false;
+  }
+  for ( ; ei != ee; ++ei ) {
+    setEdgeWeight(*ei,share);
+  }
+  return true;
+}
+
+template<>
+void ProfileInfo::repair(const Function *F) {
+//  if (getExecutionCount(&(F->getEntryBlock())) == 0) {
+//    for (Function::const_iterator FI = F->begin(), FE = F->end();
+//         FI != FE; ++FI) {
+//      const BasicBlock* BB = &(*FI);
+//      {
+//        pred_const_iterator NBB = pred_begin(BB), End = pred_end(BB);
+//        if (NBB == End) {
+//          setEdgeWeight(getEdge(0,BB),0);
+//        }
+//        for(;NBB != End; ++NBB) {
+//          setEdgeWeight(getEdge(*NBB,BB),0);
+//        }
+//      }
+//      {
+//        succ_const_iterator NBB = succ_begin(BB), End = succ_end(BB);
+//        if (NBB == End) {
+//          setEdgeWeight(getEdge(0,BB),0);
+//        }
+//        for(;NBB != End; ++NBB) {
+//          setEdgeWeight(getEdge(*NBB,BB),0);
+//        }
+//      }
+//    }
+//    return;
+//  }
+  // The set of BasicBlocks that are still unvisited.
+  std::set<const BasicBlock*> Unvisited;
+
+  // The set of return edges (edges from blocks with no successors).
+  std::set<Edge> ReturnEdges;
+  double ReturnWeight = 0;
+  
+  // First iterate over the whole function and collect:
+  // 1) The blocks in this function in the Unvisited set.
+  // 2) The return edges in the ReturnEdges set.
+  // 3) The flow that is leaving the function already via return edges.
+
+  // Data structure for searching the function.
+  std::queue<const BasicBlock *> BFS;
+  const BasicBlock *BB = &(F->getEntryBlock());
+  BFS.push(BB);
+  Unvisited.insert(BB);
+
+  while (BFS.size()) {
+    BB = BFS.front(); BFS.pop();
+    succ_const_iterator NBB = succ_begin(BB), End = succ_end(BB);
+    if (NBB == End) {
+      Edge e = getEdge(BB,0);
+      double w = getEdgeWeight(e);
+      if (w == MissingValue) {
+        // If the return edge has no value, try to read the value from the block.
+        double bw = getExecutionCount(BB);
+        if (bw != MissingValue) {
+          setEdgeWeight(e,bw);
+          ReturnWeight += bw;
+        } else {
+          // If both the return edge and the block provide no value, collect the edge.
+          ReturnEdges.insert(e);
+        }
+      } else {
+        // If the return edge has a proper value, collect it.
+        ReturnWeight += w;
+      }
+    }
+    for (;NBB != End; ++NBB) {
+      if (Unvisited.insert(*NBB).second) {
+        BFS.push(*NBB);
+      }
+    }
+  }
+
+  while (Unvisited.size() > 0) {
+    unsigned oldUnvisitedCount = Unvisited.size();
+    bool FoundPath = false;
+
+    // If there is only one edge left, calculate it.
+    if (ReturnEdges.size() == 1) {
+      ReturnWeight = getExecutionCount(&(F->getEntryBlock())) - ReturnWeight;
+
+      Edge e = *ReturnEdges.begin();
+      setEdgeWeight(e,ReturnWeight);
+      setExecutionCount(e.first,ReturnWeight);
+
+      Unvisited.erase(e.first);
+      ReturnEdges.erase(e);
+      continue;
+    }
+
+    // Calculate all blocks where only one edge is missing; this may also
+    // resolve further return edges.
+    std::set<const BasicBlock *>::iterator FI = Unvisited.begin(), FE = Unvisited.end();
+    while(FI != FE) {
+      const BasicBlock *BB = *FI; ++FI;
+      Edge e;
+      if(CalculateMissingEdge(BB,e,true)) {
+        if (BlockInformation[F].find(BB) == BlockInformation[F].end()) {
+          setExecutionCount(BB,getExecutionCount(BB));
+        }
+        Unvisited.erase(BB);
+        if (e.first != 0 && e.second == 0) {
+          ReturnEdges.erase(e);
+          ReturnWeight += getEdgeWeight(e);
+        }
+      }
+    }
+    if (oldUnvisitedCount > Unvisited.size()) continue;
+
+    // Estimate edge weights by dividing the flow proportionally.
+    FI = Unvisited.begin(), FE = Unvisited.end();
+    while(FI != FE) {
+      const BasicBlock *BB = *FI; ++FI;
+      const BasicBlock *Dest = 0;
+      bool AllEdgesHaveSameReturn = true;
+      // Check each successor; these must all end up in the same or an empty
+      // return block, otherwise it's dangerous to do an estimation on them.
+      for (succ_const_iterator Succ = succ_begin(BB), End = succ_end(BB);
+           Succ != End; ++Succ) {
+        Path P;
+        GetPath(*Succ, 0, P, GetPathToExit);
+        if (Dest && Dest != P[0]) {
+          AllEdgesHaveSameReturn = false;
+        }
+        Dest = P[0];
+      }
+      if (AllEdgesHaveSameReturn) {
+        if(EstimateMissingEdges(BB)) {
+          Unvisited.erase(BB);
+          break;
+        }
+      }
+    }
+    if (oldUnvisitedCount > Unvisited.size()) continue;
+
+    // Check if there is a path to a block that has a known value and redirect
+    // the flow accordingly.
+    FI = Unvisited.begin(), FE = Unvisited.end();
+    while(FI != FE && !FoundPath) {
+      // Fetch path.
+      const BasicBlock *BB = *FI; ++FI;
+      Path P;
+      const BasicBlock *Dest = GetPath(BB, 0, P, GetPathToValue);
+
+      // Calculate incoming flow.
+      double iw = 0; unsigned inmissing = 0; unsigned incount = 0; unsigned invalid = 0;
+      std::set<const BasicBlock *> Processed;
+      for (pred_const_iterator NBB = pred_begin(BB), End = pred_end(BB);
+           NBB != End; ++NBB) {
+        if (Processed.insert(*NBB).second) {
+          Edge e = getEdge(*NBB, BB);
+          double ew = getEdgeWeight(e);
+          if (ew != MissingValue) {
+            iw += ew;
+            invalid++;
+          } else {
+            // If the path contains the successor, this means it's a backedge;
+            // do not count it as missing.
+            if (P.find(*NBB) == P.end())
+              inmissing++;
+          }
+          incount++;
+        }
+      }
+      if (inmissing == incount) continue;
+      if (invalid == 0) continue;
+
+      // Subtract (already) outgoing flow.
+      Processed.clear();
+      for (succ_const_iterator NBB = succ_begin(BB), End = succ_end(BB);
+           NBB != End; ++NBB) {
+        if (Processed.insert(*NBB).second) {
+          Edge e = getEdge(BB, *NBB);
+          double ew = getEdgeWeight(e);
+          if (ew != MissingValue) {
+            iw -= ew;
+          }
+        }
+      }
+      if (iw < 0) continue;
+
+      // Check whether the receiving end of the path can handle the flow.
+      double ow = getExecutionCount(Dest);
+      Processed.clear();
+      for (succ_const_iterator NBB = succ_begin(BB), End = succ_end(BB);
+           NBB != End; ++NBB) {
+        if (Processed.insert(*NBB).second) {
+          Edge e = getEdge(BB, *NBB);
+          double ew = getEdgeWeight(e);
+          if (ew != MissingValue) {
+            ow -= ew;
+          }
+        }
+      }
+      if (ow < 0) continue;
+
+      // Determine how much flow shall be used.
+      double ew = getEdgeWeight(getEdge(P[Dest],Dest));
+      if (ew != MissingValue) {
+        ew = ew<ow?ew:ow;
+        ew = ew<iw?ew:iw;
+      } else {
+        if (inmissing == 0)
+          ew = iw;
+      }
+
+      // Create flow.
+      if (ew != MissingValue) {
+        do {
+          Edge e = getEdge(P[Dest],Dest);
+          if (getEdgeWeight(e) == MissingValue) {
+            setEdgeWeight(e,ew);
+            FoundPath = true;
+          }
+          Dest = P[Dest];
+        } while (Dest != BB);
+      }
+    }
+    if (FoundPath) continue;
+
+    // Calculate a block with self loop.
+    FI = Unvisited.begin(), FE = Unvisited.end();
+    while(FI != FE && !FoundPath) {
+      const BasicBlock *BB = *FI; ++FI;
+      bool SelfEdgeFound = false;
+      for (succ_const_iterator NBB = succ_begin(BB), End = succ_end(BB);
+           NBB != End; ++NBB) {
+        if (*NBB == BB) {
+          SelfEdgeFound = true;
+          break;
+        }
+      }
+      if (SelfEdgeFound) {
+        Edge e = getEdge(BB,BB);
+        if (getEdgeWeight(e) == MissingValue) {
+          double iw = 0;
+          std::set<const BasicBlock *> Processed;
+          for (pred_const_iterator NBB = pred_begin(BB), End = pred_end(BB);
+               NBB != End; ++NBB) {
+            if (Processed.insert(*NBB).second) {
+              Edge e = getEdge(*NBB, BB);
+              double ew = getEdgeWeight(e);
+              if (ew != MissingValue) {
+                iw += ew;
+              }
+            }
+          }
+          setEdgeWeight(e,iw * 10);
+          FoundPath = true;
+        }
+      }
+    }
+    if (FoundPath) continue;
+
+    // Determine backedges and set them to zero.
+    FI = Unvisited.begin(), FE = Unvisited.end();
+    while(FI != FE && !FoundPath) {
+      const BasicBlock *BB = *FI; ++FI;
+      const BasicBlock *Dest;
+      Path P;
+      bool BackEdgeFound = false;
+      for (pred_const_iterator NBB = pred_begin(BB), End = pred_end(BB);
+           NBB != End; ++NBB) {
+        Dest = GetPath(BB, *NBB, P, GetPathToDest | GetPathWithNewEdges);
+        if (Dest == *NBB) {
+          BackEdgeFound = true;
+          break;
+        }
+      }
+      if (BackEdgeFound) {
+        Edge e = getEdge(Dest,BB);
+        double w = getEdgeWeight(e);
+        if (w == MissingValue) {
+          setEdgeWeight(e,0);
+          FoundPath = true;
+        }
+        do {
+          Edge e = getEdge(P[Dest], Dest);
+          double w = getEdgeWeight(e);
+          if (w == MissingValue) {
+            setEdgeWeight(e,0);
+            FoundPath = true;
+          }
+          Dest = P[Dest];
+        } while (Dest != BB);
+      }
+    }
+    if (FoundPath) continue;
+
+    // Channel flow to return block.
+    FI = Unvisited.begin(), FE = Unvisited.end();
+    while(FI != FE && !FoundPath) {
+      const BasicBlock *BB = *FI; ++FI;
+
+      Path P;
+      const BasicBlock *Dest = GetPath(BB, 0, P, GetPathToExit | GetPathWithNewEdges);
+      Dest = P[0];
+      if (!Dest) continue;
+
+      if (getEdgeWeight(getEdge(Dest,0)) == MissingValue) {
+        // Calculate incoming flow.
+        double iw = 0;
+        std::set<const BasicBlock *> Processed;
+        for (pred_const_iterator NBB = pred_begin(BB), End = pred_end(BB);
+             NBB != End; ++NBB) {
+          if (Processed.insert(*NBB).second) {
+            Edge e = getEdge(*NBB, BB);
+            double ew = getEdgeWeight(e);
+            if (ew != MissingValue) {
+              iw += ew;
+            }
+          }
+        }
+        do {
+          Edge e = getEdge(P[Dest], Dest);
+          double w = getEdgeWeight(e);
+          if (w == MissingValue) {
+            setEdgeWeight(e,iw);
+            FoundPath = true;
+          } else {
+            assert(0 && "Edge should not have value already!");
+          }
+          Dest = P[Dest];
+        } while (Dest != BB);
+      }
+    }
+    if (FoundPath) continue;
+
+    // Speculatively set edges to zero.
+    FI = Unvisited.begin(), FE = Unvisited.end();
+    while(FI != FE && !FoundPath) {
+      const BasicBlock *BB = *FI; ++FI;
+
+      for (pred_const_iterator NBB = pred_begin(BB), End = pred_end(BB);
+           NBB != End; ++NBB) {
+        Edge e = getEdge(*NBB,BB);
+        double w = getEdgeWeight(e);
+        if (w == MissingValue) {
+          setEdgeWeight(e,0);
+          FoundPath = true;
+          break;
+        }
+      }
+    }
+    if (FoundPath) continue;
+
+    errs() << "{";
+    FI = Unvisited.begin(), FE = Unvisited.end();
+    while(FI != FE) {
+      const BasicBlock *BB = *FI; ++FI;
+      errs() << BB->getName();
+      if (FI != FE)
+        errs() << ",";
+    }
+    errs() << "}";
+
+    errs() << "ASSERT: could not repair function";
+    assert(0 && "could not repair function");
+  }
+
+  EdgeWeights J = EdgeInformation[F];
+  for (EdgeWeights::iterator EI = J.begin(), EE = J.end(); EI != EE; ++EI) {
+    Edge e = EI->first;
+
+    bool SuccFound = false;
+    if (e.first != 0) {
+      succ_const_iterator NBB = succ_begin(e.first), End = succ_end(e.first);
+      if (NBB == End) {
+        if (0 == e.second) {
+          SuccFound = true;
+        }
+      }
+      for (;NBB != End; ++NBB) {
+        if (*NBB == e.second) {
+          SuccFound = true;
+          break;
+        }
+      }
+      if (!SuccFound) {
+        removeEdge(e);
+      }
+    }
+  }
+}
+
+raw_ostream& operator<<(raw_ostream &O, const Function *F) {
+  return O << F->getName();
+}
+
+raw_ostream& operator<<(raw_ostream &O, const MachineFunction *MF) {
+  return O << MF->getFunction()->getName() << "(MF)";
+}
+
+raw_ostream& operator<<(raw_ostream &O, const BasicBlock *BB) {
+  return O << BB->getName();
+}
+
+raw_ostream& operator<<(raw_ostream &O, const MachineBasicBlock *MBB) {
+  return O << MBB->getBasicBlock()->getName() << "(MB)";
+}
+
+raw_ostream& operator<<(raw_ostream &O, std::pair<const BasicBlock *, const BasicBlock *> E) {
   O << "(";
-  O << (E.first ? E.first->getNameStr() : "0");
+
+  if (E.first)
+    O << E.first;
+  else
+    O << "0";
+
+  O << ",";
+
+  if (E.second)
+    O << E.second;
+  else
+    O << "0";
+
+  return O << ")";
+}
+
+raw_ostream& operator<<(raw_ostream &O, std::pair<const MachineBasicBlock *, const MachineBasicBlock *> E) {
+  O << "(";
+
+  if (E.first)
+    O << E.first;
+  else
+    O << "0";
+
   O << ",";
-  O << (E.second ? E.second->getNameStr() : "0");
+
+  if (E.second)
+    O << E.second;
+  else
+    O << "0";
+
   return O << ")";
 }
 
+} // namespace llvm
+
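The cleanup loop added above (the block iterating `EdgeInformation[F]`) drops every recorded edge whose target is no longer among the source block's successors. A minimal standalone sketch of that pruning logic, with simplified stand-in types (node 0 plays the role of the virtual entry/exit block; the container choices are illustrative, not the LLVM ones):

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

using Node = int;
using Edge = std::pair<Node, Node>;

// Drop every edge (A,B) whose target B is no longer a successor of A.
// An edge (A,0) is kept only when A has no successors (A's exit edge);
// edges whose source is the virtual node 0 are kept as-is.
void pruneStaleEdges(std::map<Edge, double> &weights,
                     const std::map<Node, std::vector<Node>> &succ) {
  for (auto it = weights.begin(); it != weights.end();) {
    const Edge &e = it->first;
    bool keep = true;
    if (e.first != 0) {
      auto s = succ.find(e.first);
      if (s == succ.end() || s->second.empty()) {
        keep = (e.second == 0); // no successors: only the exit edge survives
      } else {
        keep = false;
        for (Node n : s->second)
          if (n == e.second) { keep = true; break; }
      }
    }
    if (keep)
      ++it;
    else
      it = weights.erase(it);
  }
}
```

With successors {1 -> {2}, 2 -> {}}, the stale edge (1,3) is erased while (1,2) and the exit edge (2,0) survive.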
 //===----------------------------------------------------------------------===//
 //  NoProfile ProfileInfo implementation
 //
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp
index 9e1dfb6..cbd0430 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileInfoLoaderPass.cpp
@@ -74,6 +74,8 @@ X("profile-loader", "Load profile information from llvmprof.out", false, true);
 
 static RegisterAnalysisGroup<ProfileInfo> Y(X);
 
+const PassInfo *llvm::ProfileLoaderPassID = &X;
+
 ModulePass *llvm::createProfileLoaderPass() { return new LoaderPass(); }
 
 /// createProfileLoaderPass - This function returns a Pass that loads the
@@ -112,46 +114,9 @@ void LoaderPass::recurseBasicBlock(const BasicBlock *BB) {
     recurseBasicBlock(*bbi);
   }
 
-  Edge edgetocalc;
-  unsigned uncalculated = 0;
-
-  // collect weights of all incoming and outgoing edges, rememer edges that
-  // have no value
-  double incount = 0;
-  SmallSet<const BasicBlock*,8> pred_visited;
-  pred_const_iterator bbi = pred_begin(BB), bbe = pred_end(BB);
-  if (bbi==bbe) {
-    readEdgeOrRemember(getEdge(0, BB),edgetocalc,uncalculated,incount);
-  }
-  for (;bbi != bbe; ++bbi) {
-    if (pred_visited.insert(*bbi)) {
-      readEdgeOrRemember(getEdge(*bbi, BB),edgetocalc,uncalculated,incount);
-    }
-  }
-
-  double outcount = 0;
-  SmallSet<const BasicBlock*,8> succ_visited;
-  succ_const_iterator sbbi = succ_begin(BB), sbbe = succ_end(BB);
-  if (sbbi==sbbe) {
-    readEdgeOrRemember(getEdge(BB, 0),edgetocalc,uncalculated,outcount);
-  }
-  for (;sbbi != sbbe; ++sbbi) {
-    if (succ_visited.insert(*sbbi)) {
-      readEdgeOrRemember(getEdge(BB, *sbbi),edgetocalc,uncalculated,outcount);
-    }
-  }
-
-  // if exactly one edge weight was missing, calculate it and remove it from
-  // spanning tree
-  if (uncalculated == 1) {
-    if (incount < outcount) {
-      EdgeInformation[BB->getParent()][edgetocalc] = outcount-incount;
-    } else {
-      EdgeInformation[BB->getParent()][edgetocalc] = incount-outcount;
-    }
-    DEBUG(errs() << "--Calc Edge Counter for " << edgetocalc << ": "
-                 << format("%g", getEdgeWeight(edgetocalc)) << "\n");
-    SpanningTree.erase(edgetocalc);
+  Edge tocalc;
+  if (CalculateMissingEdge(BB, tocalc)) {
+    SpanningTree.erase(tocalc);
   }
 }
 
@@ -219,9 +184,9 @@ bool LoaderPass::runOnModule(Module &M) {
         }
       }
       while (SpanningTree.size() > 0) {
-#if 0
+
         unsigned size = SpanningTree.size();
-#endif
+
         BBisUnvisited.clear();
         for (std::set<Edge>::iterator ei = SpanningTree.begin(),
              ee = SpanningTree.end(); ei != ee; ++ei) {
@@ -231,17 +196,16 @@ bool LoaderPass::runOnModule(Module &M) {
         while (BBisUnvisited.size() > 0) {
           recurseBasicBlock(*BBisUnvisited.begin());
         }
-#if 0
+
         if (SpanningTree.size() == size) {
           DEBUG(errs()<<"{");
           for (std::set<Edge>::iterator ei = SpanningTree.begin(),
                ee = SpanningTree.end(); ei != ee; ++ei) {
-            DEBUG(errs()<<"("<<(ei->first?ei->first->getName():"0")<<","
-                        <<(ei->second?ei->second->getName():"0")<<"),");
+            DEBUG(errs()<< *ei <<",");
           }
           assert(0 && "No edge calculated!");
         }
-#endif
+
       }
     }
     if (ReadCount != Counters.size()) {
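The hunk above collapses the open-coded readEdgeOrRemember bookkeeping into a single CalculateMissingEdge call. The underlying idea, visible in the removed code, is flow conservation: a block's in-edge weights and out-edge weights must sum to the same value, so if exactly one weight is unknown it equals the absolute difference of the known sums. A standalone sketch of that rule (the sentinel value and helper name are illustrative, not the ProfileInfo API):

```cpp
#include <cmath>
#include <vector>

// Sentinel for an unread edge weight (ProfileInfo has a similar
// MissingValue; the concrete value here is just for illustration).
const double MissingValue = -1;

// If exactly one weight among a block's in- and out-edges is missing,
// flow conservation fixes it as the absolute difference of the known
// sums. Returns true iff a weight was filled in.
bool calculateMissingEdge(std::vector<double> &in, std::vector<double> &out) {
  double inSum = 0, outSum = 0;
  double *missing = nullptr;
  unsigned uncalculated = 0;
  for (double &w : in)
    if (w == MissingValue) { missing = &w; ++uncalculated; }
    else inSum += w;
  for (double &w : out)
    if (w == MissingValue) { missing = &w; ++uncalculated; }
    else outSum += w;
  if (uncalculated != 1)
    return false; // zero or several unknowns: underdetermined
  *missing = std::fabs(outSum - inSum);
  return true;
}
```

For example, with known in-weights {10, ?} and out-weight {25}, the missing in-edge must carry 15.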
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp
index 5f36294..36a80ba 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileVerifierPass.cpp
@@ -21,6 +21,7 @@
 #include "llvm/Support/CFG.h"
 #include "llvm/Support/InstIterator.h"
 #include "llvm/Support/raw_ostream.h"
+#include "llvm/Support/Format.h"
 #include "llvm/Support/Debug.h"
 #include <set>
 using namespace llvm;
@@ -29,44 +30,45 @@ static cl::opt<bool,false>
 ProfileVerifierDisableAssertions("profile-verifier-noassert",
      cl::desc("Disable assertions"));
 
-namespace {
-  class ProfileVerifierPass : public FunctionPass {
+namespace llvm {
+  template<class FType, class BType>
+  class ProfileVerifierPassT : public FunctionPass {
 
     struct DetailedBlockInfo {
-      const BasicBlock *BB;
-      double            BBWeight;
-      double            inWeight;
-      int               inCount;
-      double            outWeight;
-      int               outCount;
+      const BType *BB;
+      double      BBWeight;
+      double      inWeight;
+      int         inCount;
+      double      outWeight;
+      int         outCount;
     };
 
-    ProfileInfo *PI;
-    std::set<const BasicBlock*> BBisVisited;
-    std::set<const Function*>   FisVisited;
+    ProfileInfoT<FType, BType> *PI;
+    std::set<const BType*> BBisVisited;
+    std::set<const FType*>   FisVisited;
     bool DisableAssertions;
 
     // When debugging is enabled, the verifier prints a whole slew of debug
     // information, otherwise its just the assert. These are all the helper
     // functions.
     bool PrintedDebugTree;
-    std::set<const BasicBlock*> BBisPrinted;
+    std::set<const BType*> BBisPrinted;
     void debugEntry(DetailedBlockInfo*);
-    void printDebugInfo(const BasicBlock *BB);
+    void printDebugInfo(const BType *BB);
 
   public:
     static char ID; // Class identification, replacement for typeinfo
 
-    explicit ProfileVerifierPass () : FunctionPass(&ID) {
+    explicit ProfileVerifierPassT () : FunctionPass(&ID) {
       DisableAssertions = ProfileVerifierDisableAssertions;
     }
-    explicit ProfileVerifierPass (bool da) : FunctionPass(&ID), 
-                                             DisableAssertions(da) {
+    explicit ProfileVerifierPassT (bool da) : FunctionPass(&ID), 
+                                              DisableAssertions(da) {
     }
 
     void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesAll();
-      AU.addRequired<ProfileInfo>();
+      AU.addRequired<ProfileInfoT<FType, BType> >();
     }
 
     const char *getPassName() const {
@@ -74,271 +76,302 @@ namespace {
     }
 
     /// run - Verify the profile information.
-    bool runOnFunction(Function &F);
-    void recurseBasicBlock(const BasicBlock*);
+    bool runOnFunction(FType &F);
+    void recurseBasicBlock(const BType*);
 
-    bool   exitReachable(const Function*);
-    double ReadOrAssert(ProfileInfo::Edge);
+    bool   exitReachable(const FType*);
+    double ReadOrAssert(typename ProfileInfoT<FType, BType>::Edge);
     void   CheckValue(bool, const char*, DetailedBlockInfo*);
   };
-}  // End of anonymous namespace
-
-char ProfileVerifierPass::ID = 0;
-static RegisterPass<ProfileVerifierPass>
-X("profile-verifier", "Verify profiling information", false, true);
 
-namespace llvm {
-  FunctionPass *createProfileVerifierPass() {
-    return new ProfileVerifierPass(ProfileVerifierDisableAssertions); 
-  }
-}
-
-void ProfileVerifierPass::printDebugInfo(const BasicBlock *BB) {
-
-  if (BBisPrinted.find(BB) != BBisPrinted.end()) return;
-
-  double BBWeight = PI->getExecutionCount(BB);
-  if (BBWeight == ProfileInfo::MissingValue) { BBWeight = 0; }
-  double inWeight = 0;
-  int inCount = 0;
-  std::set<const BasicBlock*> ProcessedPreds;
-  for ( pred_const_iterator bbi = pred_begin(BB), bbe = pred_end(BB);
-        bbi != bbe; ++bbi ) {
-    if (ProcessedPreds.insert(*bbi).second) {
-      ProfileInfo::Edge E = PI->getEdge(*bbi,BB);
-      double EdgeWeight = PI->getEdgeWeight(E);
-      if (EdgeWeight == ProfileInfo::MissingValue) { EdgeWeight = 0; }
-      errs() << "calculated in-edge " << E << ": " << EdgeWeight << "\n";
-      inWeight += EdgeWeight;
-      inCount++;
+  typedef ProfileVerifierPassT<Function, BasicBlock> ProfileVerifierPass;
+
+  template<class FType, class BType>
+  void ProfileVerifierPassT<FType, BType>::printDebugInfo(const BType *BB) {
+
+    if (BBisPrinted.find(BB) != BBisPrinted.end()) return;
+
+    double BBWeight = PI->getExecutionCount(BB);
+    if (BBWeight == ProfileInfoT<FType, BType>::MissingValue) { BBWeight = 0; }
+    double inWeight = 0;
+    int inCount = 0;
+    std::set<const BType*> ProcessedPreds;
+    for ( pred_const_iterator bbi = pred_begin(BB), bbe = pred_end(BB);
+          bbi != bbe; ++bbi ) {
+      if (ProcessedPreds.insert(*bbi).second) {
+        typename ProfileInfoT<FType, BType>::Edge E = PI->getEdge(*bbi,BB);
+        double EdgeWeight = PI->getEdgeWeight(E);
+        if (EdgeWeight == ProfileInfoT<FType, BType>::MissingValue) { EdgeWeight = 0; }
+        errs() << "calculated in-edge " << E << ": " 
+               << format("%20.20g",EdgeWeight) << "\n";
+        inWeight += EdgeWeight;
+        inCount++;
+      }
     }
-  }
-  double outWeight = 0;
-  int outCount = 0;
-  std::set<const BasicBlock*> ProcessedSuccs;
-  for ( succ_const_iterator bbi = succ_begin(BB), bbe = succ_end(BB);
-        bbi != bbe; ++bbi ) {
-    if (ProcessedSuccs.insert(*bbi).second) {
-      ProfileInfo::Edge E = PI->getEdge(BB,*bbi);
-      double EdgeWeight = PI->getEdgeWeight(E);
-      if (EdgeWeight == ProfileInfo::MissingValue) { EdgeWeight = 0; }
-      errs() << "calculated out-edge " << E << ": " << EdgeWeight << "\n";
-      outWeight += EdgeWeight;
-      outCount++;
+    double outWeight = 0;
+    int outCount = 0;
+    std::set<const BType*> ProcessedSuccs;
+    for ( succ_const_iterator bbi = succ_begin(BB), bbe = succ_end(BB);
+          bbi != bbe; ++bbi ) {
+      if (ProcessedSuccs.insert(*bbi).second) {
+        typename ProfileInfoT<FType, BType>::Edge E = PI->getEdge(BB,*bbi);
+        double EdgeWeight = PI->getEdgeWeight(E);
+        if (EdgeWeight == ProfileInfoT<FType, BType>::MissingValue) { EdgeWeight = 0; }
+        errs() << "calculated out-edge " << E << ": " 
+               << format("%20.20g",EdgeWeight) << "\n";
+        outWeight += EdgeWeight;
+        outCount++;
+      }
+    }
+    errs() << "Block " << BB->getNameStr()                << " in " 
+           << BB->getParent()->getNameStr()               << ":"
+           << "BBWeight="  << format("%20.20g",BBWeight)  << ","
+           << "inWeight="  << format("%20.20g",inWeight)  << ","
+           << "inCount="   << inCount                     << ","
+           << "outWeight=" << format("%20.20g",outWeight) << ","
+           << "outCount="  << outCount                    << "\n";
+
+    // mark as visited and recurse into subnodes
+    BBisPrinted.insert(BB);
+    for ( succ_const_iterator bbi = succ_begin(BB), bbe = succ_end(BB); 
+          bbi != bbe; ++bbi ) {
+      printDebugInfo(*bbi);
     }
   }
-  errs()<<"Block "<<BB->getNameStr()<<" in "<<BB->getParent()->getNameStr()
-        <<",BBWeight="<<BBWeight<<",inWeight="<<inWeight<<",inCount="<<inCount
-        <<",outWeight="<<outWeight<<",outCount"<<outCount<<"\n";
-
-  // mark as visited and recurse into subnodes
-  BBisPrinted.insert(BB);
-  for ( succ_const_iterator bbi = succ_begin(BB), bbe = succ_end(BB); 
-        bbi != bbe; ++bbi ) {
-    printDebugInfo(*bbi);
-  }
-}
 
-void ProfileVerifierPass::debugEntry (DetailedBlockInfo *DI) {
-  errs() << "TROUBLE: Block " << DI->BB->getNameStr() << " in "
-         << DI->BB->getParent()->getNameStr()  << ":";
-  errs() << "BBWeight="  << DI->BBWeight   << ",";
-  errs() << "inWeight="  << DI->inWeight   << ",";
-  errs() << "inCount="   << DI->inCount    << ",";
-  errs() << "outWeight=" << DI->outWeight  << ",";
-  errs() << "outCount="  << DI->outCount   << "\n";
-  if (!PrintedDebugTree) {
-    PrintedDebugTree = true;
-    printDebugInfo(&(DI->BB->getParent()->getEntryBlock()));
+  template<class FType, class BType>
+  void ProfileVerifierPassT<FType, BType>::debugEntry (DetailedBlockInfo *DI) {
+    errs() << "TROUBLE: Block " << DI->BB->getNameStr()       << " in "
+           << DI->BB->getParent()->getNameStr()               << ":"
+           << "BBWeight="  << format("%20.20g",DI->BBWeight)  << ","
+           << "inWeight="  << format("%20.20g",DI->inWeight)  << ","
+           << "inCount="   << DI->inCount                     << ","
+           << "outWeight=" << format("%20.20g",DI->outWeight) << ","
+           << "outCount="  << DI->outCount                    << "\n";
+    if (!PrintedDebugTree) {
+      PrintedDebugTree = true;
+      printDebugInfo(&(DI->BB->getParent()->getEntryBlock()));
+    }
   }
-}
 
-// This compares A and B but considering maybe small differences.
-static bool Equals(double A, double B) { 
-  double maxRelativeError = 0.0000001;
-  if (A == B)
-    return true;
-  double relativeError;
-  if (fabs(B) > fabs(A)) 
-    relativeError = fabs((A - B) / B);
-  else 
-    relativeError = fabs((A - B) / A);
-  if (relativeError <= maxRelativeError) return true; 
-  return false; 
-}
+  // This compares A and B for equality.
+  static bool Equals(double A, double B) {
+    return A == B;
+  }
 
-// This checks if the function "exit" is reachable from an given function
-// via calls, this is necessary to check if a profile is valid despite the
-// counts not fitting exactly.
-bool ProfileVerifierPass::exitReachable(const Function *F) {
-  if (!F) return false;
+  // This checks if the function "exit" is reachable from a given function
+  // via calls; this is necessary to check if a profile is valid despite the
+  // counts not fitting exactly.
+  template<class FType, class BType>
+  bool ProfileVerifierPassT<FType, BType>::exitReachable(const FType *F) {
+    if (!F) return false;
 
-  if (FisVisited.count(F)) return false;
+    if (FisVisited.count(F)) return false;
 
-  Function *Exit = F->getParent()->getFunction("exit");
-  if (Exit == F) {
-    return true;
-  }
+    FType *Exit = F->getParent()->getFunction("exit");
+    if (Exit == F) {
+      return true;
+    }
 
-  FisVisited.insert(F);
-  bool exits = false;
-  for (const_inst_iterator I = inst_begin(F), E = inst_end(F); I != E; ++I) {
-    if (const CallInst *CI = dyn_cast<CallInst>(&*I)) {
-      exits |= exitReachable(CI->getCalledFunction());
-      if (exits) break;
+    FisVisited.insert(F);
+    bool exits = false;
+    for (const_inst_iterator I = inst_begin(F), E = inst_end(F); I != E; ++I) {
+      if (const CallInst *CI = dyn_cast<CallInst>(&*I)) {
+        FType *F = CI->getCalledFunction();
+        if (F) {
+          exits |= exitReachable(F);
+        } else {
+          // This is a call to a pointer, all bets are off...
+          exits = true;
+        }
+        if (exits) break;
+      }
     }
+    return exits;
   }
-  return exits;
-}
 
-#define ASSERTMESSAGE(M) \
-    errs() << (M) << "\n"; \
-    if (!DisableAssertions) assert(0 && (M));
-
-double ProfileVerifierPass::ReadOrAssert(ProfileInfo::Edge E) {
-  double EdgeWeight = PI->getEdgeWeight(E);
-  if (EdgeWeight == ProfileInfo::MissingValue) {
-    errs() << "Edge " << E << " in Function " 
-           << ProfileInfo::getFunction(E)->getNameStr() << ": ";
-    ASSERTMESSAGE("ASSERT:Edge has missing value");
-    return 0;
-  } else {
-    return EdgeWeight;
+  #define ASSERTMESSAGE(M) \
+    { errs() << "ASSERT:" << (M) << "\n"; \
+      if (!DisableAssertions) assert(0 && (M)); }
+
+  template<class FType, class BType>
+  double ProfileVerifierPassT<FType, BType>::ReadOrAssert(typename ProfileInfoT<FType, BType>::Edge E) {
+    double EdgeWeight = PI->getEdgeWeight(E);
+    if (EdgeWeight == ProfileInfoT<FType, BType>::MissingValue) {
+      errs() << "Edge " << E << " in Function " 
+             << ProfileInfoT<FType, BType>::getFunction(E)->getNameStr() << ": ";
+      ASSERTMESSAGE("Edge has missing value");
+      return 0;
+    } else {
+      if (EdgeWeight < 0) {
+        errs() << "Edge " << E << " in Function " 
+               << ProfileInfoT<FType, BType>::getFunction(E)->getNameStr() << ": ";
+        ASSERTMESSAGE("Edge has negative value");
+      }
+      return EdgeWeight;
+    }
   }
-}
 
-void ProfileVerifierPass::CheckValue(bool Error, const char *Message,
-                                     DetailedBlockInfo *DI) {
-  if (Error) {
-    DEBUG(debugEntry(DI));
-    errs() << "Block " << DI->BB->getNameStr() << " in Function " 
-           << DI->BB->getParent()->getNameStr() << ": ";
-    ASSERTMESSAGE(Message);
+  template<class FType, class BType>
+  void ProfileVerifierPassT<FType, BType>::CheckValue(bool Error, 
+                                                      const char *Message,
+                                                      DetailedBlockInfo *DI) {
+    if (Error) {
+      DEBUG(debugEntry(DI));
+      errs() << "Block " << DI->BB->getNameStr() << " in Function " 
+             << DI->BB->getParent()->getNameStr() << ": ";
+      ASSERTMESSAGE(Message);
+    }
+    return;
   }
-  return;
-}
 
-// This calculates the Information for a block and then recurses into the
-// successors.
-void ProfileVerifierPass::recurseBasicBlock(const BasicBlock *BB) {
-
-  // Break the recursion by remembering all visited blocks.
-  if (BBisVisited.find(BB) != BBisVisited.end()) return;
-
-  // Use a data structure to store all the information, this can then be handed
-  // to debug printers.
-  DetailedBlockInfo DI;
-  DI.BB = BB;
-  DI.outCount = DI.inCount = 0;
-  DI.inWeight = DI.outWeight = 0.0;
-
-  // Read predecessors.
-  std::set<const BasicBlock*> ProcessedPreds;
-  pred_const_iterator bpi = pred_begin(BB), bpe = pred_end(BB);
-  // If there are none, check for (0,BB) edge.
-  if (bpi == bpe) {
-    DI.inWeight += ReadOrAssert(PI->getEdge(0,BB));
-    DI.inCount++;
-  }
-  for (;bpi != bpe; ++bpi) {
-    if (ProcessedPreds.insert(*bpi).second) {
-      DI.inWeight += ReadOrAssert(PI->getEdge(*bpi,BB));
+  // This calculates the Information for a block and then recurses into the
+  // successors.
+  template<class FType, class BType>
+  void ProfileVerifierPassT<FType, BType>::recurseBasicBlock(const BType *BB) {
+
+    // Break the recursion by remembering all visited blocks.
+    if (BBisVisited.find(BB) != BBisVisited.end()) return;
+
+    // Use a data structure to store all the information, this can then be handed
+    // to debug printers.
+    DetailedBlockInfo DI;
+    DI.BB = BB;
+    DI.outCount = DI.inCount = 0;
+    DI.inWeight = DI.outWeight = 0;
+
+    // Read predecessors.
+    std::set<const BType*> ProcessedPreds;
+    pred_const_iterator bpi = pred_begin(BB), bpe = pred_end(BB);
+    // If there are none, check for (0,BB) edge.
+    if (bpi == bpe) {
+      DI.inWeight += ReadOrAssert(PI->getEdge(0,BB));
       DI.inCount++;
     }
-  }
+    for (;bpi != bpe; ++bpi) {
+      if (ProcessedPreds.insert(*bpi).second) {
+        DI.inWeight += ReadOrAssert(PI->getEdge(*bpi,BB));
+        DI.inCount++;
+      }
+    }
 
-  // Read successors.
-  std::set<const BasicBlock*> ProcessedSuccs;
-  succ_const_iterator bbi = succ_begin(BB), bbe = succ_end(BB);
-  // If there is an (0,BB) edge, consider it too. (This is done not only when
-  // there are no successors, but every time; not every function contains
-  // return blocks with no successors (think loop latch as return block)).
-  double w = PI->getEdgeWeight(PI->getEdge(BB,0));
-  if (w != ProfileInfo::MissingValue) {
-    DI.outWeight += w;
-    DI.outCount++;
-  }
-  for (;bbi != bbe; ++bbi) {
-    if (ProcessedSuccs.insert(*bbi).second) {
-      DI.outWeight += ReadOrAssert(PI->getEdge(BB,*bbi));
+    // Read successors.
+    std::set<const BType*> ProcessedSuccs;
+    succ_const_iterator bbi = succ_begin(BB), bbe = succ_end(BB);
+    // If there is an (0,BB) edge, consider it too. (This is done not only when
+    // there are no successors, but every time; not every function contains
+    // return blocks with no successors (think loop latch as return block)).
+    double w = PI->getEdgeWeight(PI->getEdge(BB,0));
+    if (w != ProfileInfoT<FType, BType>::MissingValue) {
+      DI.outWeight += w;
       DI.outCount++;
     }
-  }
+    for (;bbi != bbe; ++bbi) {
+      if (ProcessedSuccs.insert(*bbi).second) {
+        DI.outWeight += ReadOrAssert(PI->getEdge(BB,*bbi));
+        DI.outCount++;
+      }
+    }
 
-  // Read block weight.
-  DI.BBWeight = PI->getExecutionCount(BB);
-  CheckValue(DI.BBWeight == ProfileInfo::MissingValue,
-             "ASSERT:BasicBlock has missing value", &DI);
-
-  // Check if this block is a setjmp target.
-  bool isSetJmpTarget = false;
-  if (DI.outWeight > DI.inWeight) {
-    for (BasicBlock::const_iterator i = BB->begin(), ie = BB->end();
-         i != ie; ++i) {
-      if (const CallInst *CI = dyn_cast<CallInst>(&*i)) {
-        Function *F = CI->getCalledFunction();
-        if (F && (F->getNameStr() == "_setjmp")) {
-          isSetJmpTarget = true; break;
+    // Read block weight.
+    DI.BBWeight = PI->getExecutionCount(BB);
+    CheckValue(DI.BBWeight == ProfileInfoT<FType, BType>::MissingValue,
+               "BasicBlock has missing value", &DI);
+    CheckValue(DI.BBWeight < 0,
+               "BasicBlock has negative value", &DI);
+
+    // Check if this block is a setjmp target.
+    bool isSetJmpTarget = false;
+    if (DI.outWeight > DI.inWeight) {
+      for (typename BType::const_iterator i = BB->begin(), ie = BB->end();
+           i != ie; ++i) {
+        if (const CallInst *CI = dyn_cast<CallInst>(&*i)) {
+          FType *F = CI->getCalledFunction();
+          if (F && (F->getNameStr() == "_setjmp")) {
+            isSetJmpTarget = true; break;
+          }
         }
       }
     }
-  }
-  // Check if this block is eventually reaching exit.
-  bool isExitReachable = false;
-  if (DI.inWeight > DI.outWeight) {
-    for (BasicBlock::const_iterator i = BB->begin(), ie = BB->end();
-         i != ie; ++i) {
-      if (const CallInst *CI = dyn_cast<CallInst>(&*i)) {
-        FisVisited.clear();
-        isExitReachable |= exitReachable(CI->getCalledFunction());
-        if (isExitReachable) break;
+    // Check if this block is eventually reaching exit.
+    bool isExitReachable = false;
+    if (DI.inWeight > DI.outWeight) {
+      for (typename BType::const_iterator i = BB->begin(), ie = BB->end();
+           i != ie; ++i) {
+        if (const CallInst *CI = dyn_cast<CallInst>(&*i)) {
+          FType *F = CI->getCalledFunction();
+          if (F) {
+            FisVisited.clear();
+            isExitReachable |= exitReachable(F);
+          } else {
+            // This is a call to a pointer, all bets are off...
+            isExitReachable = true;
+          }
+          if (isExitReachable) break;
+        }
       }
     }
-  }
 
-  if (DI.inCount > 0 && DI.outCount == 0) {
-     // If this is a block with no successors.
-    if (!isSetJmpTarget) {
-      CheckValue(!Equals(DI.inWeight,DI.BBWeight), 
-                 "ASSERT:inWeight and BBWeight do not match", &DI);
+    if (DI.inCount > 0 && DI.outCount == 0) {
+       // If this is a block with no successors.
+      if (!isSetJmpTarget) {
+        CheckValue(!Equals(DI.inWeight,DI.BBWeight), 
+                   "inWeight and BBWeight do not match", &DI);
+      }
+    } else if (DI.inCount == 0 && DI.outCount > 0) {
+      // If this is a block with no predecessors.
+      if (!isExitReachable)
+        CheckValue(!Equals(DI.BBWeight,DI.outWeight), 
+                   "BBWeight and outWeight do not match", &DI);
+    } else {
+      // If this block has successors and predecessors.
+      if (DI.inWeight > DI.outWeight && !isExitReachable)
+        CheckValue(!Equals(DI.inWeight,DI.outWeight), 
+                   "inWeight and outWeight do not match", &DI);
+      if (DI.inWeight < DI.outWeight && !isSetJmpTarget)
+        CheckValue(!Equals(DI.inWeight,DI.outWeight), 
+                   "inWeight and outWeight do not match", &DI);
     }
-  } else if (DI.inCount == 0 && DI.outCount > 0) {
-    // If this is a block with no predecessors.
-    if (!isExitReachable)
-      CheckValue(!Equals(DI.BBWeight,DI.outWeight), 
-                 "ASSERT:BBWeight and outWeight do not match", &DI);
-  } else {
-    // If this block has successors and predecessors.
-    if (DI.inWeight > DI.outWeight && !isExitReachable)
-      CheckValue(!Equals(DI.inWeight,DI.outWeight), 
-                 "ASSERT:inWeight and outWeight do not match", &DI);
-    if (DI.inWeight < DI.outWeight && !isSetJmpTarget)
-      CheckValue(!Equals(DI.inWeight,DI.outWeight), 
-                 "ASSERT:inWeight and outWeight do not match", &DI);
-  }
 
 
-  // Mark this block as visited, rescurse into successors.
-  BBisVisited.insert(BB);
-  for ( succ_const_iterator bbi = succ_begin(BB), bbe = succ_end(BB); 
-        bbi != bbe; ++bbi ) {
-    recurseBasicBlock(*bbi);
+    // Mark this block as visited, recurse into successors.
+    BBisVisited.insert(BB);
+    for ( succ_const_iterator bbi = succ_begin(BB), bbe = succ_end(BB); 
+          bbi != bbe; ++bbi ) {
+      recurseBasicBlock(*bbi);
+    }
   }
-}
 
-bool ProfileVerifierPass::runOnFunction(Function &F) {
-  PI = &getAnalysis<ProfileInfo>();
+  template<class FType, class BType>
+  bool ProfileVerifierPassT<FType, BType>::runOnFunction(FType &F) {
+    PI = getAnalysisIfAvailable<ProfileInfoT<FType, BType> >();
+    if (!PI)
+      ASSERTMESSAGE("No ProfileInfo available");
+
+    // Prepare global variables.
+    PrintedDebugTree = false;
+    BBisVisited.clear();
+
+    // Fetch entry block and recurse into it.
+    const BType *entry = &F.getEntryBlock();
+    recurseBasicBlock(entry);
+
+    if (PI->getExecutionCount(&F) != PI->getExecutionCount(entry))
+      ASSERTMESSAGE("Function count and entry block count do not match");
 
-  // Prepare global variables.
-  PrintedDebugTree = false;
-  BBisVisited.clear();
+    return false;
+  }
+
+  template<class FType, class BType>
+  char ProfileVerifierPassT<FType, BType>::ID = 0;
+}
 
-  // Fetch entry block and recurse into it.
-  const BasicBlock *entry = &F.getEntryBlock();
-  recurseBasicBlock(entry);
+static RegisterPass<ProfileVerifierPass>
+X("profile-verifier", "Verify profiling information", false, true);
 
-  if (!DisableAssertions)
-    assert((PI->getExecutionCount(&F)==PI->getExecutionCount(entry)) &&
-           "Function count and entry block count do not match");
-  return false;
+namespace llvm {
+  FunctionPass *createProfileVerifierPass() {
+    return new ProfileVerifierPass(ProfileVerifierDisableAssertions); 
+  }
 }
+
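This commit replaces the verifier's tolerant comparison with exact equality (`return A == B;`). For reference, the removed relative-error comparison, reproduced as a self-contained sketch (the function name differs from the original to avoid suggesting it is still the pass's method):

```cpp
#include <cassert>
#include <cmath>

// Compare A and B allowing a small relative error, as the old
// ProfileVerifierPass::Equals did before this commit tightened it
// to an exact comparison.
static bool approxEquals(double A, double B) {
  const double maxRelativeError = 0.0000001;
  if (A == B)
    return true;
  // Divide by the larger magnitude so the result is symmetric in A and B.
  double relativeError = std::fabs(B) > std::fabs(A)
                             ? std::fabs((A - B) / B)
                             : std::fabs((A - B) / A);
  return relativeError <= maxRelativeError;
}
```

Two weights that differ by one part in 10^11 still compare equal here, while a 10% mismatch does not; the new exact comparison accepts neither.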
diff --git a/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp b/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp
index d674ee8..7157d47 100644
--- a/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp
@@ -357,7 +357,7 @@ Value *SCEVExpander::expandAddToGEP(const SCEV *const *op_begin,
   // without the other.
   SplitAddRecs(Ops, Ty, SE);
 
-  // Decend down the pointer's type and attempt to convert the other
+  // Descend down the pointer's type and attempt to convert the other
   // operands into GEP indices, at each level. The first index in a GEP
   // indexes into the array implied by the pointer operand; the rest of
   // the indices index into the element or field type selected by the
@@ -628,7 +628,7 @@ Value *SCEVExpander::visitAddRecExpr(const SCEVAddRecExpr *S) {
     BasicBlock *SaveInsertBB = Builder.GetInsertBlock();
     BasicBlock::iterator SaveInsertPt = Builder.GetInsertPoint();
     BasicBlock::iterator NewInsertPt =
-      next(BasicBlock::iterator(cast<Instruction>(V)));
+      llvm::next(BasicBlock::iterator(cast<Instruction>(V)));
     while (isa<PHINode>(NewInsertPt)) ++NewInsertPt;
     V = expandCodeFor(SE.getTruncateExpr(SE.getUnknown(V), Ty), 0,
                       NewInsertPt);
@@ -844,7 +844,7 @@ Value *SCEVExpander::expand(const SCEV *S) {
       if (L && S->hasComputableLoopEvolution(L))
         InsertPt = L->getHeader()->getFirstNonPHI();
       while (isInsertedInstruction(InsertPt))
-        InsertPt = next(BasicBlock::iterator(InsertPt));
+        InsertPt = llvm::next(BasicBlock::iterator(InsertPt));
       break;
     }
 
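The two hunks above qualify `next` as `llvm::next`, presumably so the call stays unambiguous once `std::next` is also visible. A sketch of the shape of that utility, assuming a simplified single-argument form (the real llvm/ADT/STLExtras.h version is more general):

```cpp
#include <cassert>
#include <iterator>
#include <vector>

namespace llvm {
// Simplified stand-in for llvm::next: advance a copy of the iterator
// by one and return it, leaving the original untouched.
template <typename ItTy>
ItTy next(ItTy it) {
  return ++it;
}
} // namespace llvm
```

With both `llvm::next` and `std::next` in scope, an unqualified call can become ambiguous; writing `llvm::next(It)` picks the intended overload explicitly.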
diff --git a/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp b/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
index 3e6af58..22c6e3b 100644
--- a/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
@@ -659,7 +659,7 @@ unsigned llvm::ComputeNumSignBits(Value *V, const TargetData *TD,
   switch (Operator::getOpcode(V)) {
   default: break;
   case Instruction::SExt:
-    Tmp = TyBits-cast<IntegerType>(U->getOperand(0)->getType())->getBitWidth();
+    Tmp = TyBits - U->getOperand(0)->getType()->getScalarSizeInBits();
     return ComputeNumSignBits(U->getOperand(0), TD, Depth+1) + Tmp;
     
   case Instruction::AShr:
@@ -1028,9 +1028,11 @@ static Value *GetLinearExpression(Value *V, APInt &Scale, APInt &Offset,
 const Value *llvm::DecomposeGEPExpression(const Value *V, int64_t &BaseOffs,
                  SmallVectorImpl<std::pair<const Value*, int64_t> > &VarIndices,
                                           const TargetData *TD) {
-  // FIXME: Should limit depth like getUnderlyingObject?
+  // Limit recursion depth to limit compile time in crazy cases.
+  unsigned MaxLookup = 6;
+  
   BaseOffs = 0;
-  while (1) {
+  do {
     // See if this is a bitcast or GEP.
     const Operator *Op = dyn_cast<Operator>(V);
     if (Op == 0) {
@@ -1128,7 +1130,10 @@ const Value *llvm::DecomposeGEPExpression(const Value *V, int64_t &BaseOffs,
     
     // Analyze the base pointer next.
     V = GEPOp->getOperand(0);
-  }
+  } while (--MaxLookup);
+  
+  // If the chain of expressions is too deep, just return early.
+  return V;
 }
 
 
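The ComputeNumSignBits hunk above swaps a cast to IntegerType for getScalarSizeInBits(), so vector sign-extensions are handled too. The rule it relies on: sign-extending an S-bit value to T bits adds T - S extra copies of the sign bit. A toy illustration of that identity for plain 32-bit integers (the helper is illustrative, not the LLVM routine):

```cpp
#include <cassert>
#include <cstdint>

// Count the identical leading bits (copies of the sign bit) of a
// 32-bit value. For x sign-extended from 8 bits,
//   numSignBits32(sext(x)) == (32 - 8) + numSignBits8(x).
unsigned numSignBits32(int32_t v) {
  uint32_t u = static_cast<uint32_t>(v);
  uint32_t sign = u >> 31;
  unsigned n = 0;
  for (int i = 31; i >= 0 && ((u >> i) & 1) == sign; --i)
    ++n;
  return n;
}
```

For instance, int8_t 5 has 5 sign bits; after sign-extension to 32 bits the result has 24 + 5 = 29.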
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp b/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp
index 1b7c9c6..cad1d3b 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp
+++ b/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp
@@ -540,6 +540,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   KEYWORD(arm_apcscc);
   KEYWORD(arm_aapcscc);
   KEYWORD(arm_aapcs_vfpcc);
+  KEYWORD(msp430_intrcc);
 
   KEYWORD(cc);
   KEYWORD(c);
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp b/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
index a92dbf8..0333eed 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
+++ b/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
@@ -581,6 +581,37 @@ bool LLParser::ParseStandaloneMetadata() {
   return false;
 }
 
+/// ParseInlineMetadata:
+///   !{type %instr}
+///   !{...} MDNode
+///   !"foo" MDString
+bool LLParser::ParseInlineMetadata(Value *&V, PerFunctionState &PFS) {
+  assert(Lex.getKind() == lltok::Metadata && "Only for Metadata");
+  V = 0;
+
+  Lex.Lex();
+  if (Lex.getKind() == lltok::lbrace) {
+    Lex.Lex();
+    if (ParseTypeAndValue(V, PFS) ||
+        ParseToken(lltok::rbrace, "expected end of metadata node"))
+      return true;
+
+    Value *Vals[] = { V };
+    V = MDNode::get(Context, Vals, 1);
+    return false;
+  }
+
+  // Standalone metadata reference
+  // !{ ..., !42, ... }
+  if (!ParseMDNode((MetadataBase *&)V))
+    return false;
+
+  // MDString:
+  // '!' STRINGCONSTANT
+  if (ParseMDString((MetadataBase *&)V)) return true;
+  return false;
+}
+
 /// ParseAlias:
 ///   ::= GlobalVar '=' OptionalVisibility 'alias' OptionalLinkage Aliasee
 /// Aliasee
@@ -1043,6 +1074,7 @@ bool LLParser::ParseOptionalVisibility(unsigned &Res) {
 ///   ::= 'arm_apcscc'
 ///   ::= 'arm_aapcscc'
 ///   ::= 'arm_aapcs_vfpcc'
+///   ::= 'msp430_intrcc'
 ///   ::= 'cc' UINT
 ///
 bool LLParser::ParseOptionalCallingConv(CallingConv::ID &CC) {
@@ -1056,6 +1088,7 @@ bool LLParser::ParseOptionalCallingConv(CallingConv::ID &CC) {
   case lltok::kw_arm_apcscc:     CC = CallingConv::ARM_APCS; break;
   case lltok::kw_arm_aapcscc:    CC = CallingConv::ARM_AAPCS; break;
   case lltok::kw_arm_aapcs_vfpcc:CC = CallingConv::ARM_AAPCS_VFP; break;
+  case lltok::kw_msp430_intrcc:  CC = CallingConv::MSP430_INTR; break;
   case lltok::kw_cc: {
       unsigned ArbitraryCC;
       Lex.Lex();
@@ -1377,15 +1410,23 @@ bool LLParser::ParseParameterList(SmallVectorImpl<ParamInfo> &ArgList,
     // Parse the argument.
     LocTy ArgLoc;
     PATypeHolder ArgTy(Type::getVoidTy(Context));
-    unsigned ArgAttrs1, ArgAttrs2;
+    unsigned ArgAttrs1 = Attribute::None;
+    unsigned ArgAttrs2 = Attribute::None;
     Value *V;
-    if (ParseType(ArgTy, ArgLoc) ||
-        ParseOptionalAttrs(ArgAttrs1, 0) ||
-        ParseValue(ArgTy, V, PFS) ||
-        // FIXME: Should not allow attributes after the argument, remove this in
-        // LLVM 3.0.
-        ParseOptionalAttrs(ArgAttrs2, 3))
+    if (ParseType(ArgTy, ArgLoc))
       return true;
+
+    if (Lex.getKind() == lltok::Metadata) {
+      if (ParseInlineMetadata(V, PFS))
+        return true;
+    } else {
+      if (ParseOptionalAttrs(ArgAttrs1, 0) ||
+          ParseValue(ArgTy, V, PFS) ||
+          // FIXME: Should not allow attributes after the argument, remove this
+          // in LLVM 3.0.
+          ParseOptionalAttrs(ArgAttrs2, 3))
+        return true;
+    }
     ArgList.push_back(ParamInfo(ArgLoc, V, ArgAttrs1|ArgAttrs2));
   }
 
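The `ParseParameterList` change above splits the argument parse into two paths keyed on the lookahead token, which is exactly why `ArgAttrs1`/`ArgAttrs2` now need explicit `Attribute::None` defaults: the new metadata path never touches them, so the old uninitialized reads would have leaked garbage into the attribute mask. A toy illustration of the same shape; the token kinds and attribute bits here are invented stand-ins for `lltok::Kind` and `Attribute`, not the real parser API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical token stream standing in for the LLLexer.
enum Kind { Metadata, Attr, Value };
struct Token { Kind K; unsigned AttrBits; };

// Parse one argument. Metadata arguments skip attribute parsing entirely,
// so the masks must be initialized up front; otherwise the metadata branch
// would OR together uninitialized values.
unsigned parseArgAttrs(const std::vector<Token> &Toks, size_t &Pos) {
  unsigned Attrs1 = 0, Attrs2 = 0;  // default: no attributes
  if (Toks[Pos].K == Metadata) {
    ++Pos;                          // metadata path: no attributes allowed
  } else {
    if (Toks[Pos].K == Attr)
      Attrs1 = Toks[Pos++].AttrBits;               // leading attributes
    ++Pos;                                          // the value itself
    if (Pos < Toks.size() && Toks[Pos].K == Attr)
      Attrs2 = Toks[Pos++].AttrBits;               // trailing attributes
  }
  return Attrs1 | Attrs2;           // combined mask is always well-defined
}
```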
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLParser.h b/libclamav/c++/llvm/lib/AsmParser/LLParser.h
index 1112dc4..d14b1cb 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLParser.h
+++ b/libclamav/c++/llvm/lib/AsmParser/LLParser.h
@@ -279,7 +279,9 @@ namespace llvm {
       LocTy Loc;
       return ParseTypeAndBasicBlock(BB, Loc, PFS);
     }
-  
+
+    bool ParseInlineMetadata(Value *&V, PerFunctionState &PFS);
+
     struct ParamInfo {
       LocTy Loc;
       Value *V;
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLToken.h b/libclamav/c++/llvm/lib/AsmParser/LLToken.h
index 797c32e..1165766 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLToken.h
+++ b/libclamav/c++/llvm/lib/AsmParser/LLToken.h
@@ -69,6 +69,7 @@ namespace lltok {
     kw_cc, kw_ccc, kw_fastcc, kw_coldcc,
     kw_x86_stdcallcc, kw_x86_fastcallcc,
     kw_arm_apcscc, kw_arm_aapcscc, kw_arm_aapcs_vfpcc,
+    kw_msp430_intrcc,
 
     kw_signext,
     kw_zeroext,
diff --git a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
index 8e3f8e7..bb61682 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
@@ -38,16 +38,19 @@ DebugMod("agg-antidep-debugmod",
                       cl::desc("Debug control for aggressive anti-dep breaker"),
                       cl::init(0), cl::Hidden);
 
-AggressiveAntiDepState::AggressiveAntiDepState(MachineBasicBlock *BB) :
-  GroupNodes(TargetRegisterInfo::FirstVirtualRegister, 0) {
-  // Initialize all registers to be in their own group. Initially we
-  // assign the register to the same-indexed GroupNode.
-  for (unsigned i = 0; i < TargetRegisterInfo::FirstVirtualRegister; ++i)
+AggressiveAntiDepState::AggressiveAntiDepState(const unsigned TargetRegs,
+                                               MachineBasicBlock *BB) :
+  NumTargetRegs(TargetRegs), GroupNodes(TargetRegs, 0) {
+
+  const unsigned BBSize = BB->size();
+  for (unsigned i = 0; i < NumTargetRegs; ++i) {
+    // Initialize all registers to be in their own group. Initially we
+    // assign the register to the same-indexed GroupNode.
     GroupNodeIndices[i] = i;
-
-  // Initialize the indices to indicate that no registers are live.
-  std::fill(KillIndices, array_endof(KillIndices), ~0u);
-  std::fill(DefIndices, array_endof(DefIndices), BB->size());
+    // Initialize the indices to indicate that no registers are live.
+    KillIndices[i] = ~0u;
+    DefIndices[i] = BBSize;
+  }
 }
 
 unsigned AggressiveAntiDepState::GetGroup(unsigned Reg)
@@ -64,7 +67,7 @@ void AggressiveAntiDepState::GetGroupRegs(
   std::vector<unsigned> &Regs,
   std::multimap<unsigned, AggressiveAntiDepState::RegisterReference> *RegRefs)
 {
-  for (unsigned Reg = 0; Reg != TargetRegisterInfo::FirstVirtualRegister; ++Reg) {
+  for (unsigned Reg = 0; Reg != NumTargetRegs; ++Reg) {
     if ((GetGroup(Reg) == Group) && (RegRefs->count(Reg) > 0))
       Regs.push_back(Reg);
   }
@@ -137,7 +140,7 @@ AggressiveAntiDepBreaker::~AggressiveAntiDepBreaker() {
 
 void AggressiveAntiDepBreaker::StartBlock(MachineBasicBlock *BB) {
   assert(State == NULL);
-  State = new AggressiveAntiDepState(BB);
+  State = new AggressiveAntiDepState(TRI->getNumRegs(), BB);
 
   bool IsReturnBlock = (!BB->empty() && BB->back().getDesc().isReturn());
   unsigned *KillIndices = State->GetKillIndices();
@@ -220,7 +223,7 @@ void AggressiveAntiDepBreaker::Observe(MachineInstr *MI, unsigned Count,
   DEBUG(errs() << "\tRegs:");
 
   unsigned *DefIndices = State->GetDefIndices();
-  for (unsigned Reg = 0; Reg != TargetRegisterInfo::FirstVirtualRegister; ++Reg) {
+  for (unsigned Reg = 0; Reg != TRI->getNumRegs(); ++Reg) {
     // If Reg is current live, then mark that it can't be renamed as
     // we don't know the extent of its live-range anymore (now that it
     // has been scheduled). If it is not live but was defined in the
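The `GroupNodes` vector referenced in the hunks above is a disjoint-union ("union-find") structure, now sized dynamically by `TRI->getNumRegs()` instead of the fixed `TargetRegisterInfo::FirstVirtualRegister` bound. A minimal union-find sketch of the same idea; LLVM's actual `AggressiveAntiDepState` differs in the details:

```cpp
#include <cassert>
#include <vector>

// Every register starts in its own group; a group is named by a
// representative index, as with GroupNodeIndices above.
struct RegGroups {
  std::vector<unsigned> Parent;
  explicit RegGroups(unsigned NumRegs) : Parent(NumRegs) {
    for (unsigned i = 0; i < NumRegs; ++i)
      Parent[i] = i;                     // each register is its own group
  }
  unsigned GetGroup(unsigned Reg) {      // find, with path compression
    if (Parent[Reg] != Reg)
      Parent[Reg] = GetGroup(Parent[Reg]);
    return Parent[Reg];
  }
  void UnionGroups(unsigned A, unsigned B) {
    Parent[GetGroup(A)] = GetGroup(B);   // merge A's group into B's
  }
};
```

Sizing the structure from the target's real register count avoids paying for the full virtual-register index space on every block.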
diff --git a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h
index 8154d2d..d385a21 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h
@@ -44,6 +44,10 @@ namespace llvm {
     } RegisterReference;
 
   private:
+    /// NumTargetRegs - Number of non-virtual target registers
+    /// (i.e. TRI->getNumRegs()).
+    const unsigned NumTargetRegs;
+
     /// GroupNodes - Implements a disjoint-union data structure to
     /// form register groups. A node is represented by an index into
     /// the vector. A node can "point to" itself to indicate that it
@@ -69,7 +73,7 @@ namespace llvm {
     unsigned DefIndices[TargetRegisterInfo::FirstVirtualRegister];
 
   public:
-    AggressiveAntiDepState(MachineBasicBlock *BB);
+    AggressiveAntiDepState(const unsigned TargetRegs, MachineBasicBlock *BB);
     
     /// GetKillIndices - Return the kill indices.
     unsigned *GetKillIndices() { return KillIndices; }
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
index 993cdbf..44fd176 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
@@ -1374,6 +1374,7 @@ void AsmPrinter::processDebugLoc(const MachineInstr *MI,
       unsigned L = DW->RecordSourceLine(CurDLT.Line, CurDLT.Col,
                                         CurDLT.Scope);
       printLabel(L);
+      O << '\n';
       DW->BeginScope(MI, L);
       PrevDLT = CurDLT;
     }
@@ -1837,15 +1838,16 @@ void AsmPrinter::EmitComments(const MachineInstr &MI) const {
 
     // Print source line info.
     O.PadToColumn(MAI->getCommentColumn());
-    O << MAI->getCommentString() << " SrcLine ";
-    if (DLT.Scope) {
-      DICompileUnit CU(DLT.Scope);
-      if (!CU.isNull())
-        O << CU.getFilename() << " ";
-    }
-    O << DLT.Line;
+    O << MAI->getCommentString() << ' ';
+    DIScope Scope(DLT.Scope);
+    // Omit the directory, because it's likely to be long and uninteresting.
+    if (!Scope.isNull())
+      O << Scope.getFilename();
+    else
+      O << "<unknown>";
+    O << ':' << DLT.Line;
     if (DLT.Col != 0)
-      O << ":" << DLT.Col;
+      O << ':' << DLT.Col;
     Newline = true;
   }
 
@@ -1857,35 +1859,40 @@ void AsmPrinter::EmitComments(const MachineInstr &MI) const {
 
   // We assume a single instruction only has a spill or reload, not
   // both.
+  const MachineMemOperand *MMO;
   if (TM.getInstrInfo()->isLoadFromStackSlotPostFE(&MI, FI)) {
     if (FrameInfo->isSpillSlotObjectIndex(FI)) {
+      MMO = *MI.memoperands_begin();
       if (Newline) O << '\n';
       O.PadToColumn(MAI->getCommentColumn());
-      O << MAI->getCommentString() << " Reload";
+      O << MAI->getCommentString() << ' ' << MMO->getSize() << "-byte Reload";
       Newline = true;
     }
   }
-  else if (TM.getInstrInfo()->hasLoadFromStackSlot(&MI, FI)) {
+  else if (TM.getInstrInfo()->hasLoadFromStackSlot(&MI, MMO, FI)) {
     if (FrameInfo->isSpillSlotObjectIndex(FI)) {
       if (Newline) O << '\n';
       O.PadToColumn(MAI->getCommentColumn());
-      O << MAI->getCommentString() << " Folded Reload";
+      O << MAI->getCommentString() << ' '
+        << MMO->getSize() << "-byte Folded Reload";
       Newline = true;
     }
   }
   else if (TM.getInstrInfo()->isStoreToStackSlotPostFE(&MI, FI)) {
     if (FrameInfo->isSpillSlotObjectIndex(FI)) {
+      MMO = *MI.memoperands_begin();
       if (Newline) O << '\n';
       O.PadToColumn(MAI->getCommentColumn());
-      O << MAI->getCommentString() << " Spill";
+      O << MAI->getCommentString() << ' ' << MMO->getSize() << "-byte Spill";
       Newline = true;
     }
   }
-  else if (TM.getInstrInfo()->hasStoreToStackSlot(&MI, FI)) {
+  else if (TM.getInstrInfo()->hasStoreToStackSlot(&MI, MMO, FI)) {
     if (FrameInfo->isSpillSlotObjectIndex(FI)) {
       if (Newline) O << '\n';
       O.PadToColumn(MAI->getCommentColumn());
-      O << MAI->getCommentString() << " Folded Spill";
+      O << MAI->getCommentString() << ' '
+        << MMO->getSize() << "-byte Folded Spill";
       Newline = true;
     }
   }
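The comment-emission hunks above replace the bare `Reload`/`Spill` notes with the access size taken from the instruction's `MachineMemOperand`. A hypothetical helper showing the string shape being produced; this is not the actual AsmPrinter API, just an illustration of the format:

```cpp
#include <sstream>
#include <string>

// Build an assembly comment like "; 4-byte Folded Spill" from the pieces
// the patch above pulls out of the MachineMemOperand.
std::string spillComment(const std::string &CommentString,
                         unsigned SizeInBytes, bool Folded, bool IsSpill) {
  std::ostringstream O;
  O << CommentString << ' ' << SizeInBytes << "-byte ";
  if (Folded)
    O << "Folded ";
  O << (IsSpill ? "Spill" : "Reload");
  return O.str();
}
```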
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h
index dc6a70a..cad8b89 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.h
@@ -274,7 +274,7 @@ namespace llvm {
   };
 
   //===--------------------------------------------------------------------===//
-  /// DIEString - A string value DIE.
+  /// DIEString - A string value DIE. This DIE keeps string reference only.
   ///
   class DIEString : public DIEValue {
     const StringRef Str;
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
index 9dad574..0b1a196 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
@@ -330,8 +330,8 @@ void DwarfDebug::addSInt(DIE *Die, unsigned Attribute,
   Die->addValue(Attribute, Form, Value);
 }
 
-/// addString - Add a string attribute data and value.
-///
+/// addString - Add a string attribute data and value. DIEString only
+/// keeps string reference. 
 void DwarfDebug::addString(DIE *Die, unsigned Attribute, unsigned Form,
                            const StringRef String) {
   DIEValue *Value = new DIEString(String);
@@ -393,7 +393,7 @@ void DwarfDebug::addSourceLine(DIE *Die, const DIVariable *V) {
     return;
 
   unsigned Line = V->getLineNumber();
-  unsigned FileID = findCompileUnit(V->getCompileUnit()).getID();
+  unsigned FileID = findCompileUnit(V->getCompileUnit())->getID();
   assert(FileID && "Invalid file id");
   addUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
   addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
@@ -407,7 +407,7 @@ void DwarfDebug::addSourceLine(DIE *Die, const DIGlobal *G) {
     return;
 
   unsigned Line = G->getLineNumber();
-  unsigned FileID = findCompileUnit(G->getCompileUnit()).getID();
+  unsigned FileID = findCompileUnit(G->getCompileUnit())->getID();
   assert(FileID && "Invalid file id");
   addUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
   addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
@@ -425,7 +425,7 @@ void DwarfDebug::addSourceLine(DIE *Die, const DISubprogram *SP) {
 
 
   unsigned Line = SP->getLineNumber();
-  unsigned FileID = findCompileUnit(SP->getCompileUnit()).getID();
+  unsigned FileID = findCompileUnit(SP->getCompileUnit())->getID();
   assert(FileID && "Invalid file id");
   addUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
   addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
@@ -440,7 +440,7 @@ void DwarfDebug::addSourceLine(DIE *Die, const DIType *Ty) {
     return;
 
   unsigned Line = Ty->getLineNumber();
-  unsigned FileID = findCompileUnit(CU).getID();
+  unsigned FileID = findCompileUnit(CU)->getID();
   assert(FileID && "Invalid file id");
   addUInt(Die, dwarf::DW_AT_decl_file, 0, FileID);
   addUInt(Die, dwarf::DW_AT_decl_line, 0, Line);
@@ -738,13 +738,49 @@ void DwarfDebug::addAddress(DIE *Die, unsigned Attribute,
   addBlock(Die, Attribute, 0, Block);
 }
 
+/// addToContextOwner - Add Die into the list of its context owner's children.
+void DwarfDebug::addToContextOwner(DIE *Die, DIDescriptor Context) {
+  if (Context.isNull())
+    ModuleCU->addDie(Die);
+  else if (Context.isType()) {
+    DIE *ContextDIE = getOrCreateTypeDIE(DIType(Context.getNode()));
+    ContextDIE->addChild(Die);
+  } else if (DIE *ContextDIE = ModuleCU->getDIE(Context.getNode()))
+    ContextDIE->addChild(Die);
+  else 
+    ModuleCU->addDie(Die);
+}
+
+/// getOrCreateTypeDIE - Find existing DIE or create new DIE for the
+/// given DIType.
+DIE *DwarfDebug::getOrCreateTypeDIE(DIType Ty) {
+  DIE *TyDIE = ModuleCU->getDIE(Ty.getNode());
+  if (TyDIE)
+    return TyDIE;
+
+  // Create new type.
+  TyDIE = new DIE(dwarf::DW_TAG_base_type);
+  ModuleCU->insertDIE(Ty.getNode(), TyDIE);
+  if (Ty.isBasicType())
+    constructTypeDIE(*TyDIE, DIBasicType(Ty.getNode()));
+  else if (Ty.isCompositeType())
+    constructTypeDIE(*TyDIE, DICompositeType(Ty.getNode()));
+  else {
+    assert(Ty.isDerivedType() && "Unknown kind of DIType");
+    constructTypeDIE(*TyDIE, DIDerivedType(Ty.getNode()));
+  }
+
+  addToContextOwner(TyDIE, Ty.getContext());
+  return TyDIE;
+}
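Note the ordering in `getOrCreateTypeDIE` above: the new DIE is registered via `ModuleCU->insertDIE` *before* `constructTypeDIE` runs, so a type that (directly or indirectly) refers back to itself finds the partially built DIE in the map instead of recursing forever. A stripped-down sketch of that get-or-create pattern; the types here are invented stand-ins for `MDNode`/`DIE`:

```cpp
#include <map>

// Hypothetical stand-in for a type descriptor that may reference itself.
struct TypeNode {
  const TypeNode *Element = nullptr;  // a type this type refers to
};

struct Die {
  const TypeNode *Src;
};

std::map<const TypeNode *, Die *> Cache;

// Register the new entry first, then build the types it references; the
// cache doubles as both memoization and a cycle breaker.
Die *getOrCreateTypeDie(const TypeNode *Ty) {
  auto It = Cache.find(Ty);
  if (It != Cache.end())
    return It->second;               // already built (or being built)
  Die *D = new Die{Ty};
  Cache[Ty] = D;                     // register first...
  if (Ty->Element)
    getOrCreateTypeDie(Ty->Element); // ...then recurse into references
  return D;
}
```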
+
 /// addType - Add a new type attribute to the specified entity.
-void DwarfDebug::addType(CompileUnit *DW_Unit, DIE *Entity, DIType Ty) {
+void DwarfDebug::addType(DIE *Entity, DIType Ty) {
   if (Ty.isNull())
     return;
 
   // Check for pre-existence.
-  DIEEntry *Entry = DW_Unit->getDIEEntry(Ty.getNode());
+  DIEEntry *Entry = ModuleCU->getDIEEntry(Ty.getNode());
 
   // If it exists then use the existing value.
   if (Entry) {
@@ -754,36 +790,17 @@ void DwarfDebug::addType(CompileUnit *DW_Unit, DIE *Entity, DIType Ty) {
 
   // Set up proxy.
   Entry = createDIEEntry();
-  DW_Unit->insertDIEEntry(Ty.getNode(), Entry);
+  ModuleCU->insertDIEEntry(Ty.getNode(), Entry);
 
   // Construct type.
-  DIE *Buffer = new DIE(dwarf::DW_TAG_base_type);
-  if (Ty.isBasicType())
-    constructTypeDIE(DW_Unit, *Buffer, DIBasicType(Ty.getNode()));
-  else if (Ty.isCompositeType())
-    constructTypeDIE(DW_Unit, *Buffer, DICompositeType(Ty.getNode()));
-  else {
-    assert(Ty.isDerivedType() && "Unknown kind of DIType");
-    constructTypeDIE(DW_Unit, *Buffer, DIDerivedType(Ty.getNode()));
-  }
+  DIE *Buffer = getOrCreateTypeDIE(Ty);
 
-  // Add debug information entry to entity and appropriate context.
-  DIE *Die = NULL;
-  DIDescriptor Context = Ty.getContext();
-  if (!Context.isNull())
-    Die = DW_Unit->getDIE(Context.getNode());
-
-  if (Die)
-    Die->addChild(Buffer);
-  else
-    DW_Unit->addDie(Buffer);
   Entry->setEntry(Buffer);
   Entity->addValue(dwarf::DW_AT_type, dwarf::DW_FORM_ref4, Entry);
 }
 
 /// constructTypeDIE - Construct basic type die from DIBasicType.
-void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
-                                  DIBasicType BTy) {
+void DwarfDebug::constructTypeDIE(DIE &Buffer, DIBasicType BTy) {
   // Get core information.
   StringRef Name = BTy.getName();
   Buffer.setTag(dwarf::DW_TAG_base_type);
@@ -798,8 +815,7 @@ void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
 }
 
 /// constructTypeDIE - Construct derived type die from DIDerivedType.
-void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
-                                  DIDerivedType DTy) {
+void DwarfDebug::constructTypeDIE(DIE &Buffer, DIDerivedType DTy) {
   // Get core information.
   StringRef Name = DTy.getName();
   uint64_t Size = DTy.getSizeInBits() >> 3;
@@ -812,10 +828,10 @@ void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
 
   // Map to main type, void will not have a type.
   DIType FromTy = DTy.getTypeDerivedFrom();
-  addType(DW_Unit, &Buffer, FromTy);
+  addType(&Buffer, FromTy);
 
   // Add name if not anonymous or intermediate type.
-  if (!Name.empty() && Tag != dwarf::DW_TAG_pointer_type)
+  if (!Name.empty())
     addString(&Buffer, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
 
   // Add size if non-zero (derived types might be zero-sized.)
@@ -828,8 +844,7 @@ void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
 }
 
 /// constructTypeDIE - Construct type DIE from DICompositeType.
-void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
-                                  DICompositeType CTy) {
+void DwarfDebug::constructTypeDIE(DIE &Buffer, DICompositeType CTy) {
   // Get core information.
   StringRef Name = CTy.getName();
 
@@ -840,7 +855,7 @@ void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
   switch (Tag) {
   case dwarf::DW_TAG_vector_type:
   case dwarf::DW_TAG_array_type:
-    constructArrayTypeDIE(DW_Unit, Buffer, &CTy);
+    constructArrayTypeDIE(Buffer, &CTy);
     break;
   case dwarf::DW_TAG_enumeration_type: {
     DIArray Elements = CTy.getTypeArray();
@@ -850,7 +865,7 @@ void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
       DIE *ElemDie = NULL;
       DIEnumerator Enum(Elements.getElement(i).getNode());
       if (!Enum.isNull()) {
-        ElemDie = constructEnumTypeDIE(DW_Unit, &Enum);
+        ElemDie = constructEnumTypeDIE(&Enum);
         Buffer.addChild(ElemDie);
       }
     }
@@ -860,7 +875,7 @@ void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
     // Add return type.
     DIArray Elements = CTy.getTypeArray();
     DIDescriptor RTy = Elements.getElement(0);
-    addType(DW_Unit, &Buffer, DIType(RTy.getNode()));
+    addType(&Buffer, DIType(RTy.getNode()));
 
     // Add prototype flag.
     addUInt(&Buffer, dwarf::DW_AT_prototyped, dwarf::DW_FORM_flag, 1);
@@ -869,7 +884,7 @@ void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
     for (unsigned i = 1, N = Elements.getNumElements(); i < N; ++i) {
       DIE *Arg = new DIE(dwarf::DW_TAG_formal_parameter);
       DIDescriptor Ty = Elements.getElement(i);
-      addType(DW_Unit, Arg, DIType(Ty.getNode()));
+      addType(Arg, DIType(Ty.getNode()));
       Buffer.addChild(Arg);
     }
   }
@@ -891,11 +906,9 @@ void DwarfDebug::constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
         continue;
       DIE *ElemDie = NULL;
       if (Element.getTag() == dwarf::DW_TAG_subprogram)
-        ElemDie = createSubprogramDIE(DW_Unit,
-                                      DISubprogram(Element.getNode()));
+        ElemDie = createMemberSubprogramDIE(DISubprogram(Element.getNode()));
       else
-        ElemDie = createMemberDIE(DW_Unit,
-                                  DIDerivedType(Element.getNode()));
+        ElemDie = createMemberDIE(DIDerivedType(Element.getNode()));
       Buffer.addChild(ElemDie);
     }
 
@@ -944,33 +957,32 @@ void DwarfDebug::constructSubrangeDIE(DIE &Buffer, DISubrange SR, DIE *IndexTy){
   addDIEEntry(DW_Subrange, dwarf::DW_AT_type, dwarf::DW_FORM_ref4, IndexTy);
   if (L)
     addSInt(DW_Subrange, dwarf::DW_AT_lower_bound, 0, L);
-  if (H)
-    addSInt(DW_Subrange, dwarf::DW_AT_upper_bound, 0, H);
+  addSInt(DW_Subrange, dwarf::DW_AT_upper_bound, 0, H);
 
   Buffer.addChild(DW_Subrange);
 }
 
 /// constructArrayTypeDIE - Construct array type DIE from DICompositeType.
-void DwarfDebug::constructArrayTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+void DwarfDebug::constructArrayTypeDIE(DIE &Buffer,
                                        DICompositeType *CTy) {
   Buffer.setTag(dwarf::DW_TAG_array_type);
   if (CTy->getTag() == dwarf::DW_TAG_vector_type)
     addUInt(&Buffer, dwarf::DW_AT_GNU_vector, dwarf::DW_FORM_flag, 1);
 
   // Emit derived type.
-  addType(DW_Unit, &Buffer, CTy->getTypeDerivedFrom());
+  addType(&Buffer, CTy->getTypeDerivedFrom());
   DIArray Elements = CTy->getTypeArray();
 
   // Get an anonymous type for index type.
-  DIE *IdxTy = DW_Unit->getIndexTyDie();
+  DIE *IdxTy = ModuleCU->getIndexTyDie();
   if (!IdxTy) {
     // Construct an anonymous type for index type.
     IdxTy = new DIE(dwarf::DW_TAG_base_type);
     addUInt(IdxTy, dwarf::DW_AT_byte_size, 0, sizeof(int32_t));
     addUInt(IdxTy, dwarf::DW_AT_encoding, dwarf::DW_FORM_data1,
             dwarf::DW_ATE_signed);
-    DW_Unit->addDie(IdxTy);
-    DW_Unit->setIndexTyDie(IdxTy);
+    ModuleCU->addDie(IdxTy);
+    ModuleCU->setIndexTyDie(IdxTy);
   }
 
   // Add subranges to array type.
@@ -982,7 +994,7 @@ void DwarfDebug::constructArrayTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
 }
 
 /// constructEnumTypeDIE - Construct enum type DIE from DIEnumerator.
-DIE *DwarfDebug::constructEnumTypeDIE(CompileUnit *DW_Unit, DIEnumerator *ETy) {
+DIE *DwarfDebug::constructEnumTypeDIE(DIEnumerator *ETy) {
   DIE *Enumerator = new DIE(dwarf::DW_TAG_enumerator);
   StringRef Name = ETy->getName();
   addString(Enumerator, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
@@ -992,8 +1004,7 @@ DIE *DwarfDebug::constructEnumTypeDIE(CompileUnit *DW_Unit, DIEnumerator *ETy) {
 }
 
 /// createGlobalVariableDIE - Create new DIE using GV.
-DIE *DwarfDebug::createGlobalVariableDIE(CompileUnit *DW_Unit,
-                                         const DIGlobalVariable &GV) {
+DIE *DwarfDebug::createGlobalVariableDIE(const DIGlobalVariable &GV) {
   // If the global variable was optmized out then no need to create debug info
   // entry.
   if (!GV.getGlobal()) return NULL;
@@ -1014,7 +1025,7 @@ DIE *DwarfDebug::createGlobalVariableDIE(CompileUnit *DW_Unit,
     addString(GVDie, dwarf::DW_AT_MIPS_linkage_name, dwarf::DW_FORM_string,
               LinkageName);
   }
-  addType(DW_Unit, GVDie, GV.getType());
+  addType(GVDie, GV.getType());
   if (!GV.isLocalToUnit())
     addUInt(GVDie, dwarf::DW_AT_external, dwarf::DW_FORM_flag, 1);
   addSourceLine(GVDie, &GV);
@@ -1030,13 +1041,13 @@ DIE *DwarfDebug::createGlobalVariableDIE(CompileUnit *DW_Unit,
 }
 
 /// createMemberDIE - Create new member DIE.
-DIE *DwarfDebug::createMemberDIE(CompileUnit *DW_Unit, const DIDerivedType &DT){
+DIE *DwarfDebug::createMemberDIE(const DIDerivedType &DT) {
   DIE *MemberDie = new DIE(DT.getTag());
   StringRef Name = DT.getName();
   if (!Name.empty())
     addString(MemberDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
   
-  addType(DW_Unit, MemberDie, DT.getTypeDerivedFrom());
+  addType(MemberDie, DT.getTypeDerivedFrom());
 
   addSourceLine(MemberDie, &DT);
 
@@ -1073,20 +1084,24 @@ DIE *DwarfDebug::createMemberDIE(CompileUnit *DW_Unit, const DIDerivedType &DT){
   addBlock(MemberDie, dwarf::DW_AT_data_member_location, 0, MemLocationDie);
 
   if (DT.isProtected())
-    addUInt(MemberDie, dwarf::DW_AT_accessibility, 0,
+    addUInt(MemberDie, dwarf::DW_AT_accessibility, dwarf::DW_FORM_flag,
             dwarf::DW_ACCESS_protected);
   else if (DT.isPrivate())
-    addUInt(MemberDie, dwarf::DW_AT_accessibility, 0,
+    addUInt(MemberDie, dwarf::DW_AT_accessibility, dwarf::DW_FORM_flag,
             dwarf::DW_ACCESS_private);
-
+  else if (DT.getTag() == dwarf::DW_TAG_inheritance)
+    addUInt(MemberDie, dwarf::DW_AT_accessibility, dwarf::DW_FORM_flag,
+            dwarf::DW_ACCESS_public);
+  if (DT.isVirtual())
+    addUInt(MemberDie, dwarf::DW_AT_virtuality, dwarf::DW_FORM_flag,
+            dwarf::DW_VIRTUALITY_virtual);
   return MemberDie;
 }
 
-/// createSubprogramDIE - Create new DIE using SP.
-DIE *DwarfDebug::createSubprogramDIE(CompileUnit *DW_Unit,
-                                     const DISubprogram &SP,
-                                     bool IsConstructor,
-                                     bool IsInlined) {
+/// createRawSubprogramDIE - Create new partially incomplete DIE. This is
+/// a helper routine used by createMemberSubprogramDIE and 
+/// createSubprogramDIE.
+DIE *DwarfDebug::createRawSubprogramDIE(const DISubprogram &SP) {
   DIE *SPDie = new DIE(dwarf::DW_TAG_subprogram);
   addString(SPDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, SP.getName());
 
@@ -1103,9 +1118,6 @@ DIE *DwarfDebug::createSubprogramDIE(CompileUnit *DW_Unit,
   }
   addSourceLine(SPDie, &SP);
 
-  DICompositeType SPTy = SP.getType();
-  DIArray Args = SPTy.getTypeArray();
-
   // Add prototyped tag, if C or ObjC.
   unsigned Lang = SP.getCompileUnit().getLanguage();
   if (Lang == dwarf::DW_LANG_C99 || Lang == dwarf::DW_LANG_C89 ||
@@ -1113,98 +1125,101 @@ DIE *DwarfDebug::createSubprogramDIE(CompileUnit *DW_Unit,
     addUInt(SPDie, dwarf::DW_AT_prototyped, dwarf::DW_FORM_flag, 1);
 
   // Add Return Type.
+  DICompositeType SPTy = SP.getType();
+  DIArray Args = SPTy.getTypeArray();
   unsigned SPTag = SPTy.getTag();
-  if (!IsConstructor) {
-    if (Args.isNull() || SPTag != dwarf::DW_TAG_subroutine_type)
-      addType(DW_Unit, SPDie, SPTy);
-    else
-      addType(DW_Unit, SPDie, DIType(Args.getElement(0).getNode()));
+
+  if (Args.isNull() || SPTag != dwarf::DW_TAG_subroutine_type)
+    addType(SPDie, SPTy);
+  else
+    addType(SPDie, DIType(Args.getElement(0).getNode()));
+
+  unsigned VK = SP.getVirtuality();
+  if (VK) {
+    addUInt(SPDie, dwarf::DW_AT_virtuality, dwarf::DW_FORM_flag, VK);
+    DIEBlock *Block = new DIEBlock();
+    addUInt(Block, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_constu);
+    addUInt(Block, 0, dwarf::DW_FORM_data1, SP.getVirtualIndex());
+    addBlock(SPDie, dwarf::DW_AT_vtable_elem_location, 0, Block);
+    ContainingTypeMap.insert(std::make_pair(SPDie, WeakVH(SP.getContainingType().getNode())));
   }
 
+  return SPDie;
+}
+
+/// createMemberSubprogramDIE - Create new member DIE using SP. This routine
+/// always returns a die with DW_AT_declaration attribute.
+DIE *DwarfDebug::createMemberSubprogramDIE(const DISubprogram &SP) {
+  DIE *SPDie = ModuleCU->getDIE(SP.getNode());
+  if (!SPDie)
+    SPDie = createSubprogramDIE(SP);
+
+  // If SPDie has DW_AT_declaration then reuse it.
+  if (!SP.isDefinition())
+    return SPDie;
+
+  // Otherwise create new DIE for the declaration. First push definition
+  // DIE at the top level.
+  if (TopLevelDIEs.insert(SPDie))
+    TopLevelDIEsVector.push_back(SPDie);
+
+  SPDie = createRawSubprogramDIE(SP);
+
+  // Add arguments. 
+  DICompositeType SPTy = SP.getType();
+  DIArray Args = SPTy.getTypeArray();
+  unsigned SPTag = SPTy.getTag();
+  if (SPTag == dwarf::DW_TAG_subroutine_type)
+    for (unsigned i = 1, N =  Args.getNumElements(); i < N; ++i) {
+      DIE *Arg = new DIE(dwarf::DW_TAG_formal_parameter);
+      addType(Arg, DIType(Args.getElement(i).getNode()));
+      addUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1); // ??
+      SPDie->addChild(Arg);
+    }
+
+  addUInt(SPDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
+  return SPDie;
+}
+
+/// createSubprogramDIE - Create new DIE using SP.
+DIE *DwarfDebug::createSubprogramDIE(const DISubprogram &SP) {
+  DIE *SPDie = ModuleCU->getDIE(SP.getNode());
+  if (SPDie)
+    return SPDie;
+
+  SPDie = createRawSubprogramDIE(SP);
+
   if (!SP.isDefinition()) {
     addUInt(SPDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
 
     // Add arguments. Do not add arguments for subprogram definition. They will
-    // be handled through RecordVariable.
+    // be handled while processing variables.
+    DICompositeType SPTy = SP.getType();
+    DIArray Args = SPTy.getTypeArray();
+    unsigned SPTag = SPTy.getTag();
+
     if (SPTag == dwarf::DW_TAG_subroutine_type)
       for (unsigned i = 1, N =  Args.getNumElements(); i < N; ++i) {
         DIE *Arg = new DIE(dwarf::DW_TAG_formal_parameter);
-        addType(DW_Unit, Arg, DIType(Args.getElement(i).getNode()));
+        addType(Arg, DIType(Args.getElement(i).getNode()));
         addUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1); // ??
         SPDie->addChild(Arg);
       }
   }
 
   // DW_TAG_inlined_subroutine may refer to this DIE.
-  DW_Unit->insertDIE(SP.getNode(), SPDie);
+  ModuleCU->insertDIE(SP.getNode(), SPDie);
   return SPDie;
 }
 
 /// findCompileUnit - Get the compile unit for the given descriptor.
 ///
-CompileUnit &DwarfDebug::findCompileUnit(DICompileUnit Unit) const {
+CompileUnit *DwarfDebug::findCompileUnit(DICompileUnit Unit) {
   DenseMap<Value *, CompileUnit *>::const_iterator I =
     CompileUnitMap.find(Unit.getNode());
-  assert(I != CompileUnitMap.end() && "Missing compile unit.");
-  return *I->second;
-}
-
-/// createDbgScopeVariable - Create a new scope variable.
-///
-DIE *DwarfDebug::createDbgScopeVariable(DbgVariable *DV, CompileUnit *Unit) {
-  // Get the descriptor.
-  const DIVariable &VD = DV->getVariable();
-  StringRef Name = VD.getName();
-  if (Name.empty())
-    return NULL;
-
-  // Translate tag to proper Dwarf tag.  The result variable is dropped for
-  // now.
-  unsigned Tag;
-  switch (VD.getTag()) {
-  case dwarf::DW_TAG_return_variable:
-    return NULL;
-  case dwarf::DW_TAG_arg_variable:
-    Tag = dwarf::DW_TAG_formal_parameter;
-    break;
-  case dwarf::DW_TAG_auto_variable:    // fall thru
-  default:
-    Tag = dwarf::DW_TAG_variable;
-    break;
-  }
-
-  // Define variable debug information entry.
-  DIE *VariableDie = new DIE(Tag);
-  addString(VariableDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
-
-  // Add source line info if available.
-  addSourceLine(VariableDie, &VD);
-
-  // Add variable type.
-  // FIXME: isBlockByrefVariable should be reformulated in terms of complex
-  // addresses instead.
-  if (VD.isBlockByrefVariable())
-    addType(Unit, VariableDie, getBlockByrefType(VD.getType(), Name));
-  else
-    addType(Unit, VariableDie, VD.getType());
-
-  // Add variable address.
-  // Variables for abstract instances of inlined functions don't get a
-  // location.
-  MachineLocation Location;
-  unsigned FrameReg;
-  int Offset = RI->getFrameIndexReference(*MF, DV->getFrameIndex(), FrameReg);
-  Location.set(FrameReg, Offset);
-
-
-  if (VD.hasComplexAddress())
-    addComplexAddress(DV, VariableDie, dwarf::DW_AT_location, Location);
-  else if (VD.isBlockByrefVariable())
-    addBlockByrefAddress(DV, VariableDie, dwarf::DW_AT_location, Location);
-  else
-    addAddress(VariableDie, dwarf::DW_AT_location, Location);
-
-  return VariableDie;
+  if (I == CompileUnitMap.end())
+    return constructCompileUnit(Unit.getNode());
+  return I->second;
 }
 
 /// getUpdatedDbgScope - Find or create DbgScope assicated with the instruction.
@@ -1305,19 +1320,6 @@ DIE *DwarfDebug::updateSubprogramScopeDIE(MDNode *SPNode) {
  if (!DISubprogram(SPNode).isLocalToUnit())
    addUInt(SPDie, dwarf::DW_AT_external, dwarf::DW_FORM_flag, 1);
 
- // If there are global variables at this scope then add their dies.
- for (SmallVector<WeakVH, 4>::iterator SGI = ScopedGVs.begin(),
-        SGE = ScopedGVs.end(); SGI != SGE; ++SGI) {
-   MDNode *N = dyn_cast_or_null<MDNode>(*SGI);
-   if (!N) continue;
-   DIGlobalVariable GV(N);
-   if (GV.getContext().getNode() == SPNode) {
-     DIE *ScopedGVDie = createGlobalVariableDIE(ModuleCU, GV);
-     if (ScopedGVDie)
-       SPDie->addChild(ScopedGVDie);
-   }
- }
- 
  return SPDie;
 }
 
@@ -1401,8 +1403,7 @@ DIE *DwarfDebug::constructInlinedScopeDIE(DbgScope *Scope) {
 
 
 /// constructVariableDIE - Construct a DIE for the given DbgVariable.
-DIE *DwarfDebug::constructVariableDIE(DbgVariable *DV,
-                                      DbgScope *Scope, CompileUnit *Unit) {
+DIE *DwarfDebug::constructVariableDIE(DbgVariable *DV, DbgScope *Scope) {
   // Get the descriptor.
   const DIVariable &VD = DV->getVariable();
   StringRef Name = VD.getName();
@@ -1451,9 +1452,9 @@ DIE *DwarfDebug::constructVariableDIE(DbgVariable *DV,
     // FIXME: isBlockByrefVariable should be reformulated in terms of complex
     // addresses instead.
     if (VD.isBlockByrefVariable())
-      addType(Unit, VariableDie, getBlockByrefType(VD.getType(), Name));
+      addType(VariableDie, getBlockByrefType(VD.getType(), Name));
     else
-      addType(Unit, VariableDie, VD.getType());
+      addType(VariableDie, VD.getType());
   }
 
   // Add variable address.
@@ -1522,7 +1523,7 @@ DIE *DwarfDebug::constructScopeDIE(DbgScope *Scope) {
   // Add variables to scope.
   SmallVector<DbgVariable *, 8> &Variables = Scope->getVariables();
   for (unsigned i = 0, N = Variables.size(); i < N; ++i) {
-    DIE *VariableDIE = constructVariableDIE(Variables[i], Scope, ModuleCU);
+    DIE *VariableDIE = constructVariableDIE(Variables[i], Scope);
     if (VariableDIE)
       ScopeDIE->addChild(VariableDIE);
   }
@@ -1579,7 +1580,7 @@ unsigned DwarfDebug::GetOrCreateSourceID(StringRef DirName, StringRef FileName)
   return SrcId;
 }
 
-void DwarfDebug::constructCompileUnit(MDNode *N) {
+CompileUnit *DwarfDebug::constructCompileUnit(MDNode *N) {
   DICompileUnit DIUnit(N);
   StringRef FN = DIUnit.getFilename();
   StringRef Dir = DIUnit.getDirectory();
@@ -1618,6 +1619,7 @@ void DwarfDebug::constructCompileUnit(MDNode *N) {
 
   CompileUnitMap[DIUnit.getNode()] = Unit;
   CompileUnits.push_back(Unit);
+  return Unit;
 }
 
 void DwarfDebug::constructGlobalVariableDIE(MDNode *N) {
@@ -1631,14 +1633,16 @@ void DwarfDebug::constructGlobalVariableDIE(MDNode *N) {
   if (ModuleCU->getDIE(DI_GV.getNode()))
     return;
 
-  DIE *VariableDie = createGlobalVariableDIE(ModuleCU, DI_GV);
+  DIE *VariableDie = createGlobalVariableDIE(DI_GV);
+  if (!VariableDie)
+    return;
 
   // Add to map.
   ModuleCU->insertDIE(N, VariableDie);
 
   // Add to context owner.
-  ModuleCU->getCUDie()->addChild(VariableDie);
-
+  addToContextOwner(VariableDie, DI_GV.getContext());
+  
   // Expose as global. FIXME - need to check external flag.
   ModuleCU->addGlobal(DI_GV.getName(), VariableDie);
 
@@ -1663,13 +1667,15 @@ void DwarfDebug::constructSubprogramDIE(MDNode *N) {
     // class type.
     return;
 
-  DIE *SubprogramDie = createSubprogramDIE(ModuleCU, SP);
+  DIE *SubprogramDie = createSubprogramDIE(SP);
 
   // Add to map.
   ModuleCU->insertDIE(N, SubprogramDie);
 
   // Add to context owner.
-  ModuleCU->getCUDie()->addChild(SubprogramDie);
+  if (SP.getContext().getNode() == SP.getCompileUnit().getNode())
+    if (TopLevelDIEs.insert(SubprogramDie))
+      TopLevelDIEsVector.push_back(SubprogramDie);
 
   // Expose as global.
   ModuleCU->addGlobal(SP.getName(), SubprogramDie);
@@ -1709,21 +1715,16 @@ void DwarfDebug::beginModule(Module *M, MachineModuleInfo *mmi) {
   if (!ModuleCU)
     ModuleCU = CompileUnits[0];
 
-  // Create DIEs for each of the externally visible global variables.
-  for (DebugInfoFinder::iterator I = DbgFinder.global_variable_begin(),
-         E = DbgFinder.global_variable_end(); I != E; ++I) {
-    DIGlobalVariable GV(*I);
-    if (GV.getContext().getNode() != GV.getCompileUnit().getNode())
-      ScopedGVs.push_back(*I);
-    else
-      constructGlobalVariableDIE(*I);
-  }
-
   // Create DIEs for each subprogram.
   for (DebugInfoFinder::iterator I = DbgFinder.subprogram_begin(),
          E = DbgFinder.subprogram_end(); I != E; ++I)
     constructSubprogramDIE(*I);
 
+  // Create DIEs for each global variable.
+  for (DebugInfoFinder::iterator I = DbgFinder.global_variable_begin(),
+         E = DbgFinder.global_variable_end(); I != E; ++I)
+    constructGlobalVariableDIE(*I);
+
   MMI = mmi;
   shouldEmit = true;
   MMI->setDebugInfoAvailability(true);
@@ -1770,6 +1771,22 @@ void DwarfDebug::endModule() {
     addUInt(ISP, dwarf::DW_AT_inline, 0, dwarf::DW_INL_inlined);
   }
 
+  // Insert top level DIEs.
+  for (SmallVector<DIE *, 4>::iterator TI = TopLevelDIEsVector.begin(),
+         TE = TopLevelDIEsVector.end(); TI != TE; ++TI)
+    ModuleCU->getCUDie()->addChild(*TI);
+
+  for (DenseMap<DIE *, WeakVH>::iterator CI = ContainingTypeMap.begin(),
+         CE = ContainingTypeMap.end(); CI != CE; ++CI) {
+    DIE *SPDie = CI->first;
+    MDNode *N = dyn_cast_or_null<MDNode>(CI->second);
+    if (!N) continue;
+    DIE *NDie = ModuleCU->getDIE(N);
+    if (!NDie) continue;
+    addDIEEntry(SPDie, dwarf::DW_AT_containing_type, dwarf::DW_FORM_ref4, NDie);
+    addDIEEntry(NDie, dwarf::DW_AT_containing_type, dwarf::DW_FORM_ref4, NDie);
+  }
+
   // Standard sections final addresses.
   Asm->OutStreamer.SwitchSection(Asm->getObjFileLowering().getTextSection());
   EmitLabel("text_end", 0);
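The `endModule` additions defer two jobs to module finalization: queued top-level DIEs are attached to the CU, and `DW_AT_containing_type` references are back-patched once the referenced type's DIE can exist. A generic sketch of that deferred back-patching (hypothetical `Node` type and name-keyed lookup, not the DWARF emitter's API):

```cpp
#include <map>
#include <string>
#include <vector>

struct Node {
  std::string Name;
  std::vector<Node*> Children;   // stands in for patched DIE references
};

std::map<std::string, Node*> ByName;
std::map<Node*, std::string> Pending;  // cross-refs recorded during emission

Node *makeNode(const std::string &Name) {
  Node *N = new Node{Name, {}};
  ByName[Name] = N;
  return N;
}

// At end of module, resolve what can be resolved; a missing referent
// (like a dropped WeakVH) is skipped rather than crashing.
void finalize() {
  for (auto &P : Pending) {
    auto It = ByName.find(P.second);
    if (It == ByName.end()) continue;
    P.first->Children.push_back(It->second);  // patch the deferred link
  }
}
```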
@@ -1898,6 +1915,7 @@ void DwarfDebug::endScope(const MachineInstr *MI) {
 
   unsigned Label = MMI->NextLabelID();
   Asm->printLabel(Label);
+  O << '\n';
 
   SmallVector<DbgScope *, 2> &SD = I->second;
   for (SmallVector<DbgScope *, 2>::iterator SDI = SD.begin(), SDE = SD.end();
@@ -2092,17 +2110,15 @@ void DwarfDebug::endFunction(MachineFunction *MF) {
                                                MMI->getFrameMoves()));
 
   // Clear debug info
-  if (CurrentFnDbgScope) {
-    CurrentFnDbgScope = NULL;
-    DbgScopeMap.clear();
-    DbgScopeBeginMap.clear();
-    DbgScopeEndMap.clear();
-    ConcreteScopes.clear();
-    AbstractScopesList.clear();
-  }
+  CurrentFnDbgScope = NULL;
+  DbgScopeMap.clear();
+  DbgScopeBeginMap.clear();
+  DbgScopeEndMap.clear();
+  ConcreteScopes.clear();
+  AbstractScopesList.clear();
 
   Lines.clear();
-
+  
   if (TimePassesIsEnabled)
     DebugTimer->stopTimer();
 }
@@ -2337,13 +2353,16 @@ void DwarfDebug::emitDIE(DIE *Die) {
   }
 }
 
-/// emitDebugInfo / emitDebugInfoPerCU - Emit the debug info section.
+/// emitDebugInfo - Emit the debug info section.
 ///
-void DwarfDebug::emitDebugInfoPerCU(CompileUnit *Unit) {
-  DIE *Die = Unit->getCUDie();
+void DwarfDebug::emitDebugInfo() {
+  // Start debug info section.
+  Asm->OutStreamer.SwitchSection(
+                            Asm->getObjFileLowering().getDwarfInfoSection());
+  DIE *Die = ModuleCU->getCUDie();
 
   // Emit the compile units header.
-  EmitLabel("info_begin", Unit->getID());
+  EmitLabel("info_begin", ModuleCU->getID());
 
   // Emit size of content not including length itself
   unsigned ContentSize = Die->getSize() +
@@ -2364,17 +2383,10 @@ void DwarfDebug::emitDebugInfoPerCU(CompileUnit *Unit) {
   Asm->EmitInt8(0); Asm->EOL("Extra Pad For GDB");
   Asm->EmitInt8(0); Asm->EOL("Extra Pad For GDB");
   Asm->EmitInt8(0); Asm->EOL("Extra Pad For GDB");
-  EmitLabel("info_end", Unit->getID());
+  EmitLabel("info_end", ModuleCU->getID());
 
   Asm->EOL();
-}
-
-void DwarfDebug::emitDebugInfo() {
-  // Start debug info section.
-  Asm->OutStreamer.SwitchSection(
-                            Asm->getObjFileLowering().getDwarfInfoSection());
 
-  emitDebugInfoPerCU(ModuleCU);
 }
 
 /// emitAbbreviations - Emit the abbreviation section.
@@ -2534,9 +2546,9 @@ void DwarfDebug::emitDebugLines() {
         std::pair<unsigned, unsigned> SourceID =
           getSourceDirectoryAndFileIds(LineInfo.getSourceID());
         O << '\t' << MAI->getCommentString() << ' '
-          << getSourceDirectoryName(SourceID.first) << ' '
+          << getSourceDirectoryName(SourceID.first) << '/'
           << getSourceFileName(SourceID.second)
-          <<" :" << utostr_32(LineInfo.getLine()) << '\n';
+          << ':' << utostr_32(LineInfo.getLine()) << '\n';
       }
 
       // Define the line address.
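The `emitDebugLines` hunk changes the assembly comment from `dir file :line` to the conventional `dir/file:line` form. A trivial sketch of the new layout (hypothetical helper, not the actual emitter code):

```cpp
#include <string>

// Formats a source location comment as "dir/file:line".
std::string locComment(const std::string &Dir, const std::string &File,
                       unsigned Line) {
  return Dir + "/" + File + ":" + std::to_string(Line);
}
```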
@@ -2672,24 +2684,30 @@ DwarfDebug::emitFunctionDebugFrame(const FunctionDebugFrameInfo&DebugFrameInfo){
   Asm->EOL();
 }
 
-void DwarfDebug::emitDebugPubNamesPerCU(CompileUnit *Unit) {
-  EmitDifference("pubnames_end", Unit->getID(),
-                 "pubnames_begin", Unit->getID(), true);
+/// emitDebugPubNames - Emit visible names into a debug pubnames section.
+///
+void DwarfDebug::emitDebugPubNames() {
+  // Start the dwarf pubnames section.
+  Asm->OutStreamer.SwitchSection(
+                          Asm->getObjFileLowering().getDwarfPubNamesSection());
+
+  EmitDifference("pubnames_end", ModuleCU->getID(),
+                 "pubnames_begin", ModuleCU->getID(), true);
   Asm->EOL("Length of Public Names Info");
 
-  EmitLabel("pubnames_begin", Unit->getID());
+  EmitLabel("pubnames_begin", ModuleCU->getID());
 
   Asm->EmitInt16(dwarf::DWARF_VERSION); Asm->EOL("DWARF Version");
 
   EmitSectionOffset("info_begin", "section_info",
-                    Unit->getID(), 0, true, false);
+                    ModuleCU->getID(), 0, true, false);
   Asm->EOL("Offset of Compilation Unit Info");
 
-  EmitDifference("info_end", Unit->getID(), "info_begin", Unit->getID(),
+  EmitDifference("info_end", ModuleCU->getID(), "info_begin", ModuleCU->getID(),
                  true);
   Asm->EOL("Compilation Unit Length");
 
-  const StringMap<DIE*> &Globals = Unit->getGlobals();
+  const StringMap<DIE*> &Globals = ModuleCU->getGlobals();
   for (StringMap<DIE*>::const_iterator
          GI = Globals.begin(), GE = Globals.end(); GI != GE; ++GI) {
     const char *Name = GI->getKeyData();
@@ -2700,21 +2718,11 @@ void DwarfDebug::emitDebugPubNamesPerCU(CompileUnit *Unit) {
   }
 
   Asm->EmitInt32(0); Asm->EOL("End Mark");
-  EmitLabel("pubnames_end", Unit->getID());
+  EmitLabel("pubnames_end", ModuleCU->getID());
 
   Asm->EOL();
 }
 
-/// emitDebugPubNames - Emit visible names into a debug pubnames section.
-///
-void DwarfDebug::emitDebugPubNames() {
-  // Start the dwarf pubnames section.
-  Asm->OutStreamer.SwitchSection(
-                          Asm->getObjFileLowering().getDwarfPubNamesSection());
-
-  emitDebugPubNamesPerCU(ModuleCU);
-}
-
 void DwarfDebug::emitDebugPubTypes() {
   // Start the dwarf pubnames section.
   Asm->OutStreamer.SwitchSection(
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
index 679d9b9..0e0064f 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
@@ -154,12 +154,14 @@ class DwarfDebug : public Dwarf {
   /// (at the end of the module) as DW_AT_inline.
   SmallPtrSet<DIE *, 4> InlinedSubprogramDIEs;
 
+  DenseMap<DIE *, WeakVH> ContainingTypeMap;
+
  /// AbstractSubprogramDIEs - Collection of abstract subprogram DIEs.
   SmallPtrSet<DIE *, 4> AbstractSubprogramDIEs;
 
-  /// ScopedGVs - Tracks global variables that are not at file scope.
-  /// For example void f() { static int b = 42; }
-  SmallVector<WeakVH, 4> ScopedGVs;
+  /// TopLevelDIEs - Collection of top level DIEs. 
+  SmallPtrSet<DIE *, 4> TopLevelDIEs;
+  SmallVector<DIE *, 4> TopLevelDIEsVector;
 
   typedef SmallVector<DbgScope *, 2> ScopeVector;
   typedef DenseMap<const MachineInstr *, ScopeVector>
@@ -307,53 +309,62 @@ class DwarfDebug : public Dwarf {
   void addBlockByrefAddress(DbgVariable *&DV, DIE *Die, unsigned Attribute,
                             const MachineLocation &Location);
 
+  /// addToContextOwner - Add Die into the list of its context owner's children.
+  void addToContextOwner(DIE *Die, DIDescriptor Context);
+
   /// addType - Add a new type attribute to the specified entity.
-  void addType(CompileUnit *DW_Unit, DIE *Entity, DIType Ty);
+  void addType(DIE *Entity, DIType Ty);
+
+  /// getOrCreateTypeDIE - Find existing DIE or create new DIE for the
+  /// given DIType.
+  DIE *getOrCreateTypeDIE(DIType Ty);
 
   void addPubTypes(DISubprogram SP);
 
   /// constructTypeDIE - Construct basic type die from DIBasicType.
-  void constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+  void constructTypeDIE(DIE &Buffer,
                         DIBasicType BTy);
 
   /// constructTypeDIE - Construct derived type die from DIDerivedType.
-  void constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+  void constructTypeDIE(DIE &Buffer,
                         DIDerivedType DTy);
 
   /// constructTypeDIE - Construct type DIE from DICompositeType.
-  void constructTypeDIE(CompileUnit *DW_Unit, DIE &Buffer,
+  void constructTypeDIE(DIE &Buffer,
                         DICompositeType CTy);
 
   /// constructSubrangeDIE - Construct subrange DIE from DISubrange.
   void constructSubrangeDIE(DIE &Buffer, DISubrange SR, DIE *IndexTy);
 
   /// constructArrayTypeDIE - Construct array type DIE from DICompositeType.
-  void constructArrayTypeDIE(CompileUnit *DW_Unit, DIE &Buffer, 
+  void constructArrayTypeDIE(DIE &Buffer, 
                              DICompositeType *CTy);
 
   /// constructEnumTypeDIE - Construct enum type DIE from DIEnumerator.
-  DIE *constructEnumTypeDIE(CompileUnit *DW_Unit, DIEnumerator *ETy);
+  DIE *constructEnumTypeDIE(DIEnumerator *ETy);
 
   /// createGlobalVariableDIE - Create new DIE using GV.
-  DIE *createGlobalVariableDIE(CompileUnit *DW_Unit,
-                               const DIGlobalVariable &GV);
+  DIE *createGlobalVariableDIE(const DIGlobalVariable &GV);
 
   /// createMemberDIE - Create new member DIE.
-  DIE *createMemberDIE(CompileUnit *DW_Unit, const DIDerivedType &DT);
+  DIE *createMemberDIE(const DIDerivedType &DT);
 
   /// createSubprogramDIE - Create new DIE using SP.
-  DIE *createSubprogramDIE(CompileUnit *DW_Unit,
-                           const DISubprogram &SP,
-                           bool IsConstructor = false,
-                           bool IsInlined = false);
+  DIE *createSubprogramDIE(const DISubprogram &SP);
 
-  /// findCompileUnit - Get the compile unit for the given descriptor. 
-  ///
-  CompileUnit &findCompileUnit(DICompileUnit Unit) const;
+  /// createMemberSubprogramDIE - Create new member DIE using SP. This
+  /// routine always returns a DIE with a DW_AT_declaration attribute.
 
-  /// createDbgScopeVariable - Create a new scope variable.
+  DIE *createMemberSubprogramDIE(const DISubprogram &SP);
+
+  /// createRawSubprogramDIE - Create new partially incomplete DIE. This is
+  /// a helper routine used by createMemberSubprogramDIE and 
+  /// createSubprogramDIE.
+  DIE *createRawSubprogramDIE(const DISubprogram &SP);
+
+  /// findCompileUnit - Get the compile unit for the given descriptor. 
   ///
-  DIE *createDbgScopeVariable(DbgVariable *DV, CompileUnit *Unit);
+  CompileUnit *findCompileUnit(DICompileUnit Unit);
 
  /// getUpdatedDbgScope - Find or create DbgScope associated with 
   /// the instruction. Initialize scope and update scope hierarchy.
@@ -384,7 +395,7 @@ class DwarfDebug : public Dwarf {
   DIE *constructInlinedScopeDIE(DbgScope *Scope);
 
   /// constructVariableDIE - Construct a DIE for the given DbgVariable.
-  DIE *constructVariableDIE(DbgVariable *DV, DbgScope *S, CompileUnit *Unit);
+  DIE *constructVariableDIE(DbgVariable *DV, DbgScope *S);
 
   /// constructScopeDIE - Construct a DIE for this scope.
   DIE *constructScopeDIE(DbgScope *Scope);
@@ -405,10 +416,8 @@ class DwarfDebug : public Dwarf {
   ///
   void computeSizeAndOffsets();
 
-  /// EmitDebugInfo / emitDebugInfoPerCU - Emit the debug info section.
+  /// EmitDebugInfo - Emit the debug info section.
   ///
-  void emitDebugInfoPerCU(CompileUnit *Unit);
-
   void emitDebugInfo();
 
   /// emitAbbreviations - Emit the abbreviation section.
@@ -432,8 +441,6 @@ class DwarfDebug : public Dwarf {
   /// section.
   void emitFunctionDebugFrame(const FunctionDebugFrameInfo &DebugFrameInfo);
 
-  void emitDebugPubNamesPerCU(CompileUnit *Unit);
-
   /// emitDebugPubNames - Emit visible names into a debug pubnames section.
   ///
   void emitDebugPubNames();
@@ -488,7 +495,7 @@ class DwarfDebug : public Dwarf {
   /// as well.
   unsigned GetOrCreateSourceID(StringRef DirName, StringRef FileName);
 
-  void constructCompileUnit(MDNode *N);
+  CompileUnit *constructCompileUnit(MDNode *N);
 
   void constructGlobalVariableDIE(MDNode *N);
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
index 8a62eb2..7ac8bda 100644
--- a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
@@ -427,7 +427,7 @@ static unsigned EstimateRuntime(MachineBasicBlock::iterator I,
 static void FixTail(MachineBasicBlock *CurMBB, MachineBasicBlock *SuccBB,
                     const TargetInstrInfo *TII) {
   MachineFunction *MF = CurMBB->getParent();
-  MachineFunction::iterator I = next(MachineFunction::iterator(CurMBB));
+  MachineFunction::iterator I = llvm::next(MachineFunction::iterator(CurMBB));
   MachineBasicBlock *TBB = 0, *FBB = 0;
   SmallVector<MachineOperand, 4> Cond;
   if (I != MF->end() &&
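These hunks qualify `next` as `llvm::next`. The motivation is name lookup: once C++0x's `std::next` is visible, an unqualified `next(It)` on a standard iterator can find both candidates via argument-dependent lookup, so call sites qualify the helper explicitly. A small self-contained illustration (toy helper in place of `llvm::next`):

```cpp
#include <iterator>
#include <vector>

namespace toy {
  // Pre-C++0x-style helper, analogous to llvm::next before std::next.
  template <typename It> It next(It I) { return ++I; }
}

int demo() {
  std::vector<int> V = {1, 2, 3};
  // An unqualified next(V.begin()) could be ambiguous: the vector
  // iterator's associated namespace std brings std::next into play,
  // so the call is qualified explicitly.
  return *toy::next(V.begin());
}
```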
@@ -805,7 +805,7 @@ bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
   // a compile-time infinite loop repeatedly doing and undoing the same
   // transformations.)
 
-  for (MachineFunction::iterator I = next(MF.begin()), E = MF.end();
+  for (MachineFunction::iterator I = llvm::next(MF.begin()), E = MF.end();
        I != E; ++I) {
     if (I->pred_size() >= 2 && I->pred_size() < TailMergeThreshold) {
       SmallPtrSet<MachineBasicBlock *, 8> UniquePreds;
@@ -833,7 +833,7 @@ bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
               continue;
             // This is the QBB case described above
             if (!FBB)
-              FBB = next(MachineFunction::iterator(PBB));
+              FBB = llvm::next(MachineFunction::iterator(PBB));
           }
           // Failing case:  the only way IBB can be reached from PBB is via
           // exception handling.  Happens for landing pads.  Would be nice
@@ -1140,7 +1140,7 @@ ReoptimizeBlock:
       // falls through into MBB and we can't understand the prior block's branch
       // condition.
       if (MBB->empty()) {
-        bool PredHasNoFallThrough = TII->BlockHasNoFallThrough(PrevBB);
+        bool PredHasNoFallThrough = !PrevBB.canFallThrough();
         if (PredHasNoFallThrough || !PriorUnAnalyzable ||
             !PrevBB.isSuccessor(MBB)) {
           // If the prior block falls through into us, turn it into an
@@ -1205,11 +1205,11 @@ ReoptimizeBlock:
     }
   }
 
-  // If the prior block doesn't fall through into this block, and if this
-  // block doesn't fall through into some other block, see if we can find a
-  // place to move this block where a fall-through will happen.
-  if (!PrevBB.canFallThrough()) {
-
+  // If the prior block doesn't fall through into this block and if this block
+  // doesn't fall through into some other block and it's not branching only to a
+  // landing pad, then see if we can find a place to move this block where a
+  // fall-through will happen.
+  if (!PrevBB.canFallThrough() && !MBB->BranchesToLandingPad(MBB)) {
     // Now we know that there was no fall-through into this block, check to
     // see if it has a fall-through into its successor.
     bool CurFallsThru = MBB->canFallThrough();
@@ -1221,28 +1221,32 @@ ReoptimizeBlock:
            E = MBB->pred_end(); PI != E; ++PI) {
         // Analyze the branch at the end of the pred.
         MachineBasicBlock *PredBB = *PI;
-        MachineFunction::iterator PredFallthrough = PredBB; ++PredFallthrough;
+        MachineFunction::iterator PredNextBB = PredBB; ++PredNextBB;
         MachineBasicBlock *PredTBB, *PredFBB;
         SmallVector<MachineOperand, 4> PredCond;
-        if (PredBB != MBB && !PredBB->canFallThrough() &&
-            !TII->AnalyzeBranch(*PredBB, PredTBB, PredFBB, PredCond, true)
+        if (PredBB != MBB && !PredBB->canFallThrough()
+            && !TII->AnalyzeBranch(*PredBB, PredTBB, PredFBB, PredCond, true)
             && (!CurFallsThru || !CurTBB || !CurFBB)
             && (!CurFallsThru || MBB->getNumber() >= PredBB->getNumber())) {
-          // If the current block doesn't fall through, just move it.
-          // If the current block can fall through and does not end with a
-          // conditional branch, we need to append an unconditional jump to
-          // the (current) next block.  To avoid a possible compile-time
-          // infinite loop, move blocks only backward in this case.
-          // Also, if there are already 2 branches here, we cannot add a third;
-          // this means we have the case
-          // Bcc next
-          // B elsewhere
-          // next:
+          // If the current block doesn't fall through, just move it.  If the
+          // current block can fall through and does not end with a conditional
+          // branch, we need to append an unconditional jump to the (current)
+          // next block.  To avoid a possible compile-time infinite loop, move
+          // blocks only backward in this case.
+          // 
+          // Also, if there are already 2 branches here, we cannot add a third.
+          // I.e. we have the case:
+          // 
+          //     Bcc next
+          //     B elsewhere
+          //   next:
           if (CurFallsThru) {
-            MachineBasicBlock *NextBB = next(MachineFunction::iterator(MBB));
+            MachineBasicBlock *NextBB =
+              llvm::next(MachineFunction::iterator(MBB));
             CurCond.clear();
             TII->InsertBranch(*MBB, NextBB, 0, CurCond);
           }
+
           MBB->moveAfter(PredBB);
           MadeChange = true;
           goto ReoptimizeBlock;
diff --git a/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt b/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
index 6f86614..1fac395 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
@@ -35,7 +35,9 @@ add_llvm_library(LLVMCodeGen
   MachinePassRegistry.cpp
   MachineRegisterInfo.cpp
   MachineSink.cpp
+  MachineSSAUpdater.cpp
   MachineVerifier.cpp
+  MaxStackAlignment.cpp
   ObjectCodeEmitter.cpp
   OcamlGC.cpp
   PHIElimination.cpp
diff --git a/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp b/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
index e9844d8..ff71f6b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
@@ -182,7 +182,7 @@ bool CodePlacementOpt::EliminateUnconditionalJumpsToTop(MachineFunction &MF,
       // Move it and all the blocks that can reach it via fallthrough edges
       // exclusively, to keep existing fallthrough edges intact.
       MachineFunction::iterator Begin = Pred;
-      MachineFunction::iterator End = next(Begin);
+      MachineFunction::iterator End = llvm::next(Begin);
       while (Begin != MF.begin()) {
         MachineFunction::iterator Prior = prior(Begin);
         if (Prior == MF.begin())
@@ -255,7 +255,8 @@ bool CodePlacementOpt::MoveDiscontiguousLoopBlocks(MachineFunction &MF,
  // to the top of the loop to avoid losing that fallthrough. Otherwise append
   // them to the bottom, even if it previously had a fallthrough, on the theory
   // that it's worth an extra branch to keep the loop contiguous.
-  MachineFunction::iterator InsertPt = next(MachineFunction::iterator(BotMBB));
+  MachineFunction::iterator InsertPt =
+    llvm::next(MachineFunction::iterator(BotMBB));
   bool InsertAtTop = false;
   if (TopMBB != MF.begin() &&
       !HasFallthrough(prior(MachineFunction::iterator(TopMBB))) &&
@@ -268,7 +269,7 @@ bool CodePlacementOpt::MoveDiscontiguousLoopBlocks(MachineFunction &MF,
   // with the loop header.
   SmallPtrSet<MachineBasicBlock *, 8> ContiguousBlocks;
   for (MachineFunction::iterator I = TopMBB,
-       E = next(MachineFunction::iterator(BotMBB)); I != E; ++I)
+       E = llvm::next(MachineFunction::iterator(BotMBB)); I != E; ++I)
     ContiguousBlocks.insert(I);
 
  // Find non-contiguous blocks and fix them.
@@ -301,7 +302,7 @@ bool CodePlacementOpt::MoveDiscontiguousLoopBlocks(MachineFunction &MF,
       // Process this block and all loop blocks contiguous with it, to keep
       // them in their relative order.
       MachineFunction::iterator Begin = BB;
-      MachineFunction::iterator End = next(MachineFunction::iterator(BB));
+      MachineFunction::iterator End = llvm::next(MachineFunction::iterator(BB));
       for (; End != MF.end(); ++End) {
         if (!L->contains(End)) break;
         if (!HasAnalyzableTerminator(End)) break;
diff --git a/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp b/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
index 1b39fec..3c7961c 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
@@ -43,8 +43,11 @@ void CriticalAntiDepBreaker::StartBlock(MachineBasicBlock *BB) {
             static_cast<const TargetRegisterClass *>(0));
 
   // Initialize the indices to indicate that no registers are live.
-  std::fill(KillIndices, array_endof(KillIndices), ~0u);
-  std::fill(DefIndices, array_endof(DefIndices), BB->size());
+  const unsigned BBSize = BB->size();
+  for (unsigned i = 0; i < TRI->getNumRegs(); ++i) {
+    KillIndices[i] = ~0u;
+    DefIndices[i] = BBSize;
+  }
 
   // Clear "do not change" set.
   KeepRegs.clear();
@@ -122,7 +125,7 @@ void CriticalAntiDepBreaker::Observe(MachineInstr *MI, unsigned Count,
   // may have been rescheduled and its lifetime may overlap with registers
   // in ways not reflected in our current liveness state. For each such
   // register, adjust the liveness state to be conservatively correct.
-  for (unsigned Reg = 0; Reg != TargetRegisterInfo::FirstVirtualRegister; ++Reg)
+  for (unsigned Reg = 0; Reg != TRI->getNumRegs(); ++Reg)
     if (DefIndices[Reg] < InsertPosIndex && DefIndices[Reg] >= Count) {
       assert(KillIndices[Reg] == ~0u && "Clobbered register is live!");
       // Mark this register to be non-renamable.
diff --git a/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp b/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
index 242cba5..297dd31 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
@@ -74,6 +74,9 @@ EnableFastISelOption("fast-isel", cl::Hidden,
 static cl::opt<bool> EnableSplitGEPGVN("split-gep-gvn", cl::Hidden,
     cl::desc("Split GEPs and run no-load GVN"));
 
+static cl::opt<bool> PreAllocTailDup("pre-regalloc-taildup", cl::Hidden,
+    cl::desc("Pre-register allocation tail duplication"));
+
 LLVMTargetMachine::LLVMTargetMachine(const Target &T,
                                      const std::string &TargetTriple)
   : TargetMachine(T) {
@@ -302,6 +305,13 @@ bool LLVMTargetMachine::addCommonCodeGenPasses(PassManagerBase &PM,
                    /* allowDoubleDefs= */ true);
   }
 
+  // Pre-ra tail duplication.
+  if (OptLevel != CodeGenOpt::None &&
+      !DisableTailDuplicate && PreAllocTailDup) {
+    PM.add(createTailDuplicatePass(true));
+    printAndVerify(PM, "After Pre-RegAlloc TailDuplicate");
+  }
+
   // Run pre-ra passes.
   if (addPreRegAlloc(PM, OptLevel))
     printAndVerify(PM, "After PreRegAlloc passes",
@@ -348,7 +358,7 @@ bool LLVMTargetMachine::addCommonCodeGenPasses(PassManagerBase &PM,
 
   // Tail duplication.
   if (OptLevel != CodeGenOpt::None && !DisableTailDuplicate) {
-    PM.add(createTailDuplicatePass());
+    PM.add(createTailDuplicatePass(false));
     printAndVerify(PM, "After TailDuplicate");
   }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveInterval.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveInterval.cpp
index 8d632cb..cc286aa 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveInterval.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveInterval.cpp
@@ -847,7 +847,7 @@ void LiveInterval::print(raw_ostream &OS, const TargetRegisterInfo *TRI) const {
       if (vni->isUnused()) {
         OS << "x";
       } else {
-        if (!vni->isDefAccurate())
+        if (!vni->isDefAccurate() && !vni->isPHIDef())
           OS << "?";
         else
           OS << vni->def;
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
index 4412c1b..8806439 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
@@ -149,44 +149,69 @@ void LiveIntervals::dumpInstrs() const {
   printInstrs(errs());
 }
 
-/// conflictsWithPhysRegDef - Returns true if the specified register
-/// is defined during the duration of the specified interval.
-bool LiveIntervals::conflictsWithPhysRegDef(const LiveInterval &li,
-                                            VirtRegMap &vrm, unsigned reg) {
-  for (LiveInterval::Ranges::const_iterator
-         I = li.ranges.begin(), E = li.ranges.end(); I != E; ++I) {
-    for (SlotIndex index = I->start.getBaseIndex(),
-           end = I->end.getPrevSlot().getBaseIndex().getNextIndex();
-           index != end;
-           index = index.getNextIndex()) {
-      // skip deleted instructions
-      while (index != end && !getInstructionFromIndex(index))
-        index = index.getNextIndex();
-      if (index == end) break;
+bool LiveIntervals::conflictsWithPhysReg(const LiveInterval &li,
+                                         VirtRegMap &vrm, unsigned reg) {
+  // We don't handle fancy stuff crossing basic block boundaries
+  if (li.ranges.size() != 1)
+    return true;
+  const LiveRange &range = li.ranges.front();
+  SlotIndex idx = range.start.getBaseIndex();
+  SlotIndex end = range.end.getPrevSlot().getBaseIndex().getNextIndex();
+
+  // Skip deleted instructions
+  MachineInstr *firstMI = getInstructionFromIndex(idx);
+  while (!firstMI && idx != end) {
+    idx = idx.getNextIndex();
+    firstMI = getInstructionFromIndex(idx);
+  }
+  if (!firstMI)
+    return false;
 
-      MachineInstr *MI = getInstructionFromIndex(index);
-      unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
-      if (tii_->isMoveInstr(*MI, SrcReg, DstReg, SrcSubReg, DstSubReg))
-        if (SrcReg == li.reg || DstReg == li.reg)
-          continue;
-      for (unsigned i = 0; i != MI->getNumOperands(); ++i) {
-        MachineOperand& mop = MI->getOperand(i);
-        if (!mop.isReg())
-          continue;
-        unsigned PhysReg = mop.getReg();
-        if (PhysReg == 0 || PhysReg == li.reg)
+  // Find last instruction in range
+  SlotIndex lastIdx = end.getPrevIndex();
+  MachineInstr *lastMI = getInstructionFromIndex(lastIdx);
+  while (!lastMI && lastIdx != idx) {
+    lastIdx = lastIdx.getPrevIndex();
+    lastMI = getInstructionFromIndex(lastIdx);
+  }
+  if (!lastMI)
+    return false;
+
+  // Range cannot cross basic block boundaries or terminators
+  MachineBasicBlock *MBB = firstMI->getParent();
+  if (MBB != lastMI->getParent() || lastMI->getDesc().isTerminator())
+    return true;
+
+  MachineBasicBlock::const_iterator E = lastMI;
+  ++E;
+  for (MachineBasicBlock::const_iterator I = firstMI; I != E; ++I) {
+    const MachineInstr &MI = *I;
+
+    // Allow copies to and from li.reg
+    unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
+    if (tii_->isMoveInstr(MI, SrcReg, DstReg, SrcSubReg, DstSubReg))
+      if (SrcReg == li.reg || DstReg == li.reg)
+        continue;
+
+    // Check for operands using reg
+    for (unsigned i = 0, e = MI.getNumOperands(); i != e;  ++i) {
+      const MachineOperand& mop = MI.getOperand(i);
+      if (!mop.isReg())
+        continue;
+      unsigned PhysReg = mop.getReg();
+      if (PhysReg == 0 || PhysReg == li.reg)
+        continue;
+      if (TargetRegisterInfo::isVirtualRegister(PhysReg)) {
+        if (!vrm.hasPhys(PhysReg))
           continue;
-        if (TargetRegisterInfo::isVirtualRegister(PhysReg)) {
-          if (!vrm.hasPhys(PhysReg))
-            continue;
-          PhysReg = vrm.getPhys(PhysReg);
-        }
-        if (PhysReg && tri_->regsOverlap(PhysReg, reg))
-          return true;
+        PhysReg = vrm.getPhys(PhysReg);
       }
+      if (PhysReg && tri_->regsOverlap(PhysReg, reg))
+        return true;
     }
   }
 
+  // No conflicts found.
   return false;
 }
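The scan above can be sketched as a standalone function. `Inst`, `overlaps`, and the plain integer register numbering are illustrative stand-ins, not LLVM types; `overlaps` plays the role of `TargetRegisterInfo::regsOverlap`:

```cpp
#include <vector>

// Hypothetical simplified model: an instruction is a list of physical
// register operands, plus a flag marking copies to/from the interval's
// own register (which the scan above deliberately skips).
struct Inst {
  bool isCopyOfLiReg;
  std::vector<unsigned> regs;
};

// Stand-in for TargetRegisterInfo::regsOverlap (exact match only).
static bool overlaps(unsigned a, unsigned b) { return a == b; }

// Return true if any instruction in the range references a register
// overlapping `physReg`, ignoring operands equal to 0 or `liReg`.
bool conflictsWithPhysReg(const std::vector<Inst> &range,
                          unsigned liReg, unsigned physReg) {
  for (const Inst &inst : range) {
    if (inst.isCopyOfLiReg)
      continue;                       // copies of li.reg are allowed
    for (unsigned r : inst.regs) {
      if (r == 0 || r == liReg)
        continue;
      if (overlaps(r, physReg))
        return true;                  // conflicting reference
    }
  }
  return false;                       // no conflicts found
}
```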
 
@@ -201,15 +226,9 @@ bool LiveIntervals::conflictsWithPhysRegRef(LiveInterval &li,
            end = I->end.getPrevSlot().getBaseIndex().getNextIndex();
            index != end;
            index = index.getNextIndex()) {
-      // Skip deleted instructions.
-      MachineInstr *MI = 0;
-      while (index != end) {
-        MI = getInstructionFromIndex(index);
-        if (MI)
-          break;
-        index = index.getNextIndex();
-      }
-      if (index == end) break;
+      MachineInstr *MI = getInstructionFromIndex(index);
+      if (!MI)
+        continue;               // skip deleted instructions
 
       if (JoinedCopies.count(MI))
         continue;
@@ -374,8 +393,6 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
       // Value#0 is now defined by the 2-addr instruction.
       OldValNo->def  = RedefIndex;
       OldValNo->setCopy(0);
-      if (MO.isEarlyClobber())
-        OldValNo->setHasRedefByEC(true);
       
       // Add the new live interval which replaces the range for the input copy.
       LiveRange LR(DefIndex, RedefIndex, ValNo);
@@ -411,7 +428,7 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
         interval.removeRange(Start, End);        
         assert(interval.ranges.size() == 1 &&
                "Newly discovered PHI interval has >1 ranges.");
-        MachineBasicBlock *killMBB = getMBBFromIndex(interval.endIndex());
+        MachineBasicBlock *killMBB = getMBBFromIndex(VNI->def);
         VNI->addKill(indexes_->getTerminatorGap(killMBB));
         VNI->setHasPHIKill(true);
         DEBUG({
@@ -422,7 +439,7 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
         // Replace the interval with one of a NEW value number.  Note that this
         // value number isn't actually defined by an instruction, weird huh? :)
         LiveRange LR(Start, End,
-                     interval.getNextValue(SlotIndex(getMBBStartIdx(mbb), true),
+                     interval.getNextValue(SlotIndex(getMBBStartIdx(Killer->getParent()), true),
                        0, false, VNInfoAllocator));
         LR.valno->setIsPHIDef(true);
         DEBUG(errs() << " replace range with " << LR);
@@ -513,8 +530,6 @@ void LiveIntervals::handlePhysicalRegisterDef(MachineBasicBlock *MBB,
         if (mi->isRegTiedToUseOperand(DefIdx)) {
           // Two-address instruction.
           end = baseIndex.getDefIndex();
-          assert(!mi->getOperand(DefIdx).isEarlyClobber() &&
-                 "Two address instruction is an early clobber?"); 
         } else {
           // Another instruction redefines the register before it is ever read.
           // Then the register is essentially dead at the instruction that defines
@@ -730,8 +745,16 @@ unsigned LiveIntervals::getVNInfoSourceReg(const VNInfo *VNI) const {
   if (VNI->getCopy()->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
     // If it's extracting out of a physical register, return the sub-register.
     unsigned Reg = VNI->getCopy()->getOperand(1).getReg();
-    if (TargetRegisterInfo::isPhysicalRegister(Reg))
+    if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
+      unsigned SrcSubReg = VNI->getCopy()->getOperand(2).getImm();
+      unsigned DstSubReg = VNI->getCopy()->getOperand(0).getSubReg();
+      if (SrcSubReg == DstSubReg)
+        // %reg1034:3<def> = EXTRACT_SUBREG %EDX, 3
+        // reg1034 can still be coalesced to EDX.
+        return Reg;
+      assert(DstSubReg == 0);
       Reg = tri_->getSubReg(Reg, VNI->getCopy()->getOperand(2).getImm());
+    }
     return Reg;
   } else if (VNI->getCopy()->getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
              VNI->getCopy()->getOpcode() == TargetInstrInfo::SUBREG_TO_REG)
@@ -1095,6 +1118,12 @@ rewriteInstructionForSpills(const LiveInterval &li, const VNInfo *VNI,
       NewVReg = mri_->createVirtualRegister(rc);
       vrm.grow();
       CreatedNewVReg = true;
+
+      // The new virtual register should get the same allocation hints as the
+      // old one.
+      std::pair<unsigned, unsigned> Hint = mri_->getRegAllocationHint(Reg);
+      if (Hint.first || Hint.second)
+        mri_->setRegAllocationHint(NewVReg, Hint.first, Hint.second);
     }
 
     if (!TryFold)
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp
index bfc2d08..3c88e37 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp
@@ -279,6 +279,43 @@ void LiveVariables::HandlePhysRegUse(unsigned Reg, MachineInstr *MI) {
     PhysRegUse[SubReg] =  MI;
 }
 
+/// FindLastRefOrPartRef - Return the last reference or partial reference of
+/// the specified register.
+MachineInstr *LiveVariables::FindLastRefOrPartRef(unsigned Reg) {
+  MachineInstr *LastDef = PhysRegDef[Reg];
+  MachineInstr *LastUse = PhysRegUse[Reg];
+  if (!LastDef && !LastUse)
+    return 0;
+
+  MachineInstr *LastRefOrPartRef = LastUse ? LastUse : LastDef;
+  unsigned LastRefOrPartRefDist = DistanceMap[LastRefOrPartRef];
+  MachineInstr *LastPartDef = 0;
+  unsigned LastPartDefDist = 0;
+  for (const unsigned *SubRegs = TRI->getSubRegisters(Reg);
+       unsigned SubReg = *SubRegs; ++SubRegs) {
+    MachineInstr *Def = PhysRegDef[SubReg];
+    if (Def && Def != LastDef) {
+      // There was a def of this sub-register in between. This is a partial
+      // def, keep track of the last one.
+      unsigned Dist = DistanceMap[Def];
+      if (Dist > LastPartDefDist) {
+        LastPartDefDist = Dist;
+        LastPartDef = Def;
+      }
+      continue;
+    }
+    if (MachineInstr *Use = PhysRegUse[SubReg]) {
+      unsigned Dist = DistanceMap[Use];
+      if (Dist > LastRefOrPartRefDist) {
+        LastRefOrPartRefDist = Dist;
+        LastRefOrPartRef = Use;
+      }
+    }
+  }
+
+  return LastRefOrPartRef;
+}
+
 bool LiveVariables::HandlePhysRegKill(unsigned Reg, MachineInstr *MI) {
   MachineInstr *LastDef = PhysRegDef[Reg];
   MachineInstr *LastUse = PhysRegUse[Reg];
@@ -373,7 +410,16 @@ bool LiveVariables::HandlePhysRegKill(unsigned Reg, MachineInstr *MI) {
       if (NeedDef)
         PhysRegDef[Reg]->addOperand(MachineOperand::CreateReg(SubReg,
                                                  true/*IsDef*/, true/*IsImp*/));
-      LastRefOrPartRef->addRegisterKilled(SubReg, TRI, true);
+      MachineInstr *LastSubRef = FindLastRefOrPartRef(SubReg);
+      if (LastSubRef)
+        LastSubRef->addRegisterKilled(SubReg, TRI, true);
+      else {
+        LastRefOrPartRef->addRegisterKilled(SubReg, TRI, true);
+        PhysRegUse[SubReg] = LastRefOrPartRef;
+        for (const unsigned *SSRegs = TRI->getSubRegisters(SubReg);
+             unsigned SSReg = *SSRegs; ++SSRegs)
+          PhysRegUse[SSReg] = LastRefOrPartRef;
+      }
       for (const unsigned *SS = TRI->getSubRegisters(SubReg); *SS; ++SS)
         PartUses.erase(*SS);
     }
@@ -674,6 +720,51 @@ bool LiveVariables::VarInfo::isLiveIn(const MachineBasicBlock &MBB,
   return findKill(&MBB);
 }
 
+bool LiveVariables::isLiveOut(unsigned Reg, const MachineBasicBlock &MBB) {
+  LiveVariables::VarInfo &VI = getVarInfo(Reg);
+
+  // Loop over all of the successors of the basic block, checking to see if
+  // the value is either live in the block, or if it is killed in the block.
+  std::vector<MachineBasicBlock*> OpSuccBlocks;
+  for (MachineBasicBlock::const_succ_iterator SI = MBB.succ_begin(),
+         E = MBB.succ_end(); SI != E; ++SI) {
+    MachineBasicBlock *SuccMBB = *SI;
+
+    // Is it alive in this successor?
+    unsigned SuccIdx = SuccMBB->getNumber();
+    if (VI.AliveBlocks.test(SuccIdx))
+      return true;
+    OpSuccBlocks.push_back(SuccMBB);
+  }
+
+  // Check to see if this value is live because there is a use in a successor
+  // that kills it.
+  switch (OpSuccBlocks.size()) {
+  case 1: {
+    MachineBasicBlock *SuccMBB = OpSuccBlocks[0];
+    for (unsigned i = 0, e = VI.Kills.size(); i != e; ++i)
+      if (VI.Kills[i]->getParent() == SuccMBB)
+        return true;
+    break;
+  }
+  case 2: {
+    MachineBasicBlock *SuccMBB1 = OpSuccBlocks[0], *SuccMBB2 = OpSuccBlocks[1];
+    for (unsigned i = 0, e = VI.Kills.size(); i != e; ++i)
+      if (VI.Kills[i]->getParent() == SuccMBB1 ||
+          VI.Kills[i]->getParent() == SuccMBB2)
+        return true;
+    break;
+  }
+  default:
+    std::sort(OpSuccBlocks.begin(), OpSuccBlocks.end());
+    for (unsigned i = 0, e = VI.Kills.size(); i != e; ++i)
+      if (std::binary_search(OpSuccBlocks.begin(), OpSuccBlocks.end(),
+                             VI.Kills[i]->getParent()))
+        return true;
+  }
+  return false;
+}
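The kill-site lookup strategy in isLiveOut above (linear scan for one or two successors, sort plus binary search otherwise) can be illustrated with a small standalone sketch; plain `int` block IDs stand in for MachineBasicBlock pointers, and the names are illustrative, not LLVM API:

```cpp
#include <algorithm>
#include <vector>

// Mirror of the kill-site lookup in isLiveOut above: with one or two
// successors a linear scan is cheapest; with more, sort the successor
// list once and binary-search it for each kill site.
bool killedInAnySuccessor(std::vector<int> succs,
                          const std::vector<int> &kills) {
  if (succs.size() <= 2) {
    for (int k : kills)
      for (int s : succs)
        if (k == s)
          return true;
    return false;
  }
  std::sort(succs.begin(), succs.end());
  for (int k : kills)
    if (std::binary_search(succs.begin(), succs.end(), k))
      return true;
  return false;
}
```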
+
 /// addNewBlock - Add a new basic block BB as an empty successor to DomBB. All
 /// variables that are live out of DomBB will be marked as passing live through
 /// BB.
diff --git a/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp b/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp
index 30636a8..80eb6cd 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp
@@ -312,7 +312,7 @@ bool LowerSubregsInstructionPass::runOnMachineFunction(MachineFunction &MF) {
        mbbi != mbbe; ++mbbi) {
     for (MachineBasicBlock::iterator mi = mbbi->begin(), me = mbbi->end();
          mi != me;) {
-      MachineBasicBlock::iterator nmi = next(mi);
+      MachineBasicBlock::iterator nmi = llvm::next(mi);
       MachineInstr *MI = mi;
       if (MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
         MadeChange |= LowerExtract(MI);
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
index e55e369..80b4b0f 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
@@ -13,15 +13,16 @@
 
 #include "llvm/CodeGen/MachineBasicBlock.h"
 #include "llvm/BasicBlock.h"
+#include "llvm/ADT/SmallSet.h"
+#include "llvm/Assembly/Writer.h"
 #include "llvm/CodeGen/MachineFunction.h"
-#include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetInstrDesc.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
+#include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Support/LeakDetector.h"
 #include "llvm/Support/raw_ostream.h"
-#include "llvm/Assembly/Writer.h"
 #include <algorithm>
 using namespace llvm;
 
@@ -290,7 +291,7 @@ void MachineBasicBlock::updateTerminator() {
     } else {
       // The block has a fallthrough conditional branch.
       MachineBasicBlock *MBBA = *succ_begin();
-      MachineBasicBlock *MBBB = *next(succ_begin());
+      MachineBasicBlock *MBBB = *llvm::next(succ_begin());
       if (MBBA == TBB) std::swap(MBBB, MBBA);
       if (isLayoutSuccessor(TBB)) {
         if (TII->ReverseBranchCondition(Cond)) {
@@ -359,15 +360,10 @@ bool MachineBasicBlock::isSuccessor(const MachineBasicBlock *MBB) const {
 
 bool MachineBasicBlock::isLayoutSuccessor(const MachineBasicBlock *MBB) const {
   MachineFunction::const_iterator I(this);
-  return next(I) == MachineFunction::const_iterator(MBB);
+  return llvm::next(I) == MachineFunction::const_iterator(MBB);
 }
 
 bool MachineBasicBlock::canFallThrough() {
-  MachineBasicBlock *TBB = 0, *FBB = 0;
-  SmallVector<MachineOperand, 4> Cond;
-  const TargetInstrInfo *TII = getParent()->getTarget().getInstrInfo();
-  bool BranchUnAnalyzable = TII->AnalyzeBranch(*this, TBB, FBB, Cond, true);
-
   MachineFunction::iterator Fallthrough = this;
   ++Fallthrough;
   // If FallthroughBlock is off the end of the function, it can't fall through.
@@ -378,16 +374,21 @@ bool MachineBasicBlock::canFallThrough() {
   if (!isSuccessor(Fallthrough))
     return false;
 
-  // If we couldn't analyze the branch, examine the last instruction.
-  // If the block doesn't end in a known control barrier, assume fallthrough
-  // is possible. The isPredicable check is needed because this code can be
-  // called during IfConversion, where an instruction which is normally a
-  // Barrier is predicated and thus no longer an actual control barrier. This
-  // is over-conservative though, because if an instruction isn't actually
-  // predicated we could still treat it like a barrier.
-  if (BranchUnAnalyzable)
+  // Analyze the branches, if any, at the end of the block.
+  MachineBasicBlock *TBB = 0, *FBB = 0;
+  SmallVector<MachineOperand, 4> Cond;
+  const TargetInstrInfo *TII = getParent()->getTarget().getInstrInfo();
+  if (TII->AnalyzeBranch(*this, TBB, FBB, Cond, true)) {
+    // If we couldn't analyze the branch, examine the last instruction.
+    // If the block doesn't end in a known control barrier, assume fallthrough
+    // is possible. The isPredicable check is needed because this code can be
+    // called during IfConversion, where an instruction which is normally a
+    // Barrier is predicated and thus no longer an actual control barrier. This
+    // is over-conservative though, because if an instruction isn't actually
+    // predicated we could still treat it like a barrier.
     return empty() || !back().getDesc().isBarrier() ||
            back().getDesc().isPredicable();
+  }
 
   // If there is no branch, control always falls through.
   if (TBB == 0) return true;
@@ -448,10 +449,28 @@ void MachineBasicBlock::ReplaceUsesOfBlockWith(MachineBasicBlock *Old,
   addSuccessor(New);
 }
 
+/// BranchesToLandingPad - Return true if the basic block is a landing pad or
+/// branches only to a landing pad; no instructions other than the
+/// unconditional branch are present along the way.
+bool
+MachineBasicBlock::BranchesToLandingPad(const MachineBasicBlock *MBB) const {
+  SmallSet<const MachineBasicBlock*, 32> Visited;
+  const MachineBasicBlock *CurMBB = MBB;
+
+  while (!CurMBB->isLandingPad()) {
+    if (CurMBB->succ_size() != 1) break;
+    if (!Visited.insert(CurMBB)) break;
+    CurMBB = *CurMBB->succ_begin();
+  }
+
+  return CurMBB->isLandingPad();
+}
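The walk above can be sketched over a toy CFG; the `std::map`-based successor graph and integer block IDs are illustrative stand-ins, not LLVM API:

```cpp
#include <map>
#include <set>
#include <vector>

// Follow single-successor edges until a landing pad is reached, the
// chain fans out, or a block repeats (a cycle) -- the same walk as
// BranchesToLandingPad above, over a toy integer-ID CFG.
bool reachesLandingPad(int bb,
                       const std::map<int, std::vector<int> > &succs,
                       const std::set<int> &landingPads) {
  std::set<int> visited;
  int cur = bb;
  while (!landingPads.count(cur)) {
    auto it = succs.find(cur);
    if (it == succs.end() || it->second.size() != 1)
      break;                          // more than one (or no) successor
    if (!visited.insert(cur).second)
      break;                          // already seen: a cycle, give up
    cur = it->second.front();
  }
  return landingPads.count(cur) != 0;
}
```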
+
 /// CorrectExtraCFGEdges - Various pieces of code can cause excess edges in the
 /// CFG to be inserted.  If we have proven that MBB can only branch to DestA and
 /// DestB, remove any other MBB successors from the CFG.  DestA and DestB can
 /// be null.
+/// 
 /// Besides DestA and DestB, retain other edges leading to LandingPads
 /// (currently there can be only one; we don't check or require that here).
 /// Note it is possible that DestA and/or DestB are LandingPads.
@@ -461,7 +480,8 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
   bool MadeChange = false;
   bool AddedFallThrough = false;
 
-  MachineFunction::iterator FallThru = next(MachineFunction::iterator(this));
+  MachineFunction::iterator FallThru =
+    llvm::next(MachineFunction::iterator(this));
   
   // If this block ends with a conditional branch that falls through to its
   // successor, set DestB as the successor.
@@ -480,16 +500,17 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
   }
   
   MachineBasicBlock::succ_iterator SI = succ_begin();
-  MachineBasicBlock *OrigDestA = DestA, *OrigDestB = DestB;
+  const MachineBasicBlock *OrigDestA = DestA, *OrigDestB = DestB;
   while (SI != succ_end()) {
-    if (*SI == DestA) {
+    const MachineBasicBlock *MBB = *SI;
+    if (MBB == DestA) {
       DestA = 0;
       ++SI;
-    } else if (*SI == DestB) {
+    } else if (MBB == DestB) {
       DestB = 0;
       ++SI;
-    } else if ((*SI)->isLandingPad() && 
-               *SI!=OrigDestA && *SI!=OrigDestB) {
+    } else if (MBB != OrigDestA && MBB != OrigDestB &&
+               BranchesToLandingPad(MBB)) {
       ++SI;
     } else {
       // Otherwise, this is a superfluous edge, remove it.
@@ -497,12 +518,14 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
       MadeChange = true;
     }
   }
+
   if (!AddedFallThrough) {
     assert(DestA == 0 && DestB == 0 &&
            "MachineCFG is missing edges!");
   } else if (isCond) {
     assert(DestA == 0 && "MachineCFG is missing edges!");
   }
+
   return MadeChange;
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp
index 81d1301..dd6fd7e 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp
@@ -328,7 +328,7 @@ void MachineFunction::print(raw_ostream &OS) const {
       if (I->second)
         OS << " in reg%" << I->second;
 
-      if (next(I) != E)
+      if (llvm::next(I) != E)
         OS << ", ";
     }
     OS << '\n';
@@ -342,7 +342,7 @@ void MachineFunction::print(raw_ostream &OS) const {
       else
         OS << "%physreg" << *I;
 
-      if (next(I) != E)
+      if (llvm::next(I) != E)
         OS << " ";
     }
     OS << '\n';
@@ -359,14 +359,16 @@ void MachineFunction::print(raw_ostream &OS) const {
 namespace llvm {
   template<>
   struct DOTGraphTraits<const MachineFunction*> : public DefaultDOTGraphTraits {
+
+  DOTGraphTraits(bool isSimple = false) : DefaultDOTGraphTraits(isSimple) {}
+
     static std::string getGraphName(const MachineFunction *F) {
       return "CFG for '" + F->getFunction()->getNameStr() + "' function";
     }
 
-    static std::string getNodeLabel(const MachineBasicBlock *Node,
-                                    const MachineFunction *Graph,
-                                    bool ShortNames) {
-      if (ShortNames && Node->getBasicBlock() &&
+    std::string getNodeLabel(const MachineBasicBlock *Node,
+                             const MachineFunction *Graph) {
+      if (isSimple() && Node->getBasicBlock() &&
           !Node->getBasicBlock()->getName().empty())
         return Node->getBasicBlock()->getNameStr() + ":";
 
@@ -374,7 +376,7 @@ namespace llvm {
       {
         raw_string_ostream OSS(OutStr);
         
-        if (ShortNames)
+        if (isSimple())
           OSS << Node->getNumber() << ':';
         else
           Node->print(OSS);
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
index f11026f..12b974d 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
@@ -1058,6 +1058,22 @@ bool MachineInstr::isInvariantLoad(AliasAnalysis *AA) const {
   return true;
 }
 
+/// isConstantValuePHI - If the specified instruction is a PHI that always
+/// merges together the same virtual register, return the register, otherwise
+/// return 0.
+unsigned MachineInstr::isConstantValuePHI() const {
+  if (getOpcode() != TargetInstrInfo::PHI)
+    return 0;
+  assert(getNumOperands() >= 3 &&
+         "It's illegal to have a PHI without source operands");
+
+  unsigned Reg = getOperand(1).getReg();
+  for (unsigned i = 3, e = getNumOperands(); i < e; i += 2)
+    if (getOperand(i).getReg() != Reg)
+      return 0;
+  return Reg;
+}
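The operand walk in isConstantValuePHI above can be sketched over a flattened PHI; the (register, predecessor-block) pair representation is an illustrative stand-in for the MachineOperand pairs:

```cpp
#include <utility>
#include <vector>

// A PHI flattened to (incoming-register, predecessor-block) pairs.
// Return the single merged register if every incoming value is the
// same, otherwise 0 -- the same check as isConstantValuePHI above.
unsigned constantValuePHI(
    const std::vector<std::pair<unsigned, int> > &incoming) {
  if (incoming.empty())
    return 0;
  unsigned Reg = incoming.front().first;
  for (unsigned i = 1; i < incoming.size(); ++i)
    if (incoming[i].first != Reg)
      return 0;                       // merges distinct registers
  return Reg;
}
```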
+
 void MachineInstr::dump() const {
   errs() << "  " << *this;
 }
@@ -1148,11 +1164,16 @@ void MachineInstr::print(raw_ostream &OS, const TargetMachine *TM) const {
     // TODO: print InlinedAtLoc information
 
     DebugLocTuple DLT = MF->getDebugLocTuple(debugLoc);
-    DICompileUnit CU(DLT.Scope);
+    DIScope Scope(DLT.Scope);
     OS << " dbg:";
-    if (!CU.isNull())
-      OS << CU.getDirectory() << '/' << CU.getFilename() << ":";
-    OS << DLT.Line << ":" << DLT.Col;
+    // Omit the directory, since it's usually long and uninteresting.
+    if (!Scope.isNull())
+      OS << Scope.getFilename();
+    else
+      OS << "<unknown>";
+    OS << ':' << DLT.Line;
+    if (DLT.Col != 0)
+      OS << ':' << DLT.Col;
   }
 
   OS << "\n";
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp
index db77d19..63f4f18 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineLoopInfo.cpp
@@ -62,11 +62,11 @@ MachineBasicBlock *MachineLoop::getBottomBlock() {
   MachineBasicBlock *BotMBB = getHeader();
   MachineFunction::iterator End = BotMBB->getParent()->end();
   if (BotMBB != prior(End)) {
-    MachineBasicBlock *NextMBB = next(MachineFunction::iterator(BotMBB));
+    MachineBasicBlock *NextMBB = llvm::next(MachineFunction::iterator(BotMBB));
     while (contains(NextMBB)) {
       BotMBB = NextMBB;
-      if (BotMBB == next(MachineFunction::iterator(BotMBB))) break;
-      NextMBB = next(MachineFunction::iterator(BotMBB));
+      if (BotMBB == llvm::next(MachineFunction::iterator(BotMBB))) break;
+      NextMBB = llvm::next(MachineFunction::iterator(BotMBB));
     }
   }
   return BotMBB;
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineSSAUpdater.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineSSAUpdater.cpp
new file mode 100644
index 0000000..292096f
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineSSAUpdater.cpp
@@ -0,0 +1,393 @@
+//===- MachineSSAUpdater.cpp - Unstructured SSA Update Tool ---------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the MachineSSAUpdater class.  It is based on the
+// SSAUpdater class in lib/Transforms/Utils.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/CodeGen/MachineSSAUpdater.h"
+#include "llvm/CodeGen/MachineInstr.h"
+#include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetMachine.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/ADT/DenseMap.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+typedef DenseMap<MachineBasicBlock*, unsigned> AvailableValsTy;
+typedef std::vector<std::pair<MachineBasicBlock*, unsigned> >
+                IncomingPredInfoTy;
+
+static AvailableValsTy &getAvailableVals(void *AV) {
+  return *static_cast<AvailableValsTy*>(AV);
+}
+
+static IncomingPredInfoTy &getIncomingPredInfo(void *IPI) {
+  return *static_cast<IncomingPredInfoTy*>(IPI);
+}
+
+
+MachineSSAUpdater::MachineSSAUpdater(MachineFunction &MF,
+                                     SmallVectorImpl<MachineInstr*> *NewPHI)
+  : AV(0), IPI(0), InsertedPHIs(NewPHI) {
+  TII = MF.getTarget().getInstrInfo();
+  MRI = &MF.getRegInfo();
+}
+
+MachineSSAUpdater::~MachineSSAUpdater() {
+  delete &getAvailableVals(AV);
+  delete &getIncomingPredInfo(IPI);
+}
+
+/// Initialize - Reset this object to get ready for a new set of SSA
+/// updates.  V is the register being rewritten; new PHI nodes are created
+/// in its register class.
+void MachineSSAUpdater::Initialize(unsigned V) {
+  if (AV == 0)
+    AV = new AvailableValsTy();
+  else
+    getAvailableVals(AV).clear();
+
+  if (IPI == 0)
+    IPI = new IncomingPredInfoTy();
+  else
+    getIncomingPredInfo(IPI).clear();
+
+  VR = V;
+  VRC = MRI->getRegClass(VR);
+}
+
+/// HasValueForBlock - Return true if the MachineSSAUpdater already has a
+/// value for the specified block.
+bool MachineSSAUpdater::HasValueForBlock(MachineBasicBlock *BB) const {
+  return getAvailableVals(AV).count(BB);
+}
+
+/// AddAvailableValue - Indicate that a rewritten value is available in the
+/// specified block with the specified value.
+void MachineSSAUpdater::AddAvailableValue(MachineBasicBlock *BB, unsigned V) {
+  getAvailableVals(AV)[BB] = V;
+}
+
+/// GetValueAtEndOfBlock - Construct SSA form, materializing a value that is
+/// live at the end of the specified block.
+unsigned MachineSSAUpdater::GetValueAtEndOfBlock(MachineBasicBlock *BB) {
+  return GetValueAtEndOfBlockInternal(BB);
+}
+
+static
+unsigned LookForIdenticalPHI(MachineBasicBlock *BB,
+          SmallVector<std::pair<MachineBasicBlock*, unsigned>, 8> &PredValues) {
+  if (BB->empty())
+    return 0;
+
+  MachineBasicBlock::iterator I = BB->front();
+  if (I->getOpcode() != TargetInstrInfo::PHI)
+    return 0;
+
+  AvailableValsTy AVals;
+  for (unsigned i = 0, e = PredValues.size(); i != e; ++i)
+    AVals[PredValues[i].first] = PredValues[i].second;
+  while (I != BB->end() && I->getOpcode() == TargetInstrInfo::PHI) {
+    bool Same = true;
+    for (unsigned i = 1, e = I->getNumOperands(); i != e; i += 2) {
+      unsigned SrcReg = I->getOperand(i).getReg();
+      MachineBasicBlock *SrcBB = I->getOperand(i+1).getMBB();
+      if (AVals[SrcBB] != SrcReg) {
+        Same = false;
+        break;
+      }
+    }
+    if (Same)
+      return I->getOperand(0).getReg();
+    ++I;
+  }
+  return 0;
+}
+
+/// InsertNewDef - Insert an empty PHI or IMPLICIT_DEF instruction which
+/// defines a value of the given register class at the start of the specified
+/// basic block. It returns the virtual register defined by the instruction.
+static
+MachineInstr *InsertNewDef(unsigned Opcode,
+                           MachineBasicBlock *BB, MachineBasicBlock::iterator I,
+                           const TargetRegisterClass *RC,
+                           MachineRegisterInfo *MRI, const TargetInstrInfo *TII) {
+  unsigned NewVR = MRI->createVirtualRegister(RC);
+  return BuildMI(*BB, I, DebugLoc::getUnknownLoc(), TII->get(Opcode), NewVR);
+}
+
+/// GetValueInMiddleOfBlock - Construct SSA form, materializing a value that
+/// is live in the middle of the specified block.
+///
+/// GetValueInMiddleOfBlock is the same as GetValueAtEndOfBlock except in one
+/// important case: if there is a definition of the rewritten value after the
+/// 'use' in BB.  Consider code like this:
+///
+///      X1 = ...
+///   SomeBB:
+///      use(X)
+///      X2 = ...
+///      br Cond, SomeBB, OutBB
+///
+/// In this case, there are two values (X1 and X2) added to the AvailableVals
+/// set by the client of the rewriter, and those values are both live out of
+/// their respective blocks.  However, the use of X happens in the *middle* of
+/// a block.  Because of this, we need to insert a new PHI node in SomeBB to
+/// merge the appropriate values, and this value isn't live out of the block.
+///
+unsigned MachineSSAUpdater::GetValueInMiddleOfBlock(MachineBasicBlock *BB) {
+  // If there is no definition of the renamed variable in this block, just use
+  // GetValueAtEndOfBlock to do our work.
+  if (!getAvailableVals(AV).count(BB))
+    return GetValueAtEndOfBlockInternal(BB);
+
+  // If there are no predecessors, just return undef.
+  if (BB->pred_empty()) {
+    // Insert an implicit_def to represent an undef value.
+    MachineInstr *NewDef = InsertNewDef(TargetInstrInfo::IMPLICIT_DEF,
+                                        BB, BB->getFirstTerminator(),
+                                        VRC, MRI, TII);
+    return NewDef->getOperand(0).getReg();
+  }
+
+  // Otherwise, we have the hard case.  Get the live-in values for each
+  // predecessor.
+  SmallVector<std::pair<MachineBasicBlock*, unsigned>, 8> PredValues;
+  unsigned SingularValue = 0;
+
+  bool isFirstPred = true;
+  for (MachineBasicBlock::pred_iterator PI = BB->pred_begin(),
+         E = BB->pred_end(); PI != E; ++PI) {
+    MachineBasicBlock *PredBB = *PI;
+    unsigned PredVal = GetValueAtEndOfBlockInternal(PredBB);
+    PredValues.push_back(std::make_pair(PredBB, PredVal));
+
+    // Compute SingularValue.
+    if (isFirstPred) {
+      SingularValue = PredVal;
+      isFirstPred = false;
+    } else if (PredVal != SingularValue)
+      SingularValue = 0;
+  }
+
+  // If all the merged values are the same, just use it.
+  if (SingularValue != 0)
+    return SingularValue;
+
+  // If an identical PHI is already in BB, just reuse it.
+  unsigned DupPHI = LookForIdenticalPHI(BB, PredValues);
+  if (DupPHI)
+    return DupPHI;
+
+  // Otherwise, we do need a PHI: insert one now.
+  MachineBasicBlock::iterator Loc = BB->empty() ? BB->end() : BB->front();
+  MachineInstr *InsertedPHI = InsertNewDef(TargetInstrInfo::PHI, BB,
+                                           Loc, VRC, MRI, TII);
+
+  // Fill in all the predecessors of the PHI.
+  MachineInstrBuilder MIB(InsertedPHI);
+  for (unsigned i = 0, e = PredValues.size(); i != e; ++i)
+    MIB.addReg(PredValues[i].second).addMBB(PredValues[i].first);
+
+  // See if the PHI node can be merged to a single value.  This can happen in
+  // loop cases when we get a PHI of itself and one other value.
+  if (unsigned ConstVal = InsertedPHI->isConstantValuePHI()) {
+    InsertedPHI->eraseFromParent();
+    return ConstVal;
+  }
+
+  // If the client wants to know about all new instructions, tell it.
+  if (InsertedPHIs) InsertedPHIs->push_back(InsertedPHI);
+
+  DEBUG(errs() << "  Inserted PHI: " << *InsertedPHI << "\n");
+  return InsertedPHI->getOperand(0).getReg();
+}
+
+static
+MachineBasicBlock *findCorrespondingPred(const MachineInstr *MI,
+                                         MachineOperand *U) {
+  for (unsigned i = 1, e = MI->getNumOperands(); i != e; i += 2) {
+    if (&MI->getOperand(i) == U)
+      return MI->getOperand(i+1).getMBB();
+  }
+
+  llvm_unreachable("MachineOperand::getParent() failure?");
+  return 0;
+}
+
+/// RewriteUse - Rewrite a use of the symbolic value.  This handles PHI nodes,
+/// which use their value in the corresponding predecessor.
+void MachineSSAUpdater::RewriteUse(MachineOperand &U) {
+  MachineInstr *UseMI = U.getParent();
+  unsigned NewVR = 0;
+  if (UseMI->getOpcode() == TargetInstrInfo::PHI) {
+    MachineBasicBlock *SourceBB = findCorrespondingPred(UseMI, &U);
+    NewVR = GetValueAtEndOfBlockInternal(SourceBB);
+  } else {
+    NewVR = GetValueInMiddleOfBlock(UseMI->getParent());
+  }
+
+  U.setReg(NewVR);
+}
+
+void MachineSSAUpdater::ReplaceRegWith(unsigned OldReg, unsigned NewReg) {
+  MRI->replaceRegWith(OldReg, NewReg);
+
+  AvailableValsTy &AvailableVals = getAvailableVals(AV);
+  for (DenseMap<MachineBasicBlock*, unsigned>::iterator
+         I = AvailableVals.begin(), E = AvailableVals.end(); I != E; ++I)
+    if (I->second == OldReg)
+      I->second = NewReg;
+}
+
+/// GetValueAtEndOfBlockInternal - Check to see if AvailableVals has an entry
+/// for the specified BB and if so, return it.  If not, construct SSA form by
+/// walking predecessors inserting PHI nodes as needed until we get to a block
+/// where the value is available.
+///
+unsigned MachineSSAUpdater::GetValueAtEndOfBlockInternal(MachineBasicBlock *BB){
+  AvailableValsTy &AvailableVals = getAvailableVals(AV);
+
+  // Query AvailableVals by doing an insertion of null.
+  std::pair<AvailableValsTy::iterator, bool> InsertRes =
+    AvailableVals.insert(std::make_pair(BB, 0));
+
+  // Handle the case when the insertion fails because we have already seen BB.
+  if (!InsertRes.second) {
+    // If the insertion failed, there are two cases.  The first case is that the
+    // value is already available for the specified block.  If we get this, just
+    // return the value.
+    if (InsertRes.first->second != 0)
+      return InsertRes.first->second;
+
+    // Otherwise, if the value we find is null, then the value is not
+    // known but it is being computed elsewhere in our recursion.  This means
+    // that we have a cycle.  Handle this by inserting a PHI node and returning
+    // it.  When we get back to the first instance of the recursion we will fill
+    // in the PHI node.
+    MachineBasicBlock::iterator Loc = BB->empty() ? BB->end() : BB->front();
+    MachineInstr *NewPHI = InsertNewDef(TargetInstrInfo::PHI, BB, Loc,
+                                        VRC, MRI,TII);
+    unsigned NewVR = NewPHI->getOperand(0).getReg();
+    InsertRes.first->second = NewVR;
+    return NewVR;
+  }
+
+  // If there are no predecessors, then we must have found an unreachable
+  // block; just return 'undef'.  Since there are no predecessors, InsertRes
+  // cannot have been invalidated.
+  if (BB->pred_empty()) {
+    // Insert an implicit_def to represent an undef value.
+    MachineInstr *NewDef = InsertNewDef(TargetInstrInfo::IMPLICIT_DEF,
+                                        BB, BB->getFirstTerminator(),
+                                        VRC, MRI, TII);
+    return InsertRes.first->second = NewDef->getOperand(0).getReg();
+  }
+
+  // Okay, the value isn't in the map and we just inserted a null in the entry
+  // to indicate that we're processing the block.  Since we have no idea what
+  // value is in this block, we have to recurse through our predecessors.
+  //
+  // While we're walking our predecessors, we keep track of them in a vector,
+  // then insert a PHI node at the end if we actually need one.  A SmallVector
+  // here would take a lot of stack space at every level of the recursion, so
+  // we use IncomingPredInfo as an explicit stack instead.
+  IncomingPredInfoTy &IncomingPredInfo = getIncomingPredInfo(IPI);
+  unsigned FirstPredInfoEntry = IncomingPredInfo.size();
+
+  // As we're walking the predecessors, keep track of whether they are all
+  // producing the same value.  If so, this value will capture it; if not, it
+  // will get reset to null.  The no-predecessor case is handled explicitly
+  // above.
+  unsigned SingularValue = 0;
+  bool isFirstPred = true;
+  for (MachineBasicBlock::pred_iterator PI = BB->pred_begin(),
+         E = BB->pred_end(); PI != E; ++PI) {
+    MachineBasicBlock *PredBB = *PI;
+    unsigned PredVal = GetValueAtEndOfBlockInternal(PredBB);
+    IncomingPredInfo.push_back(std::make_pair(PredBB, PredVal));
+
+    // Compute SingularValue.
+    if (isFirstPred) {
+      SingularValue = PredVal;
+      isFirstPred = false;
+    } else if (PredVal != SingularValue)
+      SingularValue = 0;
+  }
+
+  // Look up BB's entry in AvailableVals.  'InsertRes' may be invalidated.  If
+  // this block is involved in a loop, a no-entry PHI node will have been
+  // inserted as InsertedVal.  Otherwise, we'll still have the null we inserted
+  // above.
+  unsigned &InsertedVal = AvailableVals[BB];
+
+  // If all the predecessor values are the same then we don't need to insert a
+  // PHI.  This is the simple and common case.
+  if (SingularValue) {
+    // If a PHI node got inserted, replace it with the singular value and
+    // delete it.
+    if (InsertedVal) {
+      MachineInstr *OldVal = MRI->getVRegDef(InsertedVal);
+      // Be careful about dead loops.  These RAUW's also update InsertedVal.
+      assert(InsertedVal != SingularValue && "Dead loop?");
+      ReplaceRegWith(InsertedVal, SingularValue);
+      OldVal->eraseFromParent();
+    }
+
+    InsertedVal = SingularValue;
+
+    // Drop the entries we added in IncomingPredInfo to restore the stack.
+    IncomingPredInfo.erase(IncomingPredInfo.begin()+FirstPredInfoEntry,
+                           IncomingPredInfo.end());
+    return InsertedVal;
+  }
+
+
+  // Otherwise, we do need a PHI: insert one now if we don't already have one.
+  MachineInstr *InsertedPHI;
+  if (InsertedVal == 0) {
+    MachineBasicBlock::iterator Loc = BB->empty() ? BB->end() : BB->front();
+    InsertedPHI = InsertNewDef(TargetInstrInfo::PHI, BB, Loc,
+                               VRC, MRI, TII);
+    InsertedVal = InsertedPHI->getOperand(0).getReg();
+  } else {
+    InsertedPHI = MRI->getVRegDef(InsertedVal);
+  }
+
+  // Fill in all the predecessors of the PHI.
+  MachineInstrBuilder MIB(InsertedPHI);
+  for (IncomingPredInfoTy::iterator I =
+         IncomingPredInfo.begin()+FirstPredInfoEntry,
+         E = IncomingPredInfo.end(); I != E; ++I)
+    MIB.addReg(I->second).addMBB(I->first);
+
+  // Drop the entries we added in IncomingPredInfo to restore the stack.
+  IncomingPredInfo.erase(IncomingPredInfo.begin()+FirstPredInfoEntry,
+                         IncomingPredInfo.end());
+
+  // See if the PHI node can be merged to a single value.  This can happen in
+  // loop cases when we get a PHI of itself and one other value.
+  if (unsigned ConstVal = InsertedPHI->isConstantValuePHI()) {
+    MRI->replaceRegWith(InsertedVal, ConstVal);
+    InsertedPHI->eraseFromParent();
+    InsertedVal = ConstVal;
+  } else {
+    DEBUG(errs() << "  Inserted PHI: " << *InsertedPHI << "\n");
+
+    // If the client wants to know about all new instructions, tell it.
+    if (InsertedPHIs) InsertedPHIs->push_back(InsertedPHI);
+  }
+
+  return InsertedVal;
+}
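The recursion in GetValueAtEndOfBlockInternal above hinges on one trick: a null placeholder is inserted into the memo map *before* recursing, so that revisiting a block along a CFG cycle hands back a placeholder PHI instead of looping forever. The toy below sketches that pattern only; the block ids, value ids, and `ToyCFG` type are invented for illustration and are not part of the LLVM API (note also that `std::map` iterators stay valid across inserts, unlike the DenseMap the real code worries about).

```cpp
#include <cassert>
#include <map>
#include <vector>

// Toy model of the insert-null-then-recurse memoization pattern used by
// MachineSSAUpdater::GetValueAtEndOfBlockInternal. Everything here is a
// simplified invention: blocks and values are plain ints, and ids >= 100
// stand in for placeholder "PHI" values created on cycles.
struct ToyCFG {
    std::map<int, std::vector<int>> preds;  // block -> predecessor blocks
    std::map<int, int> defs;                // block -> value defined there
    std::map<int, int> avail;               // memo: block -> value at end
    int nextPhi = 100;                      // ids >= 100 denote PHIs

    int valueAtEnd(int bb) {
        auto ins = avail.insert({bb, 0});
        if (!ins.second) {
            if (ins.first->second != 0) return ins.first->second; // memoized
            return ins.first->second = nextPhi++; // cycle: placeholder PHI
        }
        if (defs.count(bb)) return avail[bb] = defs[bb];
        if (preds[bb].empty()) return avail[bb] = -1; // unreachable: "undef"
        int single = 0;
        bool first = true;
        for (int p : preds[bb]) {
            int v = valueAtEnd(p);
            if (first) { single = v; first = false; }
            else if (v != single) single = 0;
        }
        int &slot = avail[bb];            // re-lookup: a PHI may exist now
        if (single && slot == 0) return slot = single;
        if (slot == 0) slot = nextPhi++;  // differing preds: need a real PHI
        return slot;
    }
};
```

A diamond CFG whose predecessors all yield the same value needs no PHI, while a self-loop forces a placeholder, mirroring the two branches of the real algorithm.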
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
index d9f4c99..917d053 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -376,15 +376,6 @@ MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB) {
         report("MBB doesn't fall through but is empty!", MBB);
       }
     }
-    if (TII->BlockHasNoFallThrough(*MBB)) {
-      if (MBB->empty()) {
-        report("TargetInstrInfo says the block has no fall through, but the "
-               "block is empty!", MBB);
-      } else if (!MBB->back().getDesc().isBarrier()) {
-        report("TargetInstrInfo says the block has no fall through, but the "
-               "block does not end in a barrier!", MBB);
-      }
-    }
   } else {
     // Block is last in function.
     if (MBB->empty()) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/MaxStackAlignment.cpp b/libclamav/c++/llvm/lib/CodeGen/MaxStackAlignment.cpp
new file mode 100644
index 0000000..d327cfa
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/MaxStackAlignment.cpp
@@ -0,0 +1,70 @@
+//===-- MaxStackAlignment.cpp - Compute the required stack alignment -----===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass looks for vector register usage and aligned local objects to
+// calculate the maximum required alignment for a function. This is used by
+// targets which support it to determine if dynamic stack realignment is
+// necessary.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/MachineFrameInfo.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/Passes.h"
+
+using namespace llvm;
+
+namespace {
+  struct MaximalStackAlignmentCalculator : public MachineFunctionPass {
+    static char ID;
+    MaximalStackAlignmentCalculator() : MachineFunctionPass(&ID) {}
+
+    virtual bool runOnMachineFunction(MachineFunction &MF) {
+      MachineFrameInfo *FFI = MF.getFrameInfo();
+      MachineRegisterInfo &RI = MF.getRegInfo();
+
+      // Calculate max stack alignment of all already allocated stack objects.
+      FFI->calculateMaxStackAlignment();
+      unsigned MaxAlign = FFI->getMaxAlignment();
+
+      // Be over-conservative: scan over all vreg defs and find whether vector
+      // registers are used. If so, there is a chance that vector registers
+      // will be spilled and thus the stack needs to be aligned properly.
+      // FIXME: It would be better to only do this if a spill actually
+      // happens rather than conservatively aligning the stack regardless.
+      for (unsigned RegNum = TargetRegisterInfo::FirstVirtualRegister;
+           RegNum < RI.getLastVirtReg(); ++RegNum)
+        MaxAlign = std::max(MaxAlign, RI.getRegClass(RegNum)->getAlignment());
+
+      if (FFI->getMaxAlignment() == MaxAlign)
+        return false;
+
+      FFI->setMaxAlignment(MaxAlign);
+      return true;
+    }
+
+    virtual const char *getPassName() const {
+      return "Stack Alignment Requirements Auto-Detector";
+    }
+
+    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
+      AU.setPreservesCFG();
+      MachineFunctionPass::getAnalysisUsage(AU);
+    }
+  };
+
+  char MaximalStackAlignmentCalculator::ID = 0;
+}
+
+FunctionPass*
+llvm::createMaxStackAlignmentCalculatorPass() {
+  return new MaximalStackAlignmentCalculator();
+}
+
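The new pass above boils down to one max-reduction over two sources of alignment constraints. The sketch below captures just that arithmetic under invented inputs; it does not use the real MachineFrameInfo or TargetRegisterInfo interfaces.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the pass's core computation: the required stack alignment is
// the maximum over the alignments of already-allocated stack objects and
// the spill alignments of register classes used by virtual registers.
// Like the pass's FIXME notes, this is over-conservative: it assumes any
// vector register class in use might be spilled. All inputs are invented.
unsigned computeMaxStackAlign(const std::vector<unsigned> &objectAligns,
                              const std::vector<unsigned> &vregClassAligns) {
    unsigned maxAlign = 1;
    for (unsigned a : objectAligns)
        maxAlign = std::max(maxAlign, a);
    for (unsigned a : vregClassAligns)
        maxAlign = std::max(maxAlign, a);
    return maxAlign;
}
```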
diff --git a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
index 2e30cc6..c62d179 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
@@ -287,7 +287,7 @@ void llvm::PHIElimination::LowerAtomicPHINode(
 
     // Okay, if we now know that the value is not live out of the block, we can
     // add a kill marker in this block saying that it kills the incoming value!
-    if (!ValueIsUsed && !isLiveOut(SrcReg, opBlock, *LV)) {
+    if (!ValueIsUsed && !LV->isLiveOut(SrcReg, opBlock)) {
       // In our final twist, we have to decide which instruction kills the
       // register.  In most cases this is the copy, however, the first
       // terminator instruction at the end of the block may also use the value.
@@ -301,8 +301,8 @@ void llvm::PHIElimination::LowerAtomicPHINode(
 
         // Check that no other terminators use values.
 #ifndef NDEBUG
-        for (MachineBasicBlock::iterator TI = next(Term); TI != opBlock.end();
-             ++TI) {
+        for (MachineBasicBlock::iterator TI = llvm::next(Term);
+             TI != opBlock.end(); ++TI) {
           assert(!TI->readsRegister(SrcReg) &&
                 "Terminator instructions cannot use virtual registers unless "
                  "they are the first terminator in a block!");
@@ -353,59 +353,13 @@ bool llvm::PHIElimination::SplitPHIEdges(MachineFunction &MF,
       // We break edges when registers are live out from the predecessor block
       // (not considering PHI nodes). If the register is live in to this block
       // anyway, we would gain nothing from splitting.
-      if (!LV.isLiveIn(Reg, MBB) && isLiveOut(Reg, *PreMBB, LV))
+      if (!LV.isLiveIn(Reg, MBB) && LV.isLiveOut(Reg, *PreMBB))
         SplitCriticalEdge(PreMBB, &MBB);
     }
   }
   return true;
 }
 
-bool llvm::PHIElimination::isLiveOut(unsigned Reg, const MachineBasicBlock &MBB,
-                                     LiveVariables &LV) {
-  LiveVariables::VarInfo &VI = LV.getVarInfo(Reg);
-
-  // Loop over all of the successors of the basic block, checking to see if
-  // the value is either live in the block, or if it is killed in the block.
-  std::vector<MachineBasicBlock*> OpSuccBlocks;
-  for (MachineBasicBlock::const_succ_iterator SI = MBB.succ_begin(),
-         E = MBB.succ_end(); SI != E; ++SI) {
-    MachineBasicBlock *SuccMBB = *SI;
-
-    // Is it alive in this successor?
-    unsigned SuccIdx = SuccMBB->getNumber();
-    if (VI.AliveBlocks.test(SuccIdx))
-      return true;
-    OpSuccBlocks.push_back(SuccMBB);
-  }
-
-  // Check to see if this value is live because there is a use in a successor
-  // that kills it.
-  switch (OpSuccBlocks.size()) {
-  case 1: {
-    MachineBasicBlock *SuccMBB = OpSuccBlocks[0];
-    for (unsigned i = 0, e = VI.Kills.size(); i != e; ++i)
-      if (VI.Kills[i]->getParent() == SuccMBB)
-        return true;
-    break;
-  }
-  case 2: {
-    MachineBasicBlock *SuccMBB1 = OpSuccBlocks[0], *SuccMBB2 = OpSuccBlocks[1];
-    for (unsigned i = 0, e = VI.Kills.size(); i != e; ++i)
-      if (VI.Kills[i]->getParent() == SuccMBB1 ||
-          VI.Kills[i]->getParent() == SuccMBB2)
-        return true;
-    break;
-  }
-  default:
-    std::sort(OpSuccBlocks.begin(), OpSuccBlocks.end());
-    for (unsigned i = 0, e = VI.Kills.size(); i != e; ++i)
-      if (std::binary_search(OpSuccBlocks.begin(), OpSuccBlocks.end(),
-                             VI.Kills[i]->getParent()))
-        return true;
-  }
-  return false;
-}
-
 MachineBasicBlock *PHIElimination::SplitCriticalEdge(MachineBasicBlock *A,
                                                      MachineBasicBlock *B) {
   assert(A && B && "Missing MBB end point");
@@ -423,7 +377,7 @@ MachineBasicBlock *PHIElimination::SplitCriticalEdge(MachineBasicBlock *A,
   ++NumSplits;
 
   MachineBasicBlock *NMBB = MF->CreateMachineBasicBlock();
-  MF->insert(next(MachineFunction::iterator(A)), NMBB);
+  MF->insert(llvm::next(MachineFunction::iterator(A)), NMBB);
   DEBUG(errs() << "PHIElimination splitting critical edge:"
         " BB#" << A->getNumber()
         << " -- BB#" << NMBB->getNumber()
diff --git a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
index f5872cb..b0b71ce 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
@@ -93,12 +93,6 @@ namespace llvm {
     bool SplitPHIEdges(MachineFunction &MF, MachineBasicBlock &MBB,
                        LiveVariables &LV);
 
-    /// isLiveOut - Determine if Reg is live out from MBB, when not
-    /// considering PHI nodes. This means that Reg is either killed by
-    /// a successor block or passed through one.
-    bool isLiveOut(unsigned Reg, const MachineBasicBlock &MBB,
-                   LiveVariables &LV);
-
     /// SplitCriticalEdge - Split a critical edge from A to B by
     /// inserting a new MBB. Update branches in A and PHI instructions
     /// in B. Return the new block.
diff --git a/libclamav/c++/llvm/lib/CodeGen/PostRASchedulerList.cpp b/libclamav/c++/llvm/lib/CodeGen/PostRASchedulerList.cpp
index 9101fce..79be295 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PostRASchedulerList.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PostRASchedulerList.cpp
@@ -373,7 +373,8 @@ void SchedulePostRATDList::FinishBlock() {
 ///
 void SchedulePostRATDList::StartBlockForKills(MachineBasicBlock *BB) {
   // Initialize the indices to indicate that no registers are live.
-  std::fill(KillIndices, array_endof(KillIndices), ~0u);
+  for (unsigned i = 0; i < TRI->getNumRegs(); ++i)
+    KillIndices[i] = ~0u;
 
   // Determine the live-out physregs for this block.
   if (!BB->empty() && BB->back().getDesc().isReturn()) {
@@ -510,12 +511,9 @@ void SchedulePostRATDList::FixupKills(MachineBasicBlock *MBB) {
       }
       
       if (MO.isKill() != kill) {
-        bool removed = ToggleKillFlag(MI, MO);
-        if (removed) {
-          DEBUG(errs() << "Fixed <removed> in ");
-        } else {
-          DEBUG(errs() << "Fixed " << MO << " in ");
-        }
+        DEBUG(errs() << "Fixing " << MO << " in ");
+        // Warning: ToggleKillFlag may invalidate MO.
+        ToggleKillFlag(MI, MO);
         DEBUG(MI->dump());
       }
       
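The hunk above moves the DEBUG print ahead of the call precisely because ToggleKillFlag may invalidate the `MO` reference. The same hazard applies to any reference into a container that a later call may mutate; this toy (the function name and values are invented) shows the safe ordering.

```cpp
#include <cassert>
#include <vector>

// Read through a reference *before* performing a mutation that may
// invalidate it -- the same ordering fix the PostRAScheduler hunk makes.
int useBeforeMutate(std::vector<int> &v) {
    int &ref = v[0];
    int saved = ref;   // use the reference first...
    v.push_back(99);   // ...then mutate: push_back may reallocate, leaving
                       // `ref` dangling from this point onward.
    return saved;      // safe: `ref` is never touched again
}
```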
diff --git a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
index 8f62345..afd7b88 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
@@ -876,7 +876,7 @@ bool PreAllocSplitting::Rematerialize(unsigned VReg, VNInfo* ValNo,
   if (!ValNo->isDefAccurate() || DefMI->getParent() == BarrierMBB)
     KillPt = findSpillPoint(BarrierMBB, Barrier, NULL, RefsInMBB);
   else
-    KillPt = next(MachineBasicBlock::iterator(DefMI));
+    KillPt = llvm::next(MachineBasicBlock::iterator(DefMI));
   
   if (KillPt == DefMI->getParent()->end())
     return false;
@@ -1118,7 +1118,7 @@ bool PreAllocSplitting::SplitRegLiveInterval(LiveInterval *LI) {
           return false; // No gap to insert spill.
         }
       } else {
-        SpillPt = next(MachineBasicBlock::iterator(DefMI));
+        SpillPt = llvm::next(MachineBasicBlock::iterator(DefMI));
         if (SpillPt == DefMBB->end()) {
           DEBUG(errs() << "FAILED (could not find a suitable spill point).\n");
           return false; // No gap to insert spill.
diff --git a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
index 8905f75..e94247f 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
@@ -136,9 +136,10 @@ void PEI::getAnalysisUsage(AnalysisUsage &AU) const {
 /// pseudo instructions.
 void PEI::calculateCallsInformation(MachineFunction &Fn) {
   const TargetRegisterInfo *RegInfo = Fn.getTarget().getRegisterInfo();
+  MachineFrameInfo *FFI = Fn.getFrameInfo();
 
   unsigned MaxCallFrameSize = 0;
-  bool HasCalls = false;
+  bool HasCalls = FFI->hasCalls();
 
   // Get the function call frame set-up and tear-down instruction opcode
   int FrameSetupOpcode   = RegInfo->getCallFrameSetupOpcode();
@@ -166,7 +167,6 @@ void PEI::calculateCallsInformation(MachineFunction &Fn) {
         HasCalls = true;
       }
 
-  MachineFrameInfo *FFI = Fn.getFrameInfo();
   FFI->setHasCalls(HasCalls);
   FFI->setMaxCallFrameSize(MaxCallFrameSize);
 
@@ -674,7 +674,7 @@ void PEI::replaceFrameIndices(MachineFunction &Fn) {
         if (PrevI == BB->end())
           I = BB->begin();     // The replaced instr was the first in the block.
         else
-          I = next(PrevI);
+          I = llvm::next(PrevI);
         continue;
       }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
index 4ff5129..2a43811 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
@@ -59,6 +59,11 @@ PreSplitIntervals("pre-alloc-split",
                   cl::desc("Pre-register allocation live interval splitting"),
                   cl::init(false), cl::Hidden);
 
+static cl::opt<bool>
+TrivCoalesceEnds("trivial-coalesce-ends",
+                  cl::desc("Attempt trivial coalescing of interval ends"),
+                  cl::init(false), cl::Hidden);
+
 static RegisterRegAlloc
 linearscanRegAlloc("linearscan", "linear scan register allocator",
                    createLinearScanRegisterAllocator);
@@ -390,66 +395,71 @@ void RALinScan::ComputeRelatedRegClasses() {
         RelatedRegClasses.unionSets(I->second, OneClassForEachPhysReg[*AS]);
 }
 
-/// attemptTrivialCoalescing - If a simple interval is defined by a copy,
-/// try allocate the definition the same register as the source register
-/// if the register is not defined during live time of the interval. This
-/// eliminate a copy. This is used to coalesce copies which were not
-/// coalesced away before allocation either due to dest and src being in
-/// different register classes or because the coalescer was overly
-/// conservative.
+/// attemptTrivialCoalescing - If a simple interval is defined by a copy, try
+/// to allocate the definition the same register as the source register if the
+/// register is not defined during the interval's live range. If the interval is
+/// killed by a copy, try to use the destination register. This eliminates a
+/// copy. This is used to coalesce copies which were not coalesced away before
+/// allocation either due to dest and src being in different register classes or
+/// because the coalescer was overly conservative.
 unsigned RALinScan::attemptTrivialCoalescing(LiveInterval &cur, unsigned Reg) {
   unsigned Preference = vrm_->getRegAllocPref(cur.reg);
   if ((Preference && Preference == Reg) || !cur.containsOneValue())
     return Reg;
 
-  VNInfo *vni = cur.begin()->valno;
-  if ((vni->def == SlotIndex()) ||
-      vni->isUnused() || !vni->isDefAccurate())
+  // We cannot handle complicated live ranges. Simple linear stuff only.
+  if (cur.ranges.size() != 1)
     return Reg;
-  MachineInstr *CopyMI = li_->getInstructionFromIndex(vni->def);
-  unsigned SrcReg, DstReg, SrcSubReg, DstSubReg, PhysReg;
-  if (!CopyMI ||
-      !tii_->isMoveInstr(*CopyMI, SrcReg, DstReg, SrcSubReg, DstSubReg))
+
+  const LiveRange &range = cur.ranges.front();
+
+  VNInfo *vni = range.valno;
+  if (vni->isUnused())
     return Reg;
-  PhysReg = SrcReg;
-  if (TargetRegisterInfo::isVirtualRegister(SrcReg)) {
-    if (!vrm_->isAssignedReg(SrcReg))
+
+  unsigned CandReg;
+  {
+    MachineInstr *CopyMI;
+    unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
+    if (vni->def != SlotIndex() && vni->isDefAccurate() &&
+        (CopyMI = li_->getInstructionFromIndex(vni->def)) &&
+        tii_->isMoveInstr(*CopyMI, SrcReg, DstReg, SrcSubReg, DstSubReg))
+      // Defined by a copy, try to extend SrcReg forward
+      CandReg = SrcReg;
+    else if (TrivCoalesceEnds &&
+             (CopyMI =
+              li_->getInstructionFromIndex(range.end.getBaseIndex())) &&
+             tii_->isMoveInstr(*CopyMI, SrcReg, DstReg, SrcSubReg, DstSubReg) &&
+             cur.reg == SrcReg)
+      // Only used by a copy, try to extend DstReg backwards
+      CandReg = DstReg;
+    else
+      return Reg;
+  }
+
+  if (TargetRegisterInfo::isVirtualRegister(CandReg)) {
+    if (!vrm_->isAssignedReg(CandReg))
       return Reg;
-    PhysReg = vrm_->getPhys(SrcReg);
+    CandReg = vrm_->getPhys(CandReg);
   }
-  if (Reg == PhysReg)
+  if (Reg == CandReg)
     return Reg;
 
   const TargetRegisterClass *RC = mri_->getRegClass(cur.reg);
-  if (!RC->contains(PhysReg))
+  if (!RC->contains(CandReg))
     return Reg;
 
-  // Try to coalesce.
-  if (!li_->conflictsWithPhysRegDef(cur, *vrm_, PhysReg)) {
-    DEBUG(errs() << "Coalescing: " << cur << " -> " << tri_->getName(PhysReg)
-                 << '\n');
-    vrm_->clearVirt(cur.reg);
-    vrm_->assignVirt2Phys(cur.reg, PhysReg);
-
-    // Remove unnecessary kills since a copy does not clobber the register.
-    if (li_->hasInterval(SrcReg)) {
-      LiveInterval &SrcLI = li_->getInterval(SrcReg);
-      for (MachineRegisterInfo::use_iterator I = mri_->use_begin(cur.reg),
-             E = mri_->use_end(); I != E; ++I) {
-        MachineOperand &O = I.getOperand();
-        if (!O.isKill())
-          continue;
-        MachineInstr *MI = &*I;
-        if (SrcLI.liveAt(li_->getInstructionIndex(MI).getDefIndex()))
-          O.setIsKill(false);
-      }
-    }
+  if (li_->conflictsWithPhysReg(cur, *vrm_, CandReg))
+    return Reg;
 
-    ++NumCoalesce;
-    return PhysReg;
-  }
+  // Try to coalesce.
+  DEBUG(errs() << "Coalescing: " << cur << " -> " << tri_->getName(CandReg)
+        << '\n');
+  vrm_->clearVirt(cur.reg);
+  vrm_->assignVirt2Phys(cur.reg, CandReg);
 
-  return Reg;
+  ++NumCoalesce;
+  return CandReg;
 }
 
 bool RALinScan::runOnMachineFunction(MachineFunction &fn) {
@@ -1261,9 +1271,9 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
 
   // The earliest start of a Spilled interval indicates up to where
   // in handled we need to roll back
+  assert(!spillIs.empty() && "No spill intervals?"); 
+  SlotIndex earliestStart = spillIs[0]->beginIndex();
   
-  LiveInterval *earliestStartInterval = cur;
-
   // Spill live intervals of virtual regs mapped to the physical register we
   // want to clear (and its aliases).  We only spill those that overlap with the
   // current interval as the rest do not affect its allocation. we also keep
@@ -1274,19 +1284,16 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
     LiveInterval *sli = spillIs.back();
     spillIs.pop_back();
     DEBUG(errs() << "\t\t\tspilling(a): " << *sli << '\n');
-    earliestStartInterval =
-      (earliestStartInterval->beginIndex() < sli->beginIndex()) ?
-         earliestStartInterval : sli;
+    if (sli->beginIndex() < earliestStart)
+      earliestStart = sli->beginIndex();
        
     std::vector<LiveInterval*> newIs;
-    newIs = spiller_->spill(sli, spillIs);
+    newIs = spiller_->spill(sli, spillIs, &earliestStart);
     addStackInterval(sli, ls_, li_, mri_, *vrm_);
     std::copy(newIs.begin(), newIs.end(), std::back_inserter(added));
     spilled.insert(sli->reg);
   }
 
-  SlotIndex earliestStart = earliestStartInterval->beginIndex();
-
   DEBUG(errs() << "\t\trolling back to: " << earliestStart << '\n');
 
   // Scan handled in reverse order up to the earliest start of a
@@ -1295,7 +1302,7 @@ void RALinScan::assignRegOrStackSlotAtInterval(LiveInterval* cur) {
   while (!handled_.empty()) {
     LiveInterval* i = handled_.back();
     // If this interval starts before t we are done.
-    if (i->beginIndex() < earliestStart)
+    if (!i->empty() && i->beginIndex() < earliestStart)
       break;
     DEBUG(errs() << "\t\t\tundo changes for: " << *i << '\n');
     handled_.pop_back();
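The rollback loop above pops `handled_` back to the earliest start of any spilled interval. The sketch below reduces that to a stack of begin indices (all values invented); the real loop additionally skips empty intervals, which this toy omits.

```cpp
#include <cassert>
#include <vector>

// Toy version of the linear-scan rollback: unassign intervals from the
// handled stack whose start is at or after the earliest spilled start.
// Returns the begin indices that were rolled back, latest first.
std::vector<int> rollBack(std::vector<int> &handledBegins, int earliestStart) {
    std::vector<int> undone;
    while (!handledBegins.empty()) {
        int b = handledBegins.back();
        if (b < earliestStart)   // starts before the rollback point: done
            break;
        undone.push_back(b);
        handledBegins.pop_back();
    }
    return undone;
}
```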
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegisterScavenging.cpp b/libclamav/c++/llvm/lib/CodeGen/RegisterScavenging.cpp
index 94680ed..67bf209 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegisterScavenging.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegisterScavenging.cpp
@@ -125,7 +125,7 @@ void RegScavenger::forward() {
     Tracking = true;
   } else {
     assert(MBBI != MBB->end() && "Already at the end of the basic block!");
-    MBBI = next(MBBI);
+    MBBI = llvm::next(MBBI);
   }
 
   MachineInstr *MI = MBBI;
diff --git a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp
index 4851d49..027f615 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp
@@ -32,6 +32,9 @@ using namespace llvm;
 namespace llvm {
   template<>
   struct DOTGraphTraits<ScheduleDAG*> : public DefaultDOTGraphTraits {
+
+  DOTGraphTraits (bool isSimple=false) : DefaultDOTGraphTraits(isSimple) {}
+
     static std::string getGraphName(const ScheduleDAG *G) {
       return G->MF.getFunction()->getName();
     }
@@ -57,9 +60,7 @@ namespace llvm {
     }
     
 
-    static std::string getNodeLabel(const SUnit *Node,
-                                    const ScheduleDAG *Graph,
-                                    bool ShortNames);
+    std::string getNodeLabel(const SUnit *Node, const ScheduleDAG *Graph);
     static std::string getNodeAttributes(const SUnit *N,
                                          const ScheduleDAG *Graph) {
       return "shape=Mrecord";
@@ -73,8 +74,7 @@ namespace llvm {
 }
 
 std::string DOTGraphTraits<ScheduleDAG*>::getNodeLabel(const SUnit *SU,
-                                                       const ScheduleDAG *G,
-                                                       bool ShortNames) {
+                                                       const ScheduleDAG *G) {
   return G->getGraphNodeLabel(SU);
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index 06ffdd6..aee2f20 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -119,7 +119,8 @@ namespace {
     /// it can be simplified or if things it uses can be simplified by bit
     /// propagation.  If so, return true.
     bool SimplifyDemandedBits(SDValue Op) {
-      APInt Demanded = APInt::getAllOnesValue(Op.getValueSizeInBits());
+      unsigned BitWidth = Op.getValueType().getScalarType().getSizeInBits();
+      APInt Demanded = APInt::getAllOnesValue(BitWidth);
       return SimplifyDemandedBits(Op, Demanded);
     }
 
@@ -546,7 +547,8 @@ SDValue DAGCombiner::CombineTo(SDNode *N, const SDValue *To, unsigned NumTo,
         To[0].getNode()->dump(&DAG);
         errs() << " and " << NumTo-1 << " other values\n";
         for (unsigned i = 0, e = NumTo; i != e; ++i)
-          assert(N->getValueType(i) == To[i].getValueType() &&
+          assert((!To[i].getNode() ||
+                  N->getValueType(i) == To[i].getValueType()) &&
                  "Cannot combine value to value of different type!"));
   WorkListRemover DeadNodes(*this);
   DAG.ReplaceAllUsesWith(N, To, &DeadNodes);
@@ -1687,10 +1689,14 @@ SDValue DAGCombiner::SimplifyBinOpWithSameOpcodeHands(SDNode *N) {
   // fold (OP (sext x), (sext y)) -> (sext (OP x, y))
   // fold (OP (aext x), (aext y)) -> (aext (OP x, y))
   // fold (OP (trunc x), (trunc y)) -> (trunc (OP x, y)) (if trunc isn't free)
+  //
+  // do not sink logical op inside of a vector extend, since it may combine
+  // into a vsetcc.
   if ((N0.getOpcode() == ISD::ZERO_EXTEND || N0.getOpcode() == ISD::ANY_EXTEND||
        N0.getOpcode() == ISD::SIGN_EXTEND ||
        (N0.getOpcode() == ISD::TRUNCATE &&
         !TLI.isTruncateFree(N0.getOperand(0).getValueType(), VT))) &&
+      !VT.isVector() &&
       N0.getOperand(0).getValueType() == N1.getOperand(0).getValueType() &&
       (!LegalOperations ||
        TLI.isOperationLegal(N->getOpcode(), N0.getOperand(0).getValueType()))) {
@@ -1943,8 +1949,10 @@ SDValue DAGCombiner::visitOR(SDNode *N) {
   }
 
   // fold (or x, undef) -> -1
-  if (N0.getOpcode() == ISD::UNDEF || N1.getOpcode() == ISD::UNDEF)
-    return DAG.getConstant(APInt::getAllOnesValue(VT.getSizeInBits()), VT);
+  if (N0.getOpcode() == ISD::UNDEF || N1.getOpcode() == ISD::UNDEF) {
+    EVT EltVT = VT.isVector() ? VT.getVectorElementType() : VT;
+    return DAG.getConstant(APInt::getAllOnesValue(EltVT.getSizeInBits()), VT);
+  }
   // fold (or c1, c2) -> c1|c2
   if (N0C && N1C)
     return DAG.FoldConstantArithmetic(ISD::OR, VT, N0C, N1C);
@@ -2434,7 +2442,7 @@ SDValue DAGCombiner::visitSHL(SDNode *N) {
   ConstantSDNode *N0C = dyn_cast<ConstantSDNode>(N0);
   ConstantSDNode *N1C = dyn_cast<ConstantSDNode>(N1);
   EVT VT = N0.getValueType();
-  unsigned OpSizeInBits = VT.getSizeInBits();
+  unsigned OpSizeInBits = VT.getScalarType().getSizeInBits();
 
   // fold (shl c1, c2) -> c1<<c2
   if (N0C && N1C)
@@ -2450,7 +2458,7 @@ SDValue DAGCombiner::visitSHL(SDNode *N) {
     return N0;
   // if (shl x, c) is known to be zero, return 0
   if (DAG.MaskedValueIsZero(SDValue(N, 0),
-                            APInt::getAllOnesValue(VT.getSizeInBits())))
+                            APInt::getAllOnesValue(OpSizeInBits)))
     return DAG.getConstant(0, VT);
   // fold (shl x, (trunc (and y, c))) -> (shl x, (and (trunc y), (trunc c))).
   if (N1.getOpcode() == ISD::TRUNCATE &&
@@ -2526,6 +2534,7 @@ SDValue DAGCombiner::visitSRA(SDNode *N) {
   ConstantSDNode *N0C = dyn_cast<ConstantSDNode>(N0);
   ConstantSDNode *N1C = dyn_cast<ConstantSDNode>(N1);
   EVT VT = N0.getValueType();
+  unsigned OpSizeInBits = VT.getScalarType().getSizeInBits();
 
   // fold (sra c1, c2) -> (sra c1, c2)
   if (N0C && N1C)
@@ -2537,7 +2546,7 @@ SDValue DAGCombiner::visitSRA(SDNode *N) {
   if (N0C && N0C->isAllOnesValue())
     return N0;
   // fold (sra x, (setge c, size(x))) -> undef
-  if (N1C && N1C->getZExtValue() >= VT.getSizeInBits())
+  if (N1C && N1C->getZExtValue() >= OpSizeInBits)
     return DAG.getUNDEF(VT);
   // fold (sra x, 0) -> x
   if (N1C && N1C->isNullValue())
@@ -2545,7 +2554,7 @@ SDValue DAGCombiner::visitSRA(SDNode *N) {
   // fold (sra (shl x, c1), c1) -> sext_inreg for some c1 and target supports
   // sext_inreg.
   if (N1C && N0.getOpcode() == ISD::SHL && N1 == N0.getOperand(1)) {
-    unsigned LowBits = VT.getSizeInBits() - (unsigned)N1C->getZExtValue();
+    unsigned LowBits = OpSizeInBits - (unsigned)N1C->getZExtValue();
     EVT EVT = EVT::getIntegerVT(*DAG.getContext(), LowBits);
     if ((!LegalOperations || TLI.isOperationLegal(ISD::SIGN_EXTEND_INREG, EVT)))
       return DAG.getNode(ISD::SIGN_EXTEND_INREG, N->getDebugLoc(), VT,
@@ -2556,7 +2565,7 @@ SDValue DAGCombiner::visitSRA(SDNode *N) {
   if (N1C && N0.getOpcode() == ISD::SRA) {
     if (ConstantSDNode *C1 = dyn_cast<ConstantSDNode>(N0.getOperand(1))) {
       unsigned Sum = N1C->getZExtValue() + C1->getZExtValue();
-      if (Sum >= VT.getSizeInBits()) Sum = VT.getSizeInBits()-1;
+      if (Sum >= OpSizeInBits) Sum = OpSizeInBits-1;
       return DAG.getNode(ISD::SRA, N->getDebugLoc(), VT, N0.getOperand(0),
                          DAG.getConstant(Sum, N1C->getValueType(0)));
     }
@@ -2572,9 +2581,8 @@ SDValue DAGCombiner::visitSRA(SDNode *N) {
     const ConstantSDNode *N01C = dyn_cast<ConstantSDNode>(N0.getOperand(1));
     if (N01C && N1C) {
       // Determine what the truncate's result bitsize and type would be.
-      unsigned VTValSize = VT.getSizeInBits();
       EVT TruncVT =
-        EVT::getIntegerVT(*DAG.getContext(), VTValSize - N1C->getZExtValue());
+        EVT::getIntegerVT(*DAG.getContext(), OpSizeInBits - N1C->getZExtValue());
       // Determine the residual right-shift amount.
       signed ShiftAmt = N1C->getZExtValue() - N01C->getZExtValue();
 
@@ -2607,7 +2615,7 @@ SDValue DAGCombiner::visitSRA(SDNode *N) {
       EVT TruncVT = N1.getValueType();
       SDValue N100 = N1.getOperand(0).getOperand(0);
       APInt TruncC = N101C->getAPIntValue();
-      TruncC.trunc(TruncVT.getSizeInBits());
+      TruncC.trunc(TruncVT.getScalarType().getSizeInBits());
       return DAG.getNode(ISD::SRA, N->getDebugLoc(), VT, N0,
                          DAG.getNode(ISD::AND, N->getDebugLoc(),
                                      TruncVT,
@@ -2636,7 +2644,7 @@ SDValue DAGCombiner::visitSRL(SDNode *N) {
   ConstantSDNode *N0C = dyn_cast<ConstantSDNode>(N0);
   ConstantSDNode *N1C = dyn_cast<ConstantSDNode>(N1);
   EVT VT = N0.getValueType();
-  unsigned OpSizeInBits = VT.getSizeInBits();
+  unsigned OpSizeInBits = VT.getScalarType().getSizeInBits();
 
   // fold (srl c1, c2) -> c1 >>u c2
   if (N0C && N1C)
@@ -3029,7 +3037,7 @@ SDValue DAGCombiner::visitSIGN_EXTEND(SDNode *N) {
       else if (Op.getValueType().bitsGT(VT))
         Op = DAG.getNode(ISD::TRUNCATE, N0.getDebugLoc(), VT, Op);
       return DAG.getNode(ISD::SIGN_EXTEND_INREG, N->getDebugLoc(), VT, Op,
-                         DAG.getValueType(N0.getValueType()));
+                         DAG.getValueType(N0.getValueType().getScalarType()));
     }
   }
 
@@ -3170,7 +3178,8 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
     } else if (Op.getValueType().bitsGT(VT)) {
       Op = DAG.getNode(ISD::TRUNCATE, N->getDebugLoc(), VT, Op);
     }
-    return DAG.getZeroExtendInReg(Op, N->getDebugLoc(), N0.getValueType());
+    return DAG.getZeroExtendInReg(Op, N->getDebugLoc(),
+                                  N0.getValueType().getScalarType());
   }
 
   // Fold (zext (and (trunc x), cst)) -> (and x, cst),
@@ -3529,7 +3538,7 @@ SDValue DAGCombiner::visitSIGN_EXTEND_INREG(SDNode *N) {
   SDValue N1 = N->getOperand(1);
   EVT VT = N->getValueType(0);
   EVT EVT = cast<VTSDNode>(N1)->getVT();
-  unsigned VTBits = VT.getSizeInBits();
+  unsigned VTBits = VT.getScalarType().getSizeInBits();
   unsigned EVTBits = EVT.getSizeInBits();
 
   // fold (sext_in_reg c1) -> c1
@@ -3537,7 +3546,7 @@ SDValue DAGCombiner::visitSIGN_EXTEND_INREG(SDNode *N) {
     return DAG.getNode(ISD::SIGN_EXTEND_INREG, N->getDebugLoc(), VT, N0, N1);
 
   // If the input is already sign extended, just drop the extension.
-  if (DAG.ComputeNumSignBits(N0) >= VT.getSizeInBits()-EVTBits+1)
+  if (DAG.ComputeNumSignBits(N0) >= VTBits-EVTBits+1)
     return N0;
 
   // fold (sext_in_reg (sext_in_reg x, VT2), VT1) -> (sext_in_reg x, minVT) pt2
@@ -3552,7 +3561,7 @@ SDValue DAGCombiner::visitSIGN_EXTEND_INREG(SDNode *N) {
   // if x is small enough.
   if (N0.getOpcode() == ISD::SIGN_EXTEND || N0.getOpcode() == ISD::ANY_EXTEND) {
     SDValue N00 = N0.getOperand(0);
-    if (N00.getValueType().getSizeInBits() < EVTBits)
+    if (N00.getValueType().getScalarType().getSizeInBits() < EVTBits)
       return DAG.getNode(ISD::SIGN_EXTEND, N->getDebugLoc(), VT, N00, N1);
   }
 
@@ -3576,11 +3585,11 @@ SDValue DAGCombiner::visitSIGN_EXTEND_INREG(SDNode *N) {
   // We already fold "(sext_in_reg (srl X, 25), i8) -> srl X, 25" above.
   if (N0.getOpcode() == ISD::SRL) {
     if (ConstantSDNode *ShAmt = dyn_cast<ConstantSDNode>(N0.getOperand(1)))
-      if (ShAmt->getZExtValue()+EVTBits <= VT.getSizeInBits()) {
+      if (ShAmt->getZExtValue()+EVTBits <= VTBits) {
         // We can turn this into an SRA iff the input to the SRL is already sign
         // extended enough.
         unsigned InSignBits = DAG.ComputeNumSignBits(N0.getOperand(0));
-        if (VT.getSizeInBits()-(ShAmt->getZExtValue()+EVTBits) < InSignBits)
+        if (VTBits-(ShAmt->getZExtValue()+EVTBits) < InSignBits)
           return DAG.getNode(ISD::SRA, N->getDebugLoc(), VT,
                              N0.getOperand(0), N0.getOperand(1));
       }
@@ -3681,7 +3690,6 @@ SDValue DAGCombiner::CombineConsecutiveLoads(SDNode *N, EVT VT) {
   if (!LD1 || !LD2 || !ISD::isNON_EXTLoad(LD1) || !LD1->hasOneUse())
     return SDValue();
   EVT LD1VT = LD1->getValueType(0);
-  const MachineFrameInfo *MFI = DAG.getMachineFunction().getFrameInfo();
 
   if (ISD::isNON_EXTLoad(LD2) &&
       LD2->hasOneUse() &&
@@ -3689,7 +3697,7 @@ SDValue DAGCombiner::CombineConsecutiveLoads(SDNode *N, EVT VT) {
       // If one is volatile it might be ok, but play conservative and bail out.
       !LD1->isVolatile() &&
       !LD2->isVolatile() &&
-      TLI.isConsecutiveLoad(LD2, LD1, LD1VT.getSizeInBits()/8, 1, MFI)) {
+      DAG.isConsecutiveLoad(LD2, LD1, LD1VT.getSizeInBits()/8, 1)) {
     unsigned Align = LD1->getAlignment();
     unsigned NewAlign = TLI.getTargetData()->
       getABITypeAlignment(VT.getTypeForEVT(*DAG.getContext()));
@@ -4804,49 +4812,6 @@ bool DAGCombiner::CombineToPostIndexedLoadStore(SDNode *N) {
   return false;
 }
 
-/// InferAlignment - If we can infer some alignment information from this
-/// pointer, return it.
-static unsigned InferAlignment(SDValue Ptr, SelectionDAG &DAG) {
-  // If this is a direct reference to a stack slot, use information about the
-  // stack slot's alignment.
-  int FrameIdx = 1 << 31;
-  int64_t FrameOffset = 0;
-  if (FrameIndexSDNode *FI = dyn_cast<FrameIndexSDNode>(Ptr)) {
-    FrameIdx = FI->getIndex();
-  } else if (Ptr.getOpcode() == ISD::ADD &&
-             isa<ConstantSDNode>(Ptr.getOperand(1)) &&
-             isa<FrameIndexSDNode>(Ptr.getOperand(0))) {
-    FrameIdx = cast<FrameIndexSDNode>(Ptr.getOperand(0))->getIndex();
-    FrameOffset = Ptr.getConstantOperandVal(1);
-  }
-
-  if (FrameIdx != (1 << 31)) {
-    // FIXME: Handle FI+CST.
-    const MachineFrameInfo &MFI = *DAG.getMachineFunction().getFrameInfo();
-    if (MFI.isFixedObjectIndex(FrameIdx)) {
-      int64_t ObjectOffset = MFI.getObjectOffset(FrameIdx) + FrameOffset;
-
-      // The alignment of the frame index can be determined from its offset from
-      // the incoming frame position.  If the frame object is at offset 32 and
-      // the stack is guaranteed to be 16-byte aligned, then we know that the
-      // object is 16-byte aligned.
-      unsigned StackAlign = DAG.getTarget().getFrameInfo()->getStackAlignment();
-      unsigned Align = MinAlign(ObjectOffset, StackAlign);
-
-      // Finally, the frame object itself may have a known alignment.  Factor
-      // the alignment + offset into a new alignment.  For example, if we know
-      // the  FI is 8 byte aligned, but the pointer is 4 off, we really have a
-      // 4-byte alignment of the resultant pointer.  Likewise align 4 + 4-byte
-      // offset = 4-byte alignment, align 4 + 1-byte offset = align 1, etc.
-      unsigned FIInfoAlign = MinAlign(MFI.getObjectAlignment(FrameIdx),
-                                      FrameOffset);
-      return std::max(Align, FIInfoAlign);
-    }
-  }
-
-  return 0;
-}
-
 SDValue DAGCombiner::visitLOAD(SDNode *N) {
   LoadSDNode *LD  = cast<LoadSDNode>(N);
   SDValue Chain = LD->getChain();
@@ -4854,7 +4819,7 @@ SDValue DAGCombiner::visitLOAD(SDNode *N) {
 
   // Try to infer better alignment information than the load already has.
   if (OptLevel != CodeGenOpt::None && LD->isUnindexed()) {
-    if (unsigned Align = InferAlignment(Ptr, DAG)) {
+    if (unsigned Align = DAG.InferPtrAlignment(Ptr)) {
       if (Align > LD->getAlignment())
         return DAG.getExtLoad(LD->getExtensionType(), N->getDebugLoc(),
                               LD->getValueType(0),
@@ -5079,7 +5044,7 @@ SDValue DAGCombiner::visitSTORE(SDNode *N) {
 
   // Try to infer better alignment information than the store already has.
   if (OptLevel != CodeGenOpt::None && ST->isUnindexed()) {
-    if (unsigned Align = InferAlignment(Ptr, DAG)) {
+    if (unsigned Align = DAG.InferPtrAlignment(Ptr)) {
       if (Align > ST->getAlignment())
         return DAG.getTruncStore(Chain, N->getDebugLoc(), Value,
                                  Ptr, ST->getSrcValue(),
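The hunks above replace DAGCombiner's local InferAlignment with SelectionDAG::InferPtrAlignment, but the underlying arithmetic is unchanged: combine the alignment implied by the object's offset from an aligned stack base with the frame object's own declared alignment. A minimal standalone sketch of that computation (plain C++; the function names here are illustrative, not LLVM's API):

```cpp
#include <algorithm>
#include <cstdint>

// Largest power of two dividing both A and B (mirrors llvm::MinAlign):
// the lowest set bit of A | B.
static uint64_t MinAlign(uint64_t A, uint64_t B) {
  return (A | B) & (~(A | B) + 1);
}

// Infer the alignment of a frame object at ObjectOffset from the stack base,
// given the guaranteed stack alignment and the object's own alignment.
// E.g. an object at offset 32 on a 16-byte-aligned stack is 16-byte aligned.
static uint64_t InferFrameAlign(uint64_t ObjectOffset, uint64_t StackAlign,
                                uint64_t ObjectAlign) {
  uint64_t FromStack  = MinAlign(ObjectOffset, StackAlign);
  uint64_t FromObject = MinAlign(ObjectAlign, ObjectOffset);
  return std::max(FromStack, FromObject);
}
```

This is the same "offset 32 on a 16-byte-aligned stack implies 16-byte alignment" reasoning spelled out in the removed comment; the real InferPtrAlignment additionally handles an FI+constant offset pair.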
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
index 5eb9ca1..4ead9c9 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
@@ -532,7 +532,15 @@ bool FastISel::SelectBitCast(User *I) {
 
 bool
 FastISel::SelectInstruction(Instruction *I) {
-  return SelectOperator(I, I->getOpcode());
+  // First, try doing target-independent selection.
+  if (SelectOperator(I, I->getOpcode()))
+    return true;
+
+  // Next, try calling the target to attempt to handle the instruction.
+  if (TargetSelectInstruction(I))
+    return true;
+
+  return false;
 }
 
 /// FastEmitBranch - Emit an unconditional branch to the given block,
@@ -541,7 +549,7 @@ FastISel::SelectInstruction(Instruction *I) {
 void
 FastISel::FastEmitBranch(MachineBasicBlock *MSucc) {
   MachineFunction::iterator NextMBB =
-     next(MachineFunction::iterator(MBB));
+     llvm::next(MachineFunction::iterator(MBB));
 
   if (MBB->isLayoutSuccessor(MSucc)) {
     // The unconditional fall-through case, which needs no instructions.
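The FastISel change above turns SelectInstruction into a two-stage protocol: try target-independent selection first, then give the target a chance via TargetSelectInstruction. The fallback pattern itself can be sketched independently of FastISel (illustrative interface, not LLVM's):

```cpp
#include <functional>

// Two-stage instruction selection: generic path first, target hook second.
// Returns true once either path claims the instruction.
struct Selector {
  std::function<bool(int)> genericSelect;  // target-independent selection
  std::function<bool(int)> targetSelect;   // target-specific fallback

  bool selectInstruction(int Inst) {
    if (genericSelect(Inst))
      return true;               // handled by common code
    return targetSelect(Inst);   // otherwise ask the target
  }
};
```

The ordering matters: common folds stay in one place, and a target only pays for instructions the generic path could not handle.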
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
index 273dbf0..f9c05d0 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
@@ -232,7 +232,7 @@ void SelectionDAGLegalize::LegalizeDAG() {
   // node is only legalized after all of its operands are legalized.
   DAG.AssignTopologicalOrder();
   for (SelectionDAG::allnodes_iterator I = DAG.allnodes_begin(),
-       E = prior(DAG.allnodes_end()); I != next(E); ++I)
+       E = prior(DAG.allnodes_end()); I != llvm::next(E); ++I)
     LegalizeOp(SDValue(I, 0));
 
   // Finally, it's possible the root changed.  Get the new root.
@@ -2294,9 +2294,15 @@ void SelectionDAGLegalize::ExpandNode(SDNode *Node,
     // NOTE: we could fall back on load/store here too for targets without
     // SAR.  However, it is doubtful that any exist.
     EVT ExtraVT = cast<VTSDNode>(Node->getOperand(1))->getVT();
-    unsigned BitsDiff = Node->getValueType(0).getSizeInBits() -
+    EVT VT = Node->getValueType(0);
+    EVT ShiftAmountTy = TLI.getShiftAmountTy();
+    if (VT.isVector()) {
+      ShiftAmountTy = VT;
+      VT = VT.getVectorElementType();
+    }
+    unsigned BitsDiff = VT.getSizeInBits() -
                         ExtraVT.getSizeInBits();
-    SDValue ShiftCst = DAG.getConstant(BitsDiff, TLI.getShiftAmountTy());
+    SDValue ShiftCst = DAG.getConstant(BitsDiff, ShiftAmountTy);
     Tmp1 = DAG.getNode(ISD::SHL, dl, Node->getValueType(0),
                        Node->getOperand(0), ShiftCst);
     Tmp1 = DAG.getNode(ISD::SRA, dl, Node->getValueType(0), Tmp1, ShiftCst);
@@ -3059,8 +3065,7 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node,
 
 // SelectionDAG::Legalize - This is the entry point for the file.
 //
-void SelectionDAG::Legalize(bool TypesNeedLegalizing,
-                            CodeGenOpt::Level OptLevel) {
+void SelectionDAG::Legalize(CodeGenOpt::Level OptLevel) {
   /// run - This is the main entry point to this class.
   ///
   SelectionDAGLegalize(*this, OptLevel).LegalizeDAG();
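The ExpandNode hunk above lowers SIGN_EXTEND_INREG as a left shift followed by an arithmetic right shift by the same amount, now computing BitsDiff from the scalar element type when the value is a vector. The underlying identity, sketched on plain 32-bit integers (assumes arithmetic right shift on signed values, which is universal in practice though implementation-defined in C++):

```cpp
#include <cstdint>

// Sign-extend the low FromBits bits of x into a full 32-bit value using the
// same shl/sra pair SelectionDAGLegalize emits for SIGN_EXTEND_INREG.
static int32_t SignExtendInReg(int32_t x, unsigned FromBits) {
  unsigned BitsDiff = 32 - FromBits;  // push the sign bit to the top...
  // (shift left as unsigned to avoid signed-overflow UB)
  return (int32_t)((uint32_t)x << BitsDiff) >> BitsDiff;  // ...then pull back
}
```

For example, the low 8 bits 0xFF sign-extend to -1, while 0x7F stays 127.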
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 8ac8063..2f4457e 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -1167,55 +1167,62 @@ ExpandShiftWithUnknownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
   GetExpandedInteger(N->getOperand(0), InL, InH);
 
   SDValue NVBitsNode = DAG.getConstant(NVTBits, ShTy);
-  SDValue Amt2 = DAG.getNode(ISD::SUB, dl, ShTy, NVBitsNode, Amt);
-  SDValue Cmp = DAG.getSetCC(dl, TLI.getSetCCResultType(ShTy),
-                             Amt, NVBitsNode, ISD::SETULT);
+  SDValue AmtExcess = DAG.getNode(ISD::SUB, dl, ShTy, Amt, NVBitsNode);
+  SDValue AmtLack = DAG.getNode(ISD::SUB, dl, ShTy, NVBitsNode, Amt);
+  SDValue isShort = DAG.getSetCC(dl, TLI.getSetCCResultType(ShTy),
+                                 Amt, NVBitsNode, ISD::SETULT);
 
-  SDValue Lo1, Hi1, Lo2, Hi2;
+  SDValue LoS, HiS, LoL, HiL;
   switch (N->getOpcode()) {
   default: llvm_unreachable("Unknown shift");
   case ISD::SHL:
-    // ShAmt < NVTBits
-    Lo1 = DAG.getConstant(0, NVT);                  // Low part is zero.
-    Hi1 = DAG.getNode(ISD::SHL, dl, NVT, InL, Amt); // High part from Lo part.
-
-    // ShAmt >= NVTBits
-    Lo2 = DAG.getNode(ISD::SHL, dl, NVT, InL, Amt);
-    Hi2 = DAG.getNode(ISD::OR, dl, NVT,
+    // Short: ShAmt < NVTBits
+    LoS = DAG.getNode(ISD::SHL, dl, NVT, InL, Amt);
+    HiS = DAG.getNode(ISD::OR, dl, NVT,
                       DAG.getNode(ISD::SHL, dl, NVT, InH, Amt),
-                      DAG.getNode(ISD::SRL, dl, NVT, InL, Amt2));
+    // FIXME: If Amt is zero, the following shift generates an undefined result
+    // on some architectures.
+                      DAG.getNode(ISD::SRL, dl, NVT, InL, AmtLack));
+
+    // Long: ShAmt >= NVTBits
+    LoL = DAG.getConstant(0, NVT);                        // Lo part is zero.
+    HiL = DAG.getNode(ISD::SHL, dl, NVT, InL, AmtExcess); // Hi from Lo part.
 
-    Lo = DAG.getNode(ISD::SELECT, dl, NVT, Cmp, Lo1, Lo2);
-    Hi = DAG.getNode(ISD::SELECT, dl, NVT, Cmp, Hi1, Hi2);
+    Lo = DAG.getNode(ISD::SELECT, dl, NVT, isShort, LoS, LoL);
+    Hi = DAG.getNode(ISD::SELECT, dl, NVT, isShort, HiS, HiL);
     return true;
   case ISD::SRL:
-    // ShAmt < NVTBits
-    Hi1 = DAG.getConstant(0, NVT);                  // Hi part is zero.
-    Lo1 = DAG.getNode(ISD::SRL, dl, NVT, InH, Amt); // Lo part from Hi part.
-
-    // ShAmt >= NVTBits
-    Hi2 = DAG.getNode(ISD::SRL, dl, NVT, InH, Amt);
-    Lo2 = DAG.getNode(ISD::OR, dl, NVT,
-                     DAG.getNode(ISD::SRL, dl, NVT, InL, Amt),
-                     DAG.getNode(ISD::SHL, dl, NVT, InH, Amt2));
-
-    Lo = DAG.getNode(ISD::SELECT, dl, NVT, Cmp, Lo1, Lo2);
-    Hi = DAG.getNode(ISD::SELECT, dl, NVT, Cmp, Hi1, Hi2);
+    // Short: ShAmt < NVTBits
+    HiS = DAG.getNode(ISD::SRL, dl, NVT, InH, Amt);
+    LoS = DAG.getNode(ISD::OR, dl, NVT,
+                      DAG.getNode(ISD::SRL, dl, NVT, InL, Amt),
+    // FIXME: If Amt is zero, the following shift generates an undefined result
+    // on some architectures.
+                      DAG.getNode(ISD::SHL, dl, NVT, InH, AmtLack));
+
+    // Long: ShAmt >= NVTBits
+    HiL = DAG.getConstant(0, NVT);                        // Hi part is zero.
+    LoL = DAG.getNode(ISD::SRL, dl, NVT, InH, AmtExcess); // Lo from Hi part.
+
+    Lo = DAG.getNode(ISD::SELECT, dl, NVT, isShort, LoS, LoL);
+    Hi = DAG.getNode(ISD::SELECT, dl, NVT, isShort, HiS, HiL);
     return true;
   case ISD::SRA:
-    // ShAmt < NVTBits
-    Hi1 = DAG.getNode(ISD::SRA, dl, NVT, InH,       // Sign extend high part.
-                       DAG.getConstant(NVTBits-1, ShTy));
-    Lo1 = DAG.getNode(ISD::SRA, dl, NVT, InH, Amt); // Lo part from Hi part.
-
-    // ShAmt >= NVTBits
-    Hi2 = DAG.getNode(ISD::SRA, dl, NVT, InH, Amt);
-    Lo2 = DAG.getNode(ISD::OR, dl, NVT,
+    // Short: ShAmt < NVTBits
+    HiS = DAG.getNode(ISD::SRA, dl, NVT, InH, Amt);
+    LoS = DAG.getNode(ISD::OR, dl, NVT,
                       DAG.getNode(ISD::SRL, dl, NVT, InL, Amt),
-                      DAG.getNode(ISD::SHL, dl, NVT, InH, Amt2));
+    // FIXME: If Amt is zero, the following shift generates an undefined result
+    // on some architectures.
+                      DAG.getNode(ISD::SHL, dl, NVT, InH, AmtLack));
+
+    // Long: ShAmt >= NVTBits
+    HiL = DAG.getNode(ISD::SRA, dl, NVT, InH,             // Sign of Hi part.
+                      DAG.getConstant(NVTBits-1, ShTy));
+    LoL = DAG.getNode(ISD::SRA, dl, NVT, InH, AmtExcess); // Lo from Hi part.
 
-    Lo = DAG.getNode(ISD::SELECT, dl, NVT, Cmp, Lo1, Lo2);
-    Hi = DAG.getNode(ISD::SELECT, dl, NVT, Cmp, Hi1, Hi2);
+    Lo = DAG.getNode(ISD::SELECT, dl, NVT, isShort, LoS, LoL);
+    Hi = DAG.getNode(ISD::SELECT, dl, NVT, isShort, HiS, HiL);
     return true;
   }
 
@@ -1989,7 +1996,9 @@ bool DAGTypeLegalizer::ExpandIntegerOperand(SDNode *N, unsigned OpNo) {
   case ISD::SRA:
   case ISD::SRL:
   case ISD::ROTL:
-  case ISD::ROTR: Res = ExpandIntOp_Shift(N); break;
+  case ISD::ROTR:              Res = ExpandIntOp_Shift(N); break;
+  case ISD::RETURNADDR:
+  case ISD::FRAMEADDR:         Res = ExpandIntOp_RETURNADDR(N); break;
   }
 
   // If the result is null, the sub-method took care of registering results etc.
@@ -2173,6 +2182,15 @@ SDValue DAGTypeLegalizer::ExpandIntOp_Shift(SDNode *N) {
   return DAG.UpdateNodeOperands(SDValue(N, 0), N->getOperand(0), Lo);
 }
 
+SDValue DAGTypeLegalizer::ExpandIntOp_RETURNADDR(SDNode *N) {
+  // The argument of the RETURNADDR / FRAMEADDR builtin is a 32-bit constant,
+  // which causes problems on 8/16-bit targets. Truncate the constant to a
+  // valid type.
+  SDValue Lo, Hi;
+  GetExpandedInteger(N->getOperand(0), Lo, Hi);
+  return DAG.UpdateNodeOperands(SDValue(N, 0), Lo);
+}
+
 SDValue DAGTypeLegalizer::ExpandIntOp_SINT_TO_FP(SDNode *N) {
   SDValue Op = N->getOperand(0);
   EVT DstVT = N->getValueType(0);
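The ExpandShiftWithUnknownAmountBit rewrite above renames the two cases to "short" (Amt < NVTBits) and "long" (Amt >= NVTBits) and selects between them; note the patch's own FIXME that the `NVTBits - Amt` shift is undefined when Amt is zero. A sketch of the SHL case on a 64-bit value held in 32-bit halves (illustrative; assumes 0 < Amt < 64 to sidestep the Amt == 0 caveat):

```cpp
#include <cstdint>

// Expand (InH:InL) << Amt over 32-bit halves, mirroring the short/long
// split in ExpandShiftWithUnknownAmountBit. Assumes 0 < Amt < 64, since
// InL >> (32 - Amt) is undefined for Amt == 0 (the FIXME in the patch).
static void ExpandShl64(uint32_t InL, uint32_t InH, unsigned Amt,
                        uint32_t &Lo, uint32_t &Hi) {
  const unsigned NVTBits = 32;
  if (Amt < NVTBits) {
    // Short: bits spill from the low half into the high half.
    Lo = InL << Amt;
    Hi = (InH << Amt) | (InL >> (NVTBits - Amt));
  } else {
    // Long: the low half is fully shifted out.
    Lo = 0;                       // Lo part is zero.
    Hi = InL << (Amt - NVTBits);  // Hi comes from the Lo input.
  }
}
```

In the real expansion the `Amt < NVTBits` comparison becomes a SETULT feeding two SELECT nodes, since the amount is only known at run time.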
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
index e298649..003cea7 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
@@ -907,6 +907,29 @@ bool DAGTypeLegalizer::CustomLowerNode(SDNode *N, EVT VT, bool LegalizeResult) {
   return true;
 }
 
+
+/// CustomWidenLowerNode - Widen the node's results with custom code provided
+/// by the target and return "true", or do nothing and return "false".
+bool DAGTypeLegalizer::CustomWidenLowerNode(SDNode *N, EVT VT) {
+  // See if the target wants to custom lower this node.
+  if (TLI.getOperationAction(N->getOpcode(), VT) != TargetLowering::Custom)
+    return false;
+
+  SmallVector<SDValue, 8> Results;
+  TLI.ReplaceNodeResults(N, Results, DAG);
+
+  if (Results.empty())
+    // The target didn't want to custom widen its result after all.
+    return false;
+
+  // Update the widening map.
+  assert(Results.size() == N->getNumValues() &&
+         "Custom lowering returned the wrong number of results!");
+  for (unsigned i = 0, e = Results.size(); i != e; ++i)
+    SetWidenedVector(SDValue(N, i), Results[i]);
+  return true;
+}
+
 /// GetSplitDestVTs - Compute the VTs needed for the low/hi parts of a type
 /// which is split into two not necessarily identical pieces.
 void DAGTypeLegalizer::GetSplitDestVTs(EVT InVT, EVT &LoVT, EVT &HiVT) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 7b9b010..c35f7ad 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -188,6 +188,7 @@ private:
   SDValue BitConvertVectorToIntegerVector(SDValue Op);
   SDValue CreateStackStoreLoad(SDValue Op, EVT DestVT);
   bool CustomLowerNode(SDNode *N, EVT VT, bool LegalizeResult);
+  bool CustomWidenLowerNode(SDNode *N, EVT VT);
   SDValue GetVectorElementPointer(SDValue VecPtr, EVT EltVT, SDValue Index);
   SDValue JoinIntegers(SDValue Lo, SDValue Hi);
   SDValue LibCallify(RTLIB::Libcall LC, SDNode *N, bool isSigned);
@@ -361,6 +362,7 @@ private:
   SDValue ExpandIntOp_STORE(StoreSDNode *N, unsigned OpNo);
   SDValue ExpandIntOp_TRUNCATE(SDNode *N);
   SDValue ExpandIntOp_UINT_TO_FP(SDNode *N);
+  SDValue ExpandIntOp_RETURNADDR(SDNode *N);
 
   void IntegerExpandSetCCOperands(SDValue &NewLHS, SDValue &NewRHS,
                                   ISD::CondCode &CCCode, DebugLoc dl);
@@ -515,6 +517,7 @@ private:
   SDValue ScalarizeVecRes_INSERT_VECTOR_ELT(SDNode *N);
   SDValue ScalarizeVecRes_LOAD(LoadSDNode *N);
   SDValue ScalarizeVecRes_SCALAR_TO_VECTOR(SDNode *N);
+  SDValue ScalarizeVecRes_SIGN_EXTEND_INREG(SDNode *N);
   SDValue ScalarizeVecRes_SELECT(SDNode *N);
   SDValue ScalarizeVecRes_SELECT_CC(SDNode *N);
   SDValue ScalarizeVecRes_SETCC(SDNode *N);
@@ -558,6 +561,7 @@ private:
   void SplitVecRes_INSERT_VECTOR_ELT(SDNode *N, SDValue &Lo, SDValue &Hi);
   void SplitVecRes_LOAD(LoadSDNode *N, SDValue &Lo, SDValue &Hi);
   void SplitVecRes_SCALAR_TO_VECTOR(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void SplitVecRes_SIGN_EXTEND_INREG(SDNode *N, SDValue &Lo, SDValue &Hi);
   void SplitVecRes_SETCC(SDNode *N, SDValue &Lo, SDValue &Hi);
   void SplitVecRes_UNDEF(SDNode *N, SDValue &Lo, SDValue &Hi);
   void SplitVecRes_VECTOR_SHUFFLE(ShuffleVectorSDNode *N, SDValue &Lo,
@@ -600,6 +604,7 @@ private:
   SDValue WidenVecRes_INSERT_VECTOR_ELT(SDNode* N);
   SDValue WidenVecRes_LOAD(SDNode* N);
   SDValue WidenVecRes_SCALAR_TO_VECTOR(SDNode* N);
+  SDValue WidenVecRes_SIGN_EXTEND_INREG(SDNode* N);
   SDValue WidenVecRes_SELECT(SDNode* N);
   SDValue WidenVecRes_SELECT_CC(SDNode* N);
   SDValue WidenVecRes_UNDEF(SDNode *N);
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index ca19430..2625245 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -54,9 +54,6 @@ class VectorLegalizer {
   SDValue LegalizeOp(SDValue Op);
   // Assuming the node is legal, "legalize" the results
   SDValue TranslateLegalizeResults(SDValue Op, SDValue Result);
-  // Implements unrolling a generic vector operation, i.e. turning it into
-  // scalar operations.
-  SDValue UnrollVectorOp(SDValue Op);
   // Implements unrolling a VSETCC.
   SDValue UnrollVSETCC(SDValue Op);
   // Implements expansion for FNEG; falls back to UnrollVectorOp if FSUB
@@ -82,7 +79,7 @@ bool VectorLegalizer::Run() {
   // node is only legalized after all of its operands are legalized.
   DAG.AssignTopologicalOrder();
   for (SelectionDAG::allnodes_iterator I = DAG.allnodes_begin(),
-       E = prior(DAG.allnodes_end()); I != next(E); ++I)
+       E = prior(DAG.allnodes_end()); I != llvm::next(E); ++I)
     LegalizeOp(SDValue(I, 0));
 
   // Finally, it's possible the root changed.  Get the new root.
@@ -182,6 +179,7 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
   case ISD::FRINT:
   case ISD::FNEARBYINT:
   case ISD::FFLOOR:
+  case ISD::SIGN_EXTEND_INREG:
     QueryType = Node->getValueType(0);
     break;
   case ISD::SINT_TO_FP:
@@ -211,7 +209,7 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
     else if (Node->getOpcode() == ISD::VSETCC)
       Result = UnrollVSETCC(Op);
     else
-      Result = UnrollVectorOp(Op);
+      Result = DAG.UnrollVectorOp(Op.getNode());
     break;
   }
 
@@ -256,7 +254,7 @@ SDValue VectorLegalizer::ExpandFNEG(SDValue Op) {
     return DAG.getNode(ISD::FSUB, Op.getDebugLoc(), Op.getValueType(),
                        Zero, Op.getOperand(0));
   }
-  return UnrollVectorOp(Op);
+  return DAG.UnrollVectorOp(Op.getNode());
 }
 
 SDValue VectorLegalizer::UnrollVSETCC(SDValue Op) {
@@ -282,56 +280,6 @@ SDValue VectorLegalizer::UnrollVSETCC(SDValue Op) {
   return DAG.getNode(ISD::BUILD_VECTOR, dl, VT, &Ops[0], NumElems);
 }
 
-/// UnrollVectorOp - We know that the given vector has a legal type, however
-/// the operation it performs is not legal, and the target has requested that
-/// the operation be expanded.  "Unroll" the vector, splitting out the scalars
-/// and operating on each element individually.
-SDValue VectorLegalizer::UnrollVectorOp(SDValue Op) {
-  EVT VT = Op.getValueType();
-  assert(Op.getNode()->getNumValues() == 1 &&
-         "Can't unroll a vector with multiple results!");
-  unsigned NE = VT.getVectorNumElements();
-  EVT EltVT = VT.getVectorElementType();
-  DebugLoc dl = Op.getDebugLoc();
-
-  SmallVector<SDValue, 8> Scalars;
-  SmallVector<SDValue, 4> Operands(Op.getNumOperands());
-  for (unsigned i = 0; i != NE; ++i) {
-    for (unsigned j = 0; j != Op.getNumOperands(); ++j) {
-      SDValue Operand = Op.getOperand(j);
-      EVT OperandVT = Operand.getValueType();
-      if (OperandVT.isVector()) {
-        // A vector operand; extract a single element.
-        EVT OperandEltVT = OperandVT.getVectorElementType();
-        Operands[j] = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl,
-                                  OperandEltVT,
-                                  Operand,
-                                  DAG.getConstant(i, MVT::i32));
-      } else {
-        // A scalar operand; just use it as is.
-        Operands[j] = Operand;
-      }
-    }
-
-    switch (Op.getOpcode()) {
-    default:
-      Scalars.push_back(DAG.getNode(Op.getOpcode(), dl, EltVT,
-                                    &Operands[0], Operands.size()));
-      break;
-    case ISD::SHL:
-    case ISD::SRA:
-    case ISD::SRL:
-    case ISD::ROTL:
-    case ISD::ROTR:
-      Scalars.push_back(DAG.getNode(Op.getOpcode(), dl, EltVT, Operands[0],
-                                    DAG.getShiftAmountOperand(Operands[1])));
-      break;
-    }
-  }
-
-  return DAG.getNode(ISD::BUILD_VECTOR, dl, VT, &Scalars[0], Scalars.size());
-}
-
 }
 
 bool SelectionDAG::LegalizeVectors() {
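The hunks above move UnrollVectorOp from VectorLegalizer into SelectionDAG, but the technique is the same: extract each element, apply the scalar operation, and rebuild the vector. Stripped of the DAG machinery, it reduces to a scalar loop (plain C++ sketch; std::vector stands in for a machine vector type):

```cpp
#include <functional>
#include <vector>

// "Unroll" an element-wise binary vector op into scalar ops, the way
// SelectionDAG::UnrollVectorOp scalarizes an operation the target cannot
// perform on a whole vector at once.
static std::vector<int> UnrollBinaryOp(const std::vector<int> &A,
                                       const std::vector<int> &B,
                                       const std::function<int(int, int)> &Op) {
  std::vector<int> Result;
  Result.reserve(A.size());
  for (size_t i = 0; i != A.size(); ++i)  // one scalar op per element
    Result.push_back(Op(A[i], B[i]));     // EXTRACT_VECTOR_ELT + op
  return Result;                          // BUILD_VECTOR of the scalars
}
```

The real routine also special-cases shift opcodes so the per-element shift amount is legalized to the target's shift-amount type.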
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index 75e1239..cf67ab9 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -54,6 +54,7 @@ void DAGTypeLegalizer::ScalarizeVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::INSERT_VECTOR_ELT: R = ScalarizeVecRes_INSERT_VECTOR_ELT(N); break;
   case ISD::LOAD:           R = ScalarizeVecRes_LOAD(cast<LoadSDNode>(N));break;
   case ISD::SCALAR_TO_VECTOR:  R = ScalarizeVecRes_SCALAR_TO_VECTOR(N); break;
+  case ISD::SIGN_EXTEND_INREG: R = ScalarizeVecRes_SIGN_EXTEND_INREG(N); break;
   case ISD::SELECT:            R = ScalarizeVecRes_SELECT(N); break;
   case ISD::SELECT_CC:         R = ScalarizeVecRes_SELECT_CC(N); break;
   case ISD::SETCC:             R = ScalarizeVecRes_SETCC(N); break;
@@ -195,6 +196,13 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_SCALAR_TO_VECTOR(SDNode *N) {
   return InOp;
 }
 
+SDValue DAGTypeLegalizer::ScalarizeVecRes_SIGN_EXTEND_INREG(SDNode *N) {
+  EVT EltVT = N->getValueType(0).getVectorElementType();
+  SDValue LHS = GetScalarizedVector(N->getOperand(0));
+  return DAG.getNode(ISD::SIGN_EXTEND_INREG, N->getDebugLoc(), EltVT,
+                     LHS, N->getOperand(1));
+}
+
 SDValue DAGTypeLegalizer::ScalarizeVecRes_SELECT(SDNode *N) {
   SDValue LHS = GetScalarizedVector(N->getOperand(1));
   return DAG.getNode(ISD::SELECT, N->getDebugLoc(),
@@ -401,6 +409,7 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::FPOWI:             SplitVecRes_FPOWI(N, Lo, Hi); break;
   case ISD::INSERT_VECTOR_ELT: SplitVecRes_INSERT_VECTOR_ELT(N, Lo, Hi); break;
   case ISD::SCALAR_TO_VECTOR:  SplitVecRes_SCALAR_TO_VECTOR(N, Lo, Hi); break;
+  case ISD::SIGN_EXTEND_INREG: SplitVecRes_SIGN_EXTEND_INREG(N, Lo, Hi); break;
   case ISD::LOAD:
     SplitVecRes_LOAD(cast<LoadSDNode>(N), Lo, Hi);
     break;
@@ -700,6 +709,18 @@ void DAGTypeLegalizer::SplitVecRes_SCALAR_TO_VECTOR(SDNode *N, SDValue &Lo,
   Hi = DAG.getUNDEF(HiVT);
 }
 
+void DAGTypeLegalizer::SplitVecRes_SIGN_EXTEND_INREG(SDNode *N, SDValue &Lo,
+                                                     SDValue &Hi) {
+  SDValue LHSLo, LHSHi;
+  GetSplitVector(N->getOperand(0), LHSLo, LHSHi);
+  DebugLoc dl = N->getDebugLoc();
+
+  Lo = DAG.getNode(N->getOpcode(), dl, LHSLo.getValueType(), LHSLo,
+                   N->getOperand(1));
+  Hi = DAG.getNode(N->getOpcode(), dl, LHSHi.getValueType(), LHSHi,
+                   N->getOperand(1));
+}
+
 void DAGTypeLegalizer::SplitVecRes_LOAD(LoadSDNode *LD, SDValue &Lo,
                                         SDValue &Hi) {
   assert(ISD::isUNINDEXEDLoad(LD) && "Indexed load during type legalization!");
@@ -1118,8 +1139,12 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
   DEBUG(errs() << "Widen node result " << ResNo << ": ";
         N->dump(&DAG);
         errs() << "\n");
-  SDValue Res = SDValue();
 
+  // See if the target wants to custom widen this node.
+  if (CustomWidenLowerNode(N, N->getValueType(ResNo)))
+    return;
+
+  SDValue Res = SDValue();
   switch (N->getOpcode()) {
   default:
 #ifndef NDEBUG
@@ -1137,6 +1162,7 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::INSERT_VECTOR_ELT: Res = WidenVecRes_INSERT_VECTOR_ELT(N); break;
   case ISD::LOAD:              Res = WidenVecRes_LOAD(N); break;
   case ISD::SCALAR_TO_VECTOR:  Res = WidenVecRes_SCALAR_TO_VECTOR(N); break;
+  case ISD::SIGN_EXTEND_INREG: Res = WidenVecRes_SIGN_EXTEND_INREG(N); break;
   case ISD::SELECT:            Res = WidenVecRes_SELECT(N); break;
   case ISD::SELECT_CC:         Res = WidenVecRes_SELECT_CC(N); break;
   case ISD::UNDEF:             Res = WidenVecRes_UNDEF(N); break;
@@ -1687,6 +1713,13 @@ SDValue DAGTypeLegalizer::WidenVecRes_SCALAR_TO_VECTOR(SDNode *N) {
                      WidenVT, N->getOperand(0));
 }
 
+SDValue DAGTypeLegalizer::WidenVecRes_SIGN_EXTEND_INREG(SDNode *N) {
+  EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
+  SDValue WidenLHS = GetWidenedVector(N->getOperand(0));
+  return DAG.getNode(ISD::SIGN_EXTEND_INREG, N->getDebugLoc(),
+                     WidenVT, WidenLHS, N->getOperand(1));
+}
+
 SDValue DAGTypeLegalizer::WidenVecRes_SELECT(SDNode *N) {
   EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   unsigned WidenNumElts = WidenVT.getVectorNumElements();
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index 8f99957..abf36e5 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -27,6 +27,7 @@
 #include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetData.h"
+#include "llvm/Target/TargetFrameInfo.h"
 #include "llvm/Target/TargetLowering.h"
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/Target/TargetInstrInfo.h"
@@ -831,8 +832,12 @@ SDValue SelectionDAG::getZExtOrTrunc(SDValue Op, DebugLoc DL, EVT VT) {
 }
 
 SDValue SelectionDAG::getZeroExtendInReg(SDValue Op, DebugLoc DL, EVT VT) {
+  assert(!VT.isVector() &&
+         "getZeroExtendInReg should use the vector element type instead of "
+         "the vector type!");
   if (Op.getValueType() == VT) return Op;
-  APInt Imm = APInt::getLowBitsSet(Op.getValueSizeInBits(),
+  unsigned BitWidth = Op.getValueType().getScalarType().getSizeInBits();
+  APInt Imm = APInt::getLowBitsSet(BitWidth,
                                    VT.getSizeInBits());
   return getNode(ISD::AND, DL, Op.getValueType(), Op,
                  getConstant(Imm, Op.getValueType()));
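The getZeroExtendInReg hunk above now takes the mask width from the scalar type, so vectors are masked per element, and asserts that callers pass the element type rather than the vector type. The operation itself is just an AND with the low VT bits set, as this scalar sketch shows:

```cpp
#include <cstdint>

// Zero-extend-in-register: keep the low FromBits bits and clear the rest,
// i.e. AND with the mask APInt::getLowBitsSet(BitWidth, FromBits) builds.
static uint32_t ZeroExtendInReg(uint32_t x, unsigned FromBits) {
  uint32_t Mask = (FromBits >= 32) ? 0xFFFFFFFFu
                                   : ((1u << FromBits) - 1);  // low-bits mask
  return x & Mask;
}
```

For a vector value the same mask is applied to every lane, which is why the mask width must come from the element type, not the full vector width.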
@@ -1480,7 +1485,7 @@ bool SelectionDAG::SignBitIsZero(SDValue Op, unsigned Depth) const {
   if (Op.getValueType().isVector())
     return false;
 
-  unsigned BitWidth = Op.getValueSizeInBits();
+  unsigned BitWidth = Op.getValueType().getScalarType().getSizeInBits();
   return MaskedValueIsZero(Op, APInt::getSignBit(BitWidth), Depth);
 }
 
@@ -1503,7 +1508,7 @@ void SelectionDAG::ComputeMaskedBits(SDValue Op, const APInt &Mask,
                                      APInt &KnownZero, APInt &KnownOne,
                                      unsigned Depth) const {
   unsigned BitWidth = Mask.getBitWidth();
-  assert(BitWidth == Op.getValueType().getSizeInBits() &&
+  assert(BitWidth == Op.getValueType().getScalarType().getSizeInBits() &&
          "Mask size mismatches value type size!");
 
   KnownZero = KnownOne = APInt(BitWidth, 0);   // Don't know anything.
@@ -1760,7 +1765,7 @@ void SelectionDAG::ComputeMaskedBits(SDValue Op, const APInt &Mask,
   }
   case ISD::ZERO_EXTEND: {
     EVT InVT = Op.getOperand(0).getValueType();
-    unsigned InBits = InVT.getSizeInBits();
+    unsigned InBits = InVT.getScalarType().getSizeInBits();
     APInt NewBits   = APInt::getHighBitsSet(BitWidth, BitWidth - InBits) & Mask;
     APInt InMask    = Mask;
     InMask.trunc(InBits);
@@ -1774,7 +1779,7 @@ void SelectionDAG::ComputeMaskedBits(SDValue Op, const APInt &Mask,
   }
   case ISD::SIGN_EXTEND: {
     EVT InVT = Op.getOperand(0).getValueType();
-    unsigned InBits = InVT.getSizeInBits();
+    unsigned InBits = InVT.getScalarType().getSizeInBits();
     APInt InSignBit = APInt::getSignBit(InBits);
     APInt NewBits   = APInt::getHighBitsSet(BitWidth, BitWidth - InBits) & Mask;
     APInt InMask = Mask;
@@ -1815,7 +1820,7 @@ void SelectionDAG::ComputeMaskedBits(SDValue Op, const APInt &Mask,
   }
   case ISD::ANY_EXTEND: {
     EVT InVT = Op.getOperand(0).getValueType();
-    unsigned InBits = InVT.getSizeInBits();
+    unsigned InBits = InVT.getScalarType().getSizeInBits();
     APInt InMask = Mask;
     InMask.trunc(InBits);
     KnownZero.trunc(InBits);
@@ -1827,7 +1832,7 @@ void SelectionDAG::ComputeMaskedBits(SDValue Op, const APInt &Mask,
   }
   case ISD::TRUNCATE: {
     EVT InVT = Op.getOperand(0).getValueType();
-    unsigned InBits = InVT.getSizeInBits();
+    unsigned InBits = InVT.getScalarType().getSizeInBits();
     APInt InMask = Mask;
     InMask.zext(InBits);
     KnownZero.zext(InBits);
@@ -1960,7 +1965,7 @@ void SelectionDAG::ComputeMaskedBits(SDValue Op, const APInt &Mask,
 unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, unsigned Depth) const{
   EVT VT = Op.getValueType();
   assert(VT.isInteger() && "Invalid VT!");
-  unsigned VTBits = VT.getSizeInBits();
+  unsigned VTBits = VT.getScalarType().getSizeInBits();
   unsigned Tmp, Tmp2;
   unsigned FirstAnswer = 1;
 
@@ -1987,7 +1992,7 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, unsigned Depth) const{
   }
 
   case ISD::SIGN_EXTEND:
-    Tmp = VTBits-Op.getOperand(0).getValueType().getSizeInBits();
+    Tmp = VTBits-Op.getOperand(0).getValueType().getScalarType().getSizeInBits();
     return ComputeNumSignBits(Op.getOperand(0), Depth+1) + Tmp;
 
   case ISD::SIGN_EXTEND_INREG:
@@ -2623,6 +2628,9 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     assert(VT == N1.getValueType() && "Not an inreg extend!");
     assert(VT.isInteger() && EVT.isInteger() &&
            "Cannot *_EXTEND_INREG FP types");
+    assert(!EVT.isVector() &&
+           "AssertSExt/AssertZExt type should be the vector element type "
+           "rather than the vector type!");
     assert(EVT.bitsLE(VT) && "Not extending!");
     if (VT == EVT) return N1; // noop assertion.
     break;
@@ -2632,12 +2640,15 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     assert(VT == N1.getValueType() && "Not an inreg extend!");
     assert(VT.isInteger() && EVT.isInteger() &&
            "Cannot *_EXTEND_INREG FP types");
-    assert(EVT.bitsLE(VT) && "Not extending!");
+    assert(!EVT.isVector() &&
+           "SIGN_EXTEND_INREG type should be the vector element type rather "
+           "than the vector type!");
+    assert(EVT.bitsLE(VT.getScalarType()) && "Not extending!");
     if (EVT == VT) return N1;  // Not actually extending
 
     if (N1C) {
       APInt Val = N1C->getAPIntValue();
-      unsigned FromBits = cast<VTSDNode>(N2)->getVT().getSizeInBits();
+      unsigned FromBits = EVT.getSizeInBits();
       Val <<= Val.getBitWidth()-FromBits;
       Val = Val.ashr(Val.getBitWidth()-FromBits);
       return getConstant(Val, VT);
@@ -5807,6 +5818,159 @@ static void DumpNodes(const SDNode *N, unsigned indent, const SelectionDAG *G) {
   N->dump(G);
 }
 
+SDValue SelectionDAG::UnrollVectorOp(SDNode *N, unsigned ResNE) {
+  assert(N->getNumValues() == 1 &&
+         "Can't unroll a vector with multiple results!");
+
+  EVT VT = N->getValueType(0);
+  unsigned NE = VT.getVectorNumElements();
+  EVT EltVT = VT.getVectorElementType();
+  DebugLoc dl = N->getDebugLoc();
+
+  SmallVector<SDValue, 8> Scalars;
+  SmallVector<SDValue, 4> Operands(N->getNumOperands());
+
+  // If ResNE is 0, fully unroll the vector op.
+  if (ResNE == 0)
+    ResNE = NE;
+  else if (NE > ResNE)
+    NE = ResNE;
+
+  unsigned i;
+  for (i = 0; i != NE; ++i) {
+    for (unsigned j = 0; j != N->getNumOperands(); ++j) {
+      SDValue Operand = N->getOperand(j);
+      EVT OperandVT = Operand.getValueType();
+      if (OperandVT.isVector()) {
+        // A vector operand; extract a single element.
+        EVT OperandEltVT = OperandVT.getVectorElementType();
+        Operands[j] = getNode(ISD::EXTRACT_VECTOR_ELT, dl,
+                              OperandEltVT,
+                              Operand,
+                              getConstant(i, MVT::i32));
+      } else {
+        // A scalar operand; just use it as is.
+        Operands[j] = Operand;
+      }
+    }
+
+    switch (N->getOpcode()) {
+    default:
+      Scalars.push_back(getNode(N->getOpcode(), dl, EltVT,
+                                &Operands[0], Operands.size()));
+      break;
+    case ISD::SHL:
+    case ISD::SRA:
+    case ISD::SRL:
+    case ISD::ROTL:
+    case ISD::ROTR:
+      Scalars.push_back(getNode(N->getOpcode(), dl, EltVT, Operands[0],
+                                getShiftAmountOperand(Operands[1])));
+      break;
+    }
+  }
+
+  for (; i < ResNE; ++i)
+    Scalars.push_back(getUNDEF(EltVT));
+
+  return getNode(ISD::BUILD_VECTOR, dl,
+                 EVT::getVectorVT(*getContext(), EltVT, ResNE),
+                 &Scalars[0], Scalars.size());
+}
+
+
+/// isConsecutiveLoad - Return true if LD is loading 'Bytes' bytes from a 
+/// location that is 'Dist' units away from the location that the 'Base' load 
+/// is loading from.
+bool SelectionDAG::isConsecutiveLoad(LoadSDNode *LD, LoadSDNode *Base, 
+                                     unsigned Bytes, int Dist) const {
+  if (LD->getChain() != Base->getChain())
+    return false;
+  EVT VT = LD->getValueType(0);
+  if (VT.getSizeInBits() / 8 != Bytes)
+    return false;
+
+  SDValue Loc = LD->getOperand(1);
+  SDValue BaseLoc = Base->getOperand(1);
+  if (Loc.getOpcode() == ISD::FrameIndex) {
+    if (BaseLoc.getOpcode() != ISD::FrameIndex)
+      return false;
+    const MachineFrameInfo *MFI = getMachineFunction().getFrameInfo();
+    int FI  = cast<FrameIndexSDNode>(Loc)->getIndex();
+    int BFI = cast<FrameIndexSDNode>(BaseLoc)->getIndex();
+    int FS  = MFI->getObjectSize(FI);
+    int BFS = MFI->getObjectSize(BFI);
+    if (FS != BFS || FS != (int)Bytes) return false;
+    return MFI->getObjectOffset(FI) == (MFI->getObjectOffset(BFI) + Dist*Bytes);
+  }
+  if (Loc.getOpcode() == ISD::ADD && Loc.getOperand(0) == BaseLoc) {
+    ConstantSDNode *V = dyn_cast<ConstantSDNode>(Loc.getOperand(1));
+    if (V && (V->getSExtValue() == Dist*Bytes))
+      return true;
+  }
+
+  GlobalValue *GV1 = NULL;
+  GlobalValue *GV2 = NULL;
+  int64_t Offset1 = 0;
+  int64_t Offset2 = 0;
+  bool isGA1 = TLI.isGAPlusOffset(Loc.getNode(), GV1, Offset1);
+  bool isGA2 = TLI.isGAPlusOffset(BaseLoc.getNode(), GV2, Offset2);
+  if (isGA1 && isGA2 && GV1 == GV2)
+    return Offset1 == (Offset2 + Dist*Bytes);
+  return false;
+}
+
+
+/// InferPtrAlignment - Infer alignment of a load / store address. Return 0 if
+/// it cannot be inferred.
+unsigned SelectionDAG::InferPtrAlignment(SDValue Ptr) const {
+  // If this is a GlobalAddress + cst, return the alignment.
+  GlobalValue *GV;
+  int64_t GVOffset = 0;
+  if (TLI.isGAPlusOffset(Ptr.getNode(), GV, GVOffset))
+    return MinAlign(GV->getAlignment(), GVOffset);
+
+  // If this is a direct reference to a stack slot, use information about the
+  // stack slot's alignment.
+  int FrameIdx = 1 << 31;
+  int64_t FrameOffset = 0;
+  if (FrameIndexSDNode *FI = dyn_cast<FrameIndexSDNode>(Ptr)) {
+    FrameIdx = FI->getIndex();
+  } else if (Ptr.getOpcode() == ISD::ADD &&
+             isa<ConstantSDNode>(Ptr.getOperand(1)) &&
+             isa<FrameIndexSDNode>(Ptr.getOperand(0))) {
+    FrameIdx = cast<FrameIndexSDNode>(Ptr.getOperand(0))->getIndex();
+    FrameOffset = Ptr.getConstantOperandVal(1);
+  }
+
+  if (FrameIdx != (1 << 31)) {
+    // FIXME: Handle FI+CST.
+    const MachineFrameInfo &MFI = *getMachineFunction().getFrameInfo();
+    unsigned FIInfoAlign = MinAlign(MFI.getObjectAlignment(FrameIdx),
+                                    FrameOffset);
+    if (MFI.isFixedObjectIndex(FrameIdx)) {
+      int64_t ObjectOffset = MFI.getObjectOffset(FrameIdx) + FrameOffset;
+
+      // The alignment of the frame index can be determined from its offset from
+      // the incoming frame position.  If the frame object is at offset 32 and
+      // the stack is guaranteed to be 16-byte aligned, then we know that the
+      // object is 16-byte aligned.
+      unsigned StackAlign = getTarget().getFrameInfo()->getStackAlignment();
+      unsigned Align = MinAlign(ObjectOffset, StackAlign);
+
+      // Finally, the frame object itself may have a known alignment.  Factor
+      // the alignment + offset into a new alignment.  For example, if we know
+      // the FI is 8 byte aligned, but the pointer is 4 off, we really have a
+      // 4-byte alignment of the resultant pointer.  Likewise align 4 + 4-byte
+      // offset = 4-byte alignment, align 4 + 1-byte offset = align 1, etc.
+      return std::max(Align, FIInfoAlign);
+    }
+    return FIInfoAlign;
+  }
+
+  return 0;
+}
+
 void SelectionDAG::dump() const {
   errs() << "SelectionDAG has " << AllNodes.size() << " nodes:";
 
@@ -5962,3 +6126,4 @@ bool ShuffleVectorSDNode::isSplatMask(const int *Mask, EVT VT) {
       return false;
   return true;
 }
+
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 57d8903..2a8b57c 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -2108,7 +2108,7 @@ void SelectionDAGBuilder::visitSelect(User &I) {
 
     for (unsigned i = 0; i != NumValues; ++i)
       Values[i] = DAG.getNode(ISD::SELECT, getCurDebugLoc(),
-                              TrueVal.getValueType(), Cond,
+                              TrueVal.getNode()->getValueType(i), Cond,
                               SDValue(TrueVal.getNode(), TrueVal.getResNo() + i),
                               SDValue(FalseVal.getNode(), FalseVal.getResNo() + i));
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
index c39437f..93b56e1 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
@@ -60,8 +60,6 @@
 using namespace llvm;
 
 static cl::opt<bool>
-DisableLegalizeTypes("disable-legalize-types", cl::Hidden);
-static cl::opt<bool>
 EnableFastISelVerbose("fast-isel-verbose", cl::Hidden,
           cl::desc("Enable verbose messages in the \"fast\" "
                    "instruction selector"));
@@ -362,6 +360,39 @@ bool SelectionDAGISel::runOnMachineFunction(MachineFunction &mf) {
   return true;
 }
 
+/// SetDebugLoc - Update MF's and SDB's DebugLocs if debug information is
+/// attached to this instruction.
+static void SetDebugLoc(unsigned MDDbgKind,
+                        MetadataContext &TheMetadata,
+                        Instruction *I,
+                        SelectionDAGBuilder *SDB,
+                        FastISel *FastIS,
+                        MachineFunction *MF) {
+  if (!isa<DbgInfoIntrinsic>(I)) 
+    if (MDNode *Dbg = TheMetadata.getMD(MDDbgKind, I)) {
+      DILocation DILoc(Dbg);
+      DebugLoc Loc = ExtractDebugLocation(DILoc, MF->getDebugLocInfo());
+
+      SDB->setCurDebugLoc(Loc);
+
+      if (FastIS)
+        FastIS->setCurDebugLoc(Loc);
+
+      // If the function doesn't have a default debug location yet, set
+      // it. This is kind of a hack.
+      if (MF->getDefaultDebugLoc().isUnknown())
+        MF->setDefaultDebugLoc(Loc);
+    }
+}
+
+/// ResetDebugLoc - Set SDB's and FastIS's DebugLocs to Unknown.
+static void ResetDebugLoc(SelectionDAGBuilder *SDB,
+                          FastISel *FastIS) {
+  SDB->setCurDebugLoc(DebugLoc::getUnknownLoc());
+  if (FastIS)
+    FastIS->setCurDebugLoc(DebugLoc::getUnknownLoc());
+}
+
 void SelectionDAGISel::SelectBasicBlock(BasicBlock *LLVMBB,
                                         BasicBlock::iterator Begin,
                                         BasicBlock::iterator End,
@@ -373,20 +404,16 @@ void SelectionDAGISel::SelectBasicBlock(BasicBlock *LLVMBB,
   // Lower all of the non-terminator instructions. If a call is emitted
   // as a tail call, cease emitting nodes for this block.
   for (BasicBlock::iterator I = Begin; I != End && !SDB->HasTailCall; ++I) {
-    if (MDDbgKind) {
-      // Update DebugLoc if debug information is attached with this
-      // instruction.
-      if (!isa<DbgInfoIntrinsic>(I)) 
-        if (MDNode *Dbg = TheMetadata.getMD(MDDbgKind, I)) {
-          DILocation DILoc(Dbg);
-          DebugLoc Loc = ExtractDebugLocation(DILoc, MF->getDebugLocInfo());
-          SDB->setCurDebugLoc(Loc);
-          if (MF->getDefaultDebugLoc().isUnknown())
-            MF->setDefaultDebugLoc(Loc);
-        }
-    }
-    if (!isa<TerminatorInst>(I))
+    if (MDDbgKind)
+      SetDebugLoc(MDDbgKind, TheMetadata, I, SDB, 0, MF);
+
+    if (!isa<TerminatorInst>(I)) {
       SDB->visit(*I);
+
+      // Set the current debug location back to "unknown" so that it doesn't
+      // spuriously apply to subsequent instructions.
+      ResetDebugLoc(SDB, 0);
+    }
   }
 
   if (!SDB->HasTailCall) {
@@ -401,7 +428,9 @@ void SelectionDAGISel::SelectBasicBlock(BasicBlock *LLVMBB,
       HandlePHINodesInSuccessorBlocks(LLVMBB);
 
       // Lower the terminator after the copies are emitted.
+      SetDebugLoc(MDDbgKind, TheMetadata, LLVMBB->getTerminator(), SDB, 0, MF);
       SDB->visit(*LLVMBB->getTerminator());
+      ResetDebugLoc(SDB, 0);
     }
   }
 
@@ -498,75 +527,73 @@ void SelectionDAGISel::CodeGenAndEmitDAG() {
 
   // Second step, hack on the DAG until it only uses operations and types that
   // the target supports.
-  if (!DisableLegalizeTypes) {
-    if (ViewLegalizeTypesDAGs) CurDAG->viewGraph("legalize-types input for " +
-                                                 BlockName);
+  if (ViewLegalizeTypesDAGs) CurDAG->viewGraph("legalize-types input for " +
+                                               BlockName);
+
+  bool Changed;
+  if (TimePassesIsEnabled) {
+    NamedRegionTimer T("Type Legalization", GroupName);
+    Changed = CurDAG->LegalizeTypes();
+  } else {
+    Changed = CurDAG->LegalizeTypes();
+  }
+
+  DEBUG(errs() << "Type-legalized selection DAG:\n");
+  DEBUG(CurDAG->dump());
 
-    bool Changed;
+  if (Changed) {
+    if (ViewDAGCombineLT)
+      CurDAG->viewGraph("dag-combine-lt input for " + BlockName);
+
+    // Run the DAG combiner in post-type-legalize mode.
     if (TimePassesIsEnabled) {
-      NamedRegionTimer T("Type Legalization", GroupName);
-      Changed = CurDAG->LegalizeTypes();
+      NamedRegionTimer T("DAG Combining after legalize types", GroupName);
+      CurDAG->Combine(NoIllegalTypes, *AA, OptLevel);
     } else {
-      Changed = CurDAG->LegalizeTypes();
+      CurDAG->Combine(NoIllegalTypes, *AA, OptLevel);
     }
 
-    DEBUG(errs() << "Type-legalized selection DAG:\n");
+    DEBUG(errs() << "Optimized type-legalized selection DAG:\n");
     DEBUG(CurDAG->dump());
+  }
 
-    if (Changed) {
-      if (ViewDAGCombineLT)
-        CurDAG->viewGraph("dag-combine-lt input for " + BlockName);
-
-      // Run the DAG combiner in post-type-legalize mode.
-      if (TimePassesIsEnabled) {
-        NamedRegionTimer T("DAG Combining after legalize types", GroupName);
-        CurDAG->Combine(NoIllegalTypes, *AA, OptLevel);
-      } else {
-        CurDAG->Combine(NoIllegalTypes, *AA, OptLevel);
-      }
-
-      DEBUG(errs() << "Optimized type-legalized selection DAG:\n");
-      DEBUG(CurDAG->dump());
-    }
+  if (TimePassesIsEnabled) {
+    NamedRegionTimer T("Vector Legalization", GroupName);
+    Changed = CurDAG->LegalizeVectors();
+  } else {
+    Changed = CurDAG->LegalizeVectors();
+  }
 
+  if (Changed) {
     if (TimePassesIsEnabled) {
-      NamedRegionTimer T("Vector Legalization", GroupName);
-      Changed = CurDAG->LegalizeVectors();
+      NamedRegionTimer T("Type Legalization 2", GroupName);
+      Changed = CurDAG->LegalizeTypes();
     } else {
-      Changed = CurDAG->LegalizeVectors();
+      Changed = CurDAG->LegalizeTypes();
     }
 
-    if (Changed) {
-      if (TimePassesIsEnabled) {
-        NamedRegionTimer T("Type Legalization 2", GroupName);
-        Changed = CurDAG->LegalizeTypes();
-      } else {
-        Changed = CurDAG->LegalizeTypes();
-      }
-
-      if (ViewDAGCombineLT)
-        CurDAG->viewGraph("dag-combine-lv input for " + BlockName);
+    if (ViewDAGCombineLT)
+      CurDAG->viewGraph("dag-combine-lv input for " + BlockName);
 
-      // Run the DAG combiner in post-type-legalize mode.
-      if (TimePassesIsEnabled) {
-        NamedRegionTimer T("DAG Combining after legalize vectors", GroupName);
-        CurDAG->Combine(NoIllegalOperations, *AA, OptLevel);
-      } else {
-        CurDAG->Combine(NoIllegalOperations, *AA, OptLevel);
-      }
-
-      DEBUG(errs() << "Optimized vector-legalized selection DAG:\n");
-      DEBUG(CurDAG->dump());
+    // Run the DAG combiner in post-type-legalize mode.
+    if (TimePassesIsEnabled) {
+      NamedRegionTimer T("DAG Combining after legalize vectors", GroupName);
+      CurDAG->Combine(NoIllegalOperations, *AA, OptLevel);
+    } else {
+      CurDAG->Combine(NoIllegalOperations, *AA, OptLevel);
     }
+
+    DEBUG(errs() << "Optimized vector-legalized selection DAG:\n");
+    DEBUG(CurDAG->dump());
   }
 
   if (ViewLegalizeDAGs) CurDAG->viewGraph("legalize input for " + BlockName);
 
   if (TimePassesIsEnabled) {
     NamedRegionTimer T("DAG Legalization", GroupName);
-    CurDAG->Legalize(DisableLegalizeTypes, OptLevel);
+    CurDAG->Legalize(OptLevel);
   } else {
-    CurDAG->Legalize(DisableLegalizeTypes, OptLevel);
+    CurDAG->Legalize(OptLevel);
   }
 
   DEBUG(errs() << "Legalized selection DAG:\n");
@@ -738,24 +765,11 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
       FastIS->startNewBlock(BB);
       // Do FastISel on as many instructions as possible.
       for (; BI != End; ++BI) {
-        if (MDDbgKind) {
-          // Update DebugLoc if debug information is attached with this
-          // instruction.
-          if (!isa<DbgInfoIntrinsic>(BI)) 
-            if (MDNode *Dbg = TheMetadata.getMD(MDDbgKind, BI)) {
-              DILocation DILoc(Dbg);
-              DebugLoc Loc = ExtractDebugLocation(DILoc,
-                                                  MF.getDebugLocInfo());
-              FastIS->setCurDebugLoc(Loc);
-              if (MF.getDefaultDebugLoc().isUnknown())
-                MF.setDefaultDebugLoc(Loc);
-            }
-        }
-
         // Just before the terminator instruction, insert instructions to
         // feed PHI nodes in successor blocks.
         if (isa<TerminatorInst>(BI))
           if (!HandlePHINodesInSuccessorBlocksFast(LLVMBB, FastIS)) {
+            ResetDebugLoc(SDB, FastIS);
             if (EnableFastISelVerbose || EnableFastISelAbort) {
               errs() << "FastISel miss: ";
               BI->dump();
@@ -765,13 +779,18 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
             break;
           }
 
+        if (MDDbgKind)
+          SetDebugLoc(MDDbgKind, TheMetadata, BI, SDB, FastIS, &MF);
+
         // First try normal tablegen-generated "fast" selection.
-        if (FastIS->SelectInstruction(BI))
+        if (FastIS->SelectInstruction(BI)) {
+          ResetDebugLoc(SDB, FastIS);
           continue;
+        }
 
-        // Next, try calling the target to attempt to handle the instruction.
-        if (FastIS->TargetSelectInstruction(BI))
-          continue;
+        // Clear out the debug location so that it doesn't carry over to
+        // unrelated instructions.
+        ResetDebugLoc(SDB, FastIS);
 
         // Then handle certain instructions as single-LLVM-Instruction blocks.
         if (isa<CallInst>(BI)) {
@@ -786,10 +805,8 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
               R = FuncInfo->CreateRegForValue(BI);
           }
 
-          SDB->setCurDebugLoc(FastIS->getCurDebugLoc());
-
           bool HadTailCall = false;
-          SelectBasicBlock(LLVMBB, BI, next(BI), HadTailCall);
+          SelectBasicBlock(LLVMBB, BI, llvm::next(BI), HadTailCall);
 
           // If the call was emitted as a tail call, we're done with the block.
           if (HadTailCall) {
@@ -823,9 +840,6 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
     // not handled by FastISel. If FastISel is not run, this is the entire
     // block.
     if (BI != End) {
-      // If FastISel is run and it has known DebugLoc then use it.
-      if (FastIS && !FastIS->getCurDebugLoc().isUnknown())
-        SDB->setCurDebugLoc(FastIS->getCurDebugLoc());
       bool HadTailCall;
       SelectBasicBlock(LLVMBB, BI, End, HadTailCall);
     }
@@ -1313,14 +1327,6 @@ SDNode *SelectionDAGISel::Select_UNDEF(const SDValue &N) {
                               N.getValueType());
 }
 
-SDNode *SelectionDAGISel::Select_DBG_LABEL(const SDValue &N) {
-  SDValue Chain = N.getOperand(0);
-  unsigned C = cast<LabelSDNode>(N)->getLabelID();
-  SDValue Tmp = CurDAG->getTargetConstant(C, MVT::i32);
-  return CurDAG->SelectNodeTo(N.getNode(), TargetInstrInfo::DBG_LABEL,
-                              MVT::Other, Tmp, Chain);
-}
-
 SDNode *SelectionDAGISel::Select_EH_LABEL(const SDValue &N) {
   SDValue Chain = N.getOperand(0);
   unsigned C = cast<LabelSDNode>(N)->getLabelID();
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGPrinter.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGPrinter.cpp
index ccc5e3c..83fa5a8 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGPrinter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGPrinter.cpp
@@ -29,12 +29,15 @@
 #include "llvm/ADT/DenseSet.h"
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/Config/config.h"
-#include <fstream>
 using namespace llvm;
 
 namespace llvm {
   template<>
   struct DOTGraphTraits<SelectionDAG*> : public DefaultDOTGraphTraits {
+
+    explicit DOTGraphTraits(bool isSimple=false) :
+      DefaultDOTGraphTraits(isSimple) {}
+
     static bool hasEdgeDestLabels() {
       return true;
     }
@@ -47,9 +50,14 @@ namespace llvm {
       return ((const SDNode *) Node)->getValueType(i).getEVTString();
     }
 
+    template<typename EdgeIter>
+    static std::string getEdgeSourceLabel(const void *Node, EdgeIter I) {
+      return itostr(I - SDNodeIterator::begin((SDNode *) Node));
+    }
+
     /// edgeTargetsEdgeSource - This method returns true if this outgoing edge
-    /// should actually target another edge source, not a node.  If this method is
-    /// implemented, getEdgeTarget should be implemented.
+    /// should actually target another edge source, not a node.  If this method
+    /// is implemented, getEdgeTarget should be implemented.
     template<typename EdgeIter>
     static bool edgeTargetsEdgeSource(const void *Node, EdgeIter I) {
       return true;
@@ -73,12 +81,12 @@ namespace llvm {
     static bool renderGraphFromBottomUp() {
       return true;
     }
-    
+
     static bool hasNodeAddressLabel(const SDNode *Node,
                                     const SelectionDAG *Graph) {
       return true;
     }
-    
+
     /// If you want to override the dot attributes printed for a particular
     /// edge, override this method.
     template<typename EdgeIter>
@@ -91,11 +99,18 @@ namespace llvm {
         return "color=blue,style=dashed";
       return "";
     }
-    
 
-    static std::string getNodeLabel(const SDNode *Node,
-                                    const SelectionDAG *Graph,
-                                    bool ShortNames);
+
+    static std::string getSimpleNodeLabel(const SDNode *Node,
+                                          const SelectionDAG *G) {
+      std::string Result = Node->getOperationName(G);
+      {
+        raw_string_ostream OS(Result);
+        Node->print_details(OS, G);
+      }
+      return Result;
+    }
+    std::string getNodeLabel(const SDNode *Node, const SelectionDAG *Graph);
     static std::string getNodeAttributes(const SDNode *N,
                                          const SelectionDAG *Graph) {
 #ifndef NDEBUG
@@ -121,14 +136,8 @@ namespace llvm {
 }
 
 std::string DOTGraphTraits<SelectionDAG*>::getNodeLabel(const SDNode *Node,
-                                                        const SelectionDAG *G,
-                                                        bool ShortNames) {
-  std::string Result = Node->getOperationName(G);
-  {
-    raw_string_ostream OS(Result);
-    Node->print_details(OS, G);
-  }
-  return Result;
+                                                        const SelectionDAG *G) {
+  return DOTGraphTraits<SelectionDAG*>::getSimpleNodeLabel(Node, G);
 }
 
 
@@ -138,7 +147,7 @@ std::string DOTGraphTraits<SelectionDAG*>::getNodeLabel(const SDNode *Node,
 void SelectionDAG::viewGraph(const std::string &Title) {
 // This code is only for debugging!
 #ifndef NDEBUG
-  ViewGraph(this, "dag." + getMachineFunction().getFunction()->getNameStr(), 
+  ViewGraph(this, "dag." + getMachineFunction().getFunction()->getNameStr(),
             false, Title);
 #else
   errs() << "SelectionDAG::viewGraph is only available in debug builds on "
@@ -182,7 +191,7 @@ const std::string SelectionDAG::getGraphAttrs(const SDNode *N) const {
 #ifndef NDEBUG
   std::map<const SDNode *, std::string>::const_iterator I =
     NodeGraphAttrs.find(N);
-    
+
   if (I != NodeGraphAttrs.end())
     return I->second;
   else
@@ -248,8 +257,7 @@ void SelectionDAG::setSubgraphColor(SDNode *N, const char *Color) {
     // Visually mark that we hit the limit
     if (strcmp(Color, "red") == 0) {
       setSubgraphColorHelper(N, "blue", visited, 0, printed);
-    }
-    else if (strcmp(Color, "yellow") == 0) {
+    } else if (strcmp(Color, "yellow") == 0) {
       setSubgraphColorHelper(N, "green", visited, 0, printed);
     }
   }
@@ -269,8 +277,8 @@ std::string ScheduleDAGSDNodes::getGraphNodeLabel(const SUnit *SU) const {
     for (SDNode *N = SU->getNode(); N; N = N->getFlaggedNode())
       FlaggedNodes.push_back(N);
     while (!FlaggedNodes.empty()) {
-      O << DOTGraphTraits<SelectionDAG*>::getNodeLabel(FlaggedNodes.back(),
-                                                       DAG, false);
+      O << DOTGraphTraits<SelectionDAG*>
+	     ::getSimpleNodeLabel(FlaggedNodes.back(), DAG);
       FlaggedNodes.pop_back();
       if (!FlaggedNodes.empty())
         O << "\n    ";
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index 68bc2d6..1026169 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -911,7 +911,7 @@ bool TargetLowering::SimplifyDemandedBits(SDValue Op,
                                           TargetLoweringOpt &TLO,
                                           unsigned Depth) const {
   unsigned BitWidth = DemandedMask.getBitWidth();
-  assert(Op.getValueSizeInBits() == BitWidth &&
+  assert(Op.getValueType().getScalarType().getSizeInBits() == BitWidth &&
          "Mask size mismatches value type size!");
   APInt NewMask = DemandedMask;
   DebugLoc dl = Op.getDebugLoc();
@@ -1240,7 +1240,7 @@ bool TargetLowering::SimplifyDemandedBits(SDValue Op,
       // demand the input sign bit.
       APInt HighBits = APInt::getHighBitsSet(BitWidth, ShAmt);
       if (HighBits.intersects(NewMask))
-        InDemandedMask |= APInt::getSignBit(VT.getSizeInBits());
+        InDemandedMask |= APInt::getSignBit(VT.getScalarType().getSizeInBits());
       
       if (SimplifyDemandedBits(Op.getOperand(0), InDemandedMask,
                                KnownZero, KnownOne, TLO, Depth+1))
@@ -2184,48 +2184,6 @@ bool TargetLowering::isGAPlusOffset(SDNode *N, GlobalValue* &GA,
 }
 
 
-/// isConsecutiveLoad - Return true if LD is loading 'Bytes' bytes from a 
-/// location that is 'Dist' units away from the location that the 'Base' load 
-/// is loading from.
-bool TargetLowering::isConsecutiveLoad(LoadSDNode *LD, LoadSDNode *Base, 
-                                       unsigned Bytes, int Dist, 
-                                       const MachineFrameInfo *MFI) const {
-  if (LD->getChain() != Base->getChain())
-    return false;
-  EVT VT = LD->getValueType(0);
-  if (VT.getSizeInBits() / 8 != Bytes)
-    return false;
-
-  SDValue Loc = LD->getOperand(1);
-  SDValue BaseLoc = Base->getOperand(1);
-  if (Loc.getOpcode() == ISD::FrameIndex) {
-    if (BaseLoc.getOpcode() != ISD::FrameIndex)
-      return false;
-    int FI  = cast<FrameIndexSDNode>(Loc)->getIndex();
-    int BFI = cast<FrameIndexSDNode>(BaseLoc)->getIndex();
-    int FS  = MFI->getObjectSize(FI);
-    int BFS = MFI->getObjectSize(BFI);
-    if (FS != BFS || FS != (int)Bytes) return false;
-    return MFI->getObjectOffset(FI) == (MFI->getObjectOffset(BFI) + Dist*Bytes);
-  }
-  if (Loc.getOpcode() == ISD::ADD && Loc.getOperand(0) == BaseLoc) {
-    ConstantSDNode *V = dyn_cast<ConstantSDNode>(Loc.getOperand(1));
-    if (V && (V->getSExtValue() == Dist*Bytes))
-      return true;
-  }
-
-  GlobalValue *GV1 = NULL;
-  GlobalValue *GV2 = NULL;
-  int64_t Offset1 = 0;
-  int64_t Offset2 = 0;
-  bool isGA1 = isGAPlusOffset(Loc.getNode(), GV1, Offset1);
-  bool isGA2 = isGAPlusOffset(BaseLoc.getNode(), GV2, Offset2);
-  if (isGA1 && isGA2 && GV1 == GV2)
-    return Offset1 == (Offset2 + Dist*Bytes);
-  return false;
-}
-
-
 SDValue TargetLowering::
 PerformDAGCombine(SDNode *N, DAGCombinerInfo &DCI) const {
   // Default implementation: no optimization.
diff --git a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
index 7847f8e..810fabe 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
@@ -130,7 +130,8 @@ bool SimpleRegisterCoalescing::AdjustCopiesBackFrom(LiveInterval &IntA,
   // See PR3149:
   // 172     %ECX<def> = MOV32rr %reg1039<kill>
   // 180     INLINEASM <es:subl $5,$1
-  //         sbbl $3,$0>, 10, %EAX<def>, 14, %ECX<earlyclobber,def>, 9, %EAX<kill>,
+  //         sbbl $3,$0>, 10, %EAX<def>, 14, %ECX<earlyclobber,def>, 9,
+  //         %EAX<kill>,
   // 36, <fi#0>, 1, %reg0, 0, 9, %ECX<kill>, 36, <fi#1>, 1, %reg0, 0
   // 188     %EAX<def> = MOV32rr %EAX<kill>
   // 196     %ECX<def> = MOV32rr %ECX<kill>
@@ -281,12 +282,12 @@ TransferImplicitOps(MachineInstr *MI, MachineInstr *NewMI) {
   }
 }
 
-/// RemoveCopyByCommutingDef - We found a non-trivially-coalescable copy with IntA
-/// being the source and IntB being the dest, thus this defines a value number
-/// in IntB.  If the source value number (in IntA) is defined by a commutable
-/// instruction and its other operand is coalesced to the copy dest register,
-/// see if we can transform the copy into a noop by commuting the definition. For
-/// example,
+/// RemoveCopyByCommutingDef - We found a non-trivially-coalescable copy with
+/// IntA being the source and IntB being the dest, thus this defines a value
+/// number in IntB.  If the source value number (in IntA) is defined by a
+/// commutable instruction and its other operand is coalesced to the copy dest
+/// register, see if we can transform the copy into a noop by commuting the
+/// definition. For example,
 ///
 ///  A3 = op A2 B0<kill>
 ///    ...
@@ -508,7 +509,8 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
     if (BHasSubRegs) {
       for (const unsigned *SR = tri_->getSubRegisters(IntB.reg); *SR; ++SR) {
         LiveInterval &SRLI = li_->getInterval(*SR);
-        SRLI.MergeInClobberRange(*li_, AI->start, End, li_->getVNInfoAllocator());
+        SRLI.MergeInClobberRange(*li_, AI->start, End,
+                                 li_->getVNInfoAllocator());
       }
     }
   }
@@ -708,7 +710,8 @@ bool SimpleRegisterCoalescing::ReMaterializeTrivialDef(LiveInterval &SrcInt,
       checkForDeadDef = true;
     }
 
-  MachineBasicBlock::iterator MII = next(MachineBasicBlock::iterator(CopyMI));
+  MachineBasicBlock::iterator MII =
+    llvm::next(MachineBasicBlock::iterator(CopyMI));
   tii_->reMaterialize(*MBB, MII, DstReg, DstSubIdx, DefMI, tri_);
   MachineInstr *NewMI = prior(MII);
 
@@ -1314,7 +1317,13 @@ bool SimpleRegisterCoalescing::JoinCopy(CopyRec &TheCopy, bool &Again) {
                       "coalesced to another register.\n");
       return false;  // Not coalescable.
     }
-  } else if (!tii_->isMoveInstr(*CopyMI, SrcReg, DstReg, SrcSubIdx, DstSubIdx)){
+  } else if (tii_->isMoveInstr(*CopyMI, SrcReg, DstReg, SrcSubIdx, DstSubIdx)) {
+    if (SrcSubIdx && DstSubIdx && SrcSubIdx != DstSubIdx) {
+      // e.g. %reg16404:1<def> = MOV8rr %reg16412:2<kill>
+      Again = true;
+      return false;  // Not coalescable.
+    }
+  } else {
     llvm_unreachable("Unrecognized copy instruction!");
   }
 
@@ -1611,9 +1620,9 @@ bool SimpleRegisterCoalescing::JoinCopy(CopyRec &TheCopy, bool &Again) {
           }
         }
       } else {
-        // If the virtual register live interval is long but it has low use desity,
-        // do not join them, instead mark the physical register as its allocation
-        // preference.
+        // If the virtual register live interval is long but it has low use
+        // density, do not join them, instead mark the physical register as its
+        // allocation preference.
         LiveInterval &JoinVInt = SrcIsPhys ? DstInt : SrcInt;
         unsigned JoinVReg = SrcIsPhys ? DstReg : SrcReg;
         unsigned JoinPReg = SrcIsPhys ? SrcReg : DstReg;
@@ -1938,6 +1947,10 @@ bool SimpleRegisterCoalescing::SimpleJoin(LiveInterval &LHS, LiveInterval &RHS){
     if (Overlaps) {
       // If we haven't already recorded that this value # is safe, check it.
       if (!InVector(LHSIt->valno, EliminatedLHSVals)) {
+        // If it's re-defined by an early clobber somewhere in the live range,
+        // then conservatively abort coalescing.
+        if (LHSIt->valno->hasRedefByEC())
+          return false;
         // Copy from the RHS?
         if (!RangeIsDefinedByCopyFromReg(LHS, LHSIt, RHS.reg))
           return false;    // Nope, bail out.
@@ -1977,6 +1990,10 @@ bool SimpleRegisterCoalescing::SimpleJoin(LiveInterval &LHS, LiveInterval &RHS){
           // if coalescing succeeds.  Just skip the liverange.
           if (++LHSIt == LHSEnd) break;
         } else {
+          // If it's re-defined by an early clobber somewhere in the live range,
+          // then conservatively abort coalescing.
+          if (LHSIt->valno->hasRedefByEC())
+            return false;
           // Otherwise, if this is a copy from the RHS, mark it as being merged
           // in.
           if (RangeIsDefinedByCopyFromReg(LHS, LHSIt, RHS.reg)) {
@@ -2316,6 +2333,10 @@ SimpleRegisterCoalescing::JoinIntervals(LiveInterval &LHS, LiveInterval &RHS,
       if (LHSValNoAssignments[I->valno->id] !=
           RHSValNoAssignments[J->valno->id])
         return false;
+      // If it's re-defined by an early clobber somewhere in the live range,
+      // then conservatively abort coalescing.
+      if (NewVNInfo[LHSValNoAssignments[I->valno->id]]->hasRedefByEC())
+        return false;
     }
 
     if (I->end < J->end) {
@@ -2371,9 +2392,19 @@ namespace {
   struct DepthMBBCompare {
     typedef std::pair<unsigned, MachineBasicBlock*> DepthMBBPair;
     bool operator()(const DepthMBBPair &LHS, const DepthMBBPair &RHS) const {
-      if (LHS.first > RHS.first) return true;   // Deeper loops first
-      return LHS.first == RHS.first &&
-        LHS.second->getNumber() < RHS.second->getNumber();
+      // Deeper loops first
+      if (LHS.first != RHS.first)
+        return LHS.first > RHS.first;
+
+      // Prefer blocks that are more connected in the CFG. This takes care of
+      // the most difficult copies first while intervals are short.
+      unsigned cl = LHS.second->pred_size() + LHS.second->succ_size();
+      unsigned cr = RHS.second->pred_size() + RHS.second->succ_size();
+      if (cl != cr)
+        return cl > cr;
+
+      // As a last resort, sort by block number.
+      return LHS.second->getNumber() < RHS.second->getNumber();
     }
   };
 }
@@ -2391,9 +2422,15 @@ void SimpleRegisterCoalescing::CopyCoalesceInMBB(MachineBasicBlock *MBB,
 
     // If this isn't a copy nor a extract_subreg, we can't join intervals.
     unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
+    bool isInsUndef = false;
     if (Inst->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
       DstReg = Inst->getOperand(0).getReg();
       SrcReg = Inst->getOperand(1).getReg();
+    } else if (Inst->getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
+      DstReg = Inst->getOperand(0).getReg();
+      SrcReg = Inst->getOperand(2).getReg();
+      if (Inst->getOperand(1).isUndef())
+        isInsUndef = true;
     } else if (Inst->getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
                Inst->getOpcode() == TargetInstrInfo::SUBREG_TO_REG) {
       DstReg = Inst->getOperand(0).getReg();
@@ -2403,7 +2440,8 @@ void SimpleRegisterCoalescing::CopyCoalesceInMBB(MachineBasicBlock *MBB,
 
     bool SrcIsPhys = TargetRegisterInfo::isPhysicalRegister(SrcReg);
     bool DstIsPhys = TargetRegisterInfo::isPhysicalRegister(DstReg);
-    if (li_->hasInterval(SrcReg) && li_->getInterval(SrcReg).empty())
+    if (isInsUndef ||
+        (li_->hasInterval(SrcReg) && li_->getInterval(SrcReg).empty()))
       ImpDefCopies.push_back(CopyRec(Inst, 0));
     else if (SrcIsPhys || DstIsPhys)
       PhysCopies.push_back(CopyRec(Inst, 0));
@@ -2411,9 +2449,9 @@ void SimpleRegisterCoalescing::CopyCoalesceInMBB(MachineBasicBlock *MBB,
       VirtCopies.push_back(CopyRec(Inst, 0));
   }
 
-  // Try coalescing implicit copies first, followed by copies to / from
-  // physical registers, then finally copies from virtual registers to
-  // virtual registers.
+  // Try coalescing implicit copies and insert_subreg <undef> first,
+  // followed by copies to / from physical registers, then finally copies
+  // from virtual registers to virtual registers.
   for (unsigned i = 0, e = ImpDefCopies.size(); i != e; ++i) {
     CopyRec &TheCopy = ImpDefCopies[i];
     bool Again = false;
@@ -2717,7 +2755,8 @@ bool SimpleRegisterCoalescing::runOnMachineFunction(MachineFunction &fn) {
     joinIntervals();
     DEBUG({
         errs() << "********** INTERVALS POST JOINING **********\n";
-        for (LiveIntervals::iterator I = li_->begin(), E = li_->end(); I != E; ++I){
+        for (LiveIntervals::iterator I = li_->begin(), E = li_->end();
+             I != E; ++I){
           I->second->print(errs(), tri_);
           errs() << "\n";
         }
@@ -2758,7 +2797,7 @@ bool SimpleRegisterCoalescing::runOnMachineFunction(MachineFunction &fn) {
           DoDelete = true;
         }
         if (!DoDelete)
-          mii = next(mii);
+          mii = llvm::next(mii);
         else {
           li_->RemoveMachineInstrFromMaps(MI);
           mii = mbbi->erase(mii);
diff --git a/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp b/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp
index 237d0b5..bc246c1 100644
--- a/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/Spiller.cpp
@@ -20,22 +20,25 @@
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
+#include <set>
 
 using namespace llvm;
 
 namespace {
-  enum SpillerName { trivial, standard };
+  enum SpillerName { trivial, standard, splitting };
 }
 
 static cl::opt<SpillerName>
 spillerOpt("spiller",
            cl::desc("Spiller to use: (default: standard)"),
            cl::Prefix,
-           cl::values(clEnumVal(trivial, "trivial spiller"),
-                      clEnumVal(standard, "default spiller"),
+           cl::values(clEnumVal(trivial,   "trivial spiller"),
+                      clEnumVal(standard,  "default spiller"),
+                      clEnumVal(splitting, "splitting spiller"),
                       clEnumValEnd),
            cl::init(standard));
 
+// Spiller virtual destructor implementation.
 Spiller::~Spiller() {}
 
 namespace {
@@ -140,9 +143,9 @@ protected:
 
       // Insert store if necessary.
       if (hasDef) {
-        tii->storeRegToStackSlot(*mi->getParent(), next(miItr), newVReg, true,
+        tii->storeRegToStackSlot(*mi->getParent(), llvm::next(miItr), newVReg, true,
                                  ss, trc);
-        MachineInstr *storeInstr(next(miItr));
+        MachineInstr *storeInstr(llvm::next(miItr));
         SlotIndex storeIndex =
           lis->InsertMachineInstrInMaps(storeInstr).getDefIndex();
         SlotIndex beginIndex = storeIndex.getPrevIndex();
@@ -170,7 +173,8 @@ public:
     : SpillerBase(mf, lis, vrm) {}
 
   std::vector<LiveInterval*> spill(LiveInterval *li,
-                                   SmallVectorImpl<LiveInterval*> &spillIs) {
+                                   SmallVectorImpl<LiveInterval*> &spillIs,
+                                   SlotIndex*) {
     // Ignore spillIs - we don't use it.
     return trivialSpillEverywhere(li);
   }
@@ -179,23 +183,336 @@ public:
 
 /// Falls back on LiveIntervals::addIntervalsForSpills.
 class StandardSpiller : public Spiller {
-private:
+protected:
   LiveIntervals *lis;
   const MachineLoopInfo *loopInfo;
   VirtRegMap *vrm;
 public:
-  StandardSpiller(MachineFunction *mf, LiveIntervals *lis,
-                  const MachineLoopInfo *loopInfo, VirtRegMap *vrm)
+  StandardSpiller(LiveIntervals *lis, const MachineLoopInfo *loopInfo,
+                  VirtRegMap *vrm)
     : lis(lis), loopInfo(loopInfo), vrm(vrm) {}
 
   /// Falls back on LiveIntervals::addIntervalsForSpills.
   std::vector<LiveInterval*> spill(LiveInterval *li,
-                                   SmallVectorImpl<LiveInterval*> &spillIs) {
+                                   SmallVectorImpl<LiveInterval*> &spillIs,
+                                   SlotIndex*) {
     return lis->addIntervalsForSpills(*li, spillIs, loopInfo, *vrm);
   }
 
 };
 
+/// When a call to spill is placed this spiller will first try to break the
+/// interval up into its component values (one new interval per value).
+/// If this fails, or if a call is placed to spill a previously split interval
+/// then the spiller falls back on the standard spilling mechanism. 
+class SplittingSpiller : public StandardSpiller {
+public:
+  SplittingSpiller(MachineFunction *mf, LiveIntervals *lis,
+                   const MachineLoopInfo *loopInfo, VirtRegMap *vrm)
+    : StandardSpiller(lis, loopInfo, vrm) {
+
+    mri = &mf->getRegInfo();
+    tii = mf->getTarget().getInstrInfo();
+    tri = mf->getTarget().getRegisterInfo();
+  }
+
+  std::vector<LiveInterval*> spill(LiveInterval *li,
+                                   SmallVectorImpl<LiveInterval*> &spillIs,
+                                   SlotIndex *earliestStart) {
+    
+    if (worthTryingToSplit(li)) {
+      return tryVNISplit(li, earliestStart);
+    }
+    // else
+    return StandardSpiller::spill(li, spillIs, earliestStart);
+  }
+
+private:
+
+  MachineRegisterInfo *mri;
+  const TargetInstrInfo *tii;
+  const TargetRegisterInfo *tri;  
+  DenseSet<LiveInterval*> alreadySplit;
+
+  bool worthTryingToSplit(LiveInterval *li) const {
+    return (!alreadySplit.count(li) && li->getNumValNums() > 1);
+  }
+
+  /// Try to break a LiveInterval into its component values.
+  std::vector<LiveInterval*> tryVNISplit(LiveInterval *li,
+                                         SlotIndex *earliestStart) {
+
+    DEBUG(errs() << "Trying VNI split of %reg" << *li << "\n");
+
+    std::vector<LiveInterval*> added;
+    SmallVector<VNInfo*, 4> vnis;
+
+    std::copy(li->vni_begin(), li->vni_end(), std::back_inserter(vnis));
+   
+    for (SmallVectorImpl<VNInfo*>::iterator vniItr = vnis.begin(),
+         vniEnd = vnis.end(); vniItr != vniEnd; ++vniItr) {
+      VNInfo *vni = *vniItr;
+      
+      // Skip unused VNIs, or VNIs with no kills.
+      if (vni->isUnused() || vni->kills.empty())
+        continue;
+
+      DEBUG(errs() << "  Extracted Val #" << vni->id << " as ");
+      LiveInterval *splitInterval = extractVNI(li, vni);
+      
+      if (splitInterval != 0) {
+        DEBUG(errs() << *splitInterval << "\n");
+        added.push_back(splitInterval);
+        alreadySplit.insert(splitInterval);
+        if (earliestStart != 0) {
+          if (splitInterval->beginIndex() < *earliestStart)
+            *earliestStart = splitInterval->beginIndex();
+        }
+      } else {
+        DEBUG(errs() << "0\n");
+      }
+    } 
+
+    DEBUG(errs() << "Original LI: " << *li << "\n");
+
+    // If the original interval still contains some live ranges,
+    // add it to added and alreadySplit.
+    if (!li->empty()) {
+      added.push_back(li);
+      alreadySplit.insert(li);
+      if (earliestStart != 0) {
+        if (li->beginIndex() < *earliestStart)
+          *earliestStart = li->beginIndex();
+      }
+    }
+
+    return added;
+  }
+
+  /// Extract the given value number from the interval.
+  LiveInterval* extractVNI(LiveInterval *li, VNInfo *vni) const {
+    assert(vni->isDefAccurate() || vni->isPHIDef());
+    assert(!vni->kills.empty());
+
+    // Create a new vreg and live interval, copy VNI kills & ranges over.
+    const TargetRegisterClass *trc = mri->getRegClass(li->reg);
+    unsigned newVReg = mri->createVirtualRegister(trc);
+    vrm->grow();
+    LiveInterval *newLI = &lis->getOrCreateInterval(newVReg);
+    VNInfo *newVNI = newLI->createValueCopy(vni, lis->getVNInfoAllocator());
+
+    // Start by copying all live ranges in the VN to the new interval.
+    for (LiveInterval::iterator rItr = li->begin(), rEnd = li->end();
+         rItr != rEnd; ++rItr) {
+      if (rItr->valno == vni) {
+        newLI->addRange(LiveRange(rItr->start, rItr->end, newVNI));
+      }
+    }
+
+    // Erase the old VNI & ranges.
+    li->removeValNo(vni);
+
+    // Collect all current uses of the register belonging to the given VNI.
+    // We'll use this to rename the register after we've dealt with the def.
+    std::set<MachineInstr*> uses;
+    for (MachineRegisterInfo::use_iterator
+         useItr = mri->use_begin(li->reg), useEnd = mri->use_end();
+         useItr != useEnd; ++useItr) {
+      uses.insert(&*useItr);
+    }
+
+    // Process the def instruction for this VNI.
+    if (newVNI->isPHIDef()) {
+      // Insert a copy at the start of the MBB. The range preceding the
+      // copy will be attached to the original LiveInterval.
+      MachineBasicBlock *defMBB = lis->getMBBFromIndex(newVNI->def);
+      tii->copyRegToReg(*defMBB, defMBB->begin(), newVReg, li->reg, trc, trc);
+      MachineInstr *copyMI = defMBB->begin();
+      copyMI->addRegisterKilled(li->reg, tri);
+      SlotIndex copyIdx = lis->InsertMachineInstrInMaps(copyMI);
+      VNInfo *phiDefVNI = li->getNextValue(lis->getMBBStartIdx(defMBB),
+                                           0, false, lis->getVNInfoAllocator());
+      phiDefVNI->setIsPHIDef(true);
+      phiDefVNI->addKill(copyIdx.getDefIndex());
+      li->addRange(LiveRange(phiDefVNI->def, copyIdx.getDefIndex(), phiDefVNI));
+      LiveRange *oldPHIDefRange =
+        newLI->getLiveRangeContaining(lis->getMBBStartIdx(defMBB));
+
+      // If the old phi def starts in the middle of the range chop it up.
+      if (oldPHIDefRange->start < lis->getMBBStartIdx(defMBB)) {
+        LiveRange oldPHIDefRange2(copyIdx.getDefIndex(), oldPHIDefRange->end,
+                                  oldPHIDefRange->valno);
+        oldPHIDefRange->end = lis->getMBBStartIdx(defMBB);
+        newLI->addRange(oldPHIDefRange2);
+      } else if (oldPHIDefRange->start == lis->getMBBStartIdx(defMBB)) {
+        // Otherwise if it's at the start of the range just trim it.
+        oldPHIDefRange->start = copyIdx.getDefIndex();
+      } else {
+        assert(false && "PHI def range doesn't cover PHI def?");
+      }
+
+      newVNI->def = copyIdx.getDefIndex();
+      newVNI->setCopy(copyMI);
+      newVNI->setIsPHIDef(false); // not a PHI def anymore.
+      newVNI->setIsDefAccurate(true);
+    } else {
+      // Non-PHI def. Rename the def. If it's two-addr that means renaming
+      // the use and inserting a new copy too.
+      MachineInstr *defInst = lis->getInstructionFromIndex(newVNI->def);
+      // We'll rename this now, so we can remove it from uses.
+      uses.erase(defInst);
+      unsigned defOpIdx = defInst->findRegisterDefOperandIdx(li->reg);
+      bool isTwoAddr = defInst->isRegTiedToUseOperand(defOpIdx),
+        twoAddrUseIsUndef = false;
+
+      for (unsigned i = 0; i < defInst->getNumOperands(); ++i) {
+        MachineOperand &mo = defInst->getOperand(i);
+        if (mo.isReg() && (mo.isDef() || isTwoAddr) && (mo.getReg()==li->reg)) {
+          mo.setReg(newVReg);
+          if (isTwoAddr && mo.isUse() && mo.isUndef())
+            twoAddrUseIsUndef = true;
+        }
+      }
+    
+      SlotIndex defIdx = lis->getInstructionIndex(defInst);
+      newVNI->def = defIdx.getDefIndex();
+
+      if (isTwoAddr && !twoAddrUseIsUndef) {
+        MachineBasicBlock *defMBB = defInst->getParent();
+        tii->copyRegToReg(*defMBB, defInst, newVReg, li->reg, trc, trc);
+        MachineInstr *copyMI = prior(MachineBasicBlock::iterator(defInst));
+        SlotIndex copyIdx = lis->InsertMachineInstrInMaps(copyMI);
+        copyMI->addRegisterKilled(li->reg, tri);
+        LiveRange *origUseRange =
+          li->getLiveRangeContaining(newVNI->def.getUseIndex());
+        VNInfo *origUseVNI = origUseRange->valno;
+        origUseRange->end = copyIdx.getDefIndex();
+        bool updatedKills = false;
+        for (unsigned k = 0; k < origUseVNI->kills.size(); ++k) {
+          if (origUseVNI->kills[k] == defIdx.getDefIndex()) {
+            origUseVNI->kills[k] = copyIdx.getDefIndex();
+            updatedKills = true;
+            break;
+          }
+        }
+        assert(updatedKills && "Failed to update VNI kill list.");
+        VNInfo *copyVNI = newLI->getNextValue(copyIdx.getDefIndex(), copyMI,
+                                              true, lis->getVNInfoAllocator());
+        copyVNI->addKill(defIdx.getDefIndex());
+        LiveRange copyRange(copyIdx.getDefIndex(),defIdx.getDefIndex(),copyVNI);
+        newLI->addRange(copyRange);
+      }    
+    }
+    
+    for (std::set<MachineInstr*>::iterator
+         usesItr = uses.begin(), usesEnd = uses.end();
+         usesItr != usesEnd; ++usesItr) {
+      MachineInstr *useInst = *usesItr;
+      SlotIndex useIdx = lis->getInstructionIndex(useInst);
+      LiveRange *useRange =
+        newLI->getLiveRangeContaining(useIdx.getUseIndex());
+
+      // If this use doesn't belong to the new interval skip it.
+      if (useRange == 0)
+        continue;
+
+      // This use doesn't belong to the VNI, skip it.
+      if (useRange->valno != newVNI)
+        continue;
+
+      // Check if this instr is two address.
+      unsigned useOpIdx = useInst->findRegisterUseOperandIdx(li->reg);
+      bool isTwoAddress = useInst->isRegTiedToDefOperand(useOpIdx);
+      
+      // Rename uses (and defs for two-address instrs).
+      for (unsigned i = 0; i < useInst->getNumOperands(); ++i) {
+        MachineOperand &mo = useInst->getOperand(i);
+        if (mo.isReg() && (mo.isUse() || isTwoAddress) &&
+            (mo.getReg() == li->reg)) {
+          mo.setReg(newVReg);
+        }
+      }
+
+      // If this is a two address instruction we've got some extra work to do.
+      if (isTwoAddress) {
+        // We modified the def operand, so we need to copy back to the original
+        // reg.
+        MachineBasicBlock *useMBB = useInst->getParent();
+        MachineBasicBlock::iterator useItr(useInst);
+        tii->copyRegToReg(*useMBB, next(useItr), li->reg, newVReg, trc, trc);
+        MachineInstr *copyMI = next(useItr);
+        copyMI->addRegisterKilled(newVReg, tri);
+        SlotIndex copyIdx = lis->InsertMachineInstrInMaps(copyMI);
+
+        // Change the old two-address defined range & vni to start at
+        // (and be defined by) the copy.
+        LiveRange *origDefRange =
+          li->getLiveRangeContaining(useIdx.getDefIndex());
+        origDefRange->start = copyIdx.getDefIndex();
+        origDefRange->valno->def = copyIdx.getDefIndex();
+        origDefRange->valno->setCopy(copyMI);
+
+        // Insert a new range & vni for the two-address-to-copy value. This
+        // will be attached to the new live interval.
+        VNInfo *copyVNI =
+          newLI->getNextValue(useIdx.getDefIndex(), 0, true,
+                              lis->getVNInfoAllocator());
+        copyVNI->addKill(copyIdx.getDefIndex());
+        LiveRange copyRange(useIdx.getDefIndex(),copyIdx.getDefIndex(),copyVNI);
+        newLI->addRange(copyRange);
+      }
+    }
+    
+    // Iterate over any PHI kills - we'll need to insert new copies for them.
+    for (VNInfo::KillSet::iterator
+         killItr = newVNI->kills.begin(), killEnd = newVNI->kills.end();
+         killItr != killEnd; ++killItr) {
+      SlotIndex killIdx(*killItr);
+      if (killItr->isPHI()) {
+        MachineBasicBlock *killMBB = lis->getMBBFromIndex(killIdx);
+        LiveRange *oldKillRange =
+          newLI->getLiveRangeContaining(killIdx);
+
+        assert(oldKillRange != 0 && "No kill range?");
+
+        tii->copyRegToReg(*killMBB, killMBB->getFirstTerminator(),
+                          li->reg, newVReg, trc, trc);
+        MachineInstr *copyMI = prior(killMBB->getFirstTerminator());
+        copyMI->addRegisterKilled(newVReg, tri);
+        SlotIndex copyIdx = lis->InsertMachineInstrInMaps(copyMI);
+
+        // Save the current end. We may need it to add a new range if the
+        // current range runs off the end of the MBB.
+        SlotIndex newKillRangeEnd = oldKillRange->end;
+        oldKillRange->end = copyIdx.getDefIndex();
+
+        if (newKillRangeEnd != lis->getMBBEndIdx(killMBB).getNextSlot()) {
+          assert(newKillRangeEnd > lis->getMBBEndIdx(killMBB).getNextSlot() &&
+                 "PHI kill range doesn't reach kill-block end. Not sane.");
+          newLI->addRange(LiveRange(lis->getMBBEndIdx(killMBB).getNextSlot(),
+                                    newKillRangeEnd, newVNI));
+        }
+
+        *killItr = oldKillRange->end;
+        VNInfo *newKillVNI = li->getNextValue(copyIdx.getDefIndex(),
+                                              copyMI, true,
+                                              lis->getVNInfoAllocator());
+        newKillVNI->addKill(lis->getMBBTerminatorGap(killMBB));
+        newKillVNI->setHasPHIKill(true);
+        li->addRange(LiveRange(copyIdx.getDefIndex(),
+                               lis->getMBBEndIdx(killMBB).getNextSlot(),
+                               newKillVNI));
+      }
+
+    }
+
+    newVNI->setHasPHIKill(false);
+
+    return newLI;
+  }
+
+};
+
 }
 
 llvm::Spiller* llvm::createSpiller(MachineFunction *mf, LiveIntervals *lis,
@@ -203,7 +520,8 @@ llvm::Spiller* llvm::createSpiller(MachineFunction *mf, LiveIntervals *lis,
                                    VirtRegMap *vrm) {
   switch (spillerOpt) {
     case trivial: return new TrivialSpiller(mf, lis, vrm); break;
-    case standard: return new StandardSpiller(mf, lis, loopInfo, vrm); break;
+    case standard: return new StandardSpiller(lis, loopInfo, vrm); break;
+    case splitting: return new SplittingSpiller(mf, lis, loopInfo, vrm); break;
     default: llvm_unreachable("Unreachable!"); break;
   }
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/Spiller.h b/libclamav/c++/llvm/lib/CodeGen/Spiller.h
index c6bd985..dda52e8 100644
--- a/libclamav/c++/llvm/lib/CodeGen/Spiller.h
+++ b/libclamav/c++/llvm/lib/CodeGen/Spiller.h
@@ -21,6 +21,7 @@ namespace llvm {
   class MachineFunction;
   class MachineInstr;
   class MachineLoopInfo;
+  class SlotIndex;
   class VirtRegMap;
   class VNInfo;
 
@@ -35,7 +36,8 @@ namespace llvm {
     /// Spill the given live range. The method used will depend on the Spiller
     /// implementation selected.
     virtual std::vector<LiveInterval*> spill(LiveInterval *li,
-                                   SmallVectorImpl<LiveInterval*> &spillIs) = 0;
+                                             SmallVectorImpl<LiveInterval*> &spillIs,
+                                             SlotIndex *earliestIndex = 0) = 0;
 
   };
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp b/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp
index c299192..fd25a37 100644
--- a/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp
@@ -668,7 +668,7 @@ bool StackSlotColoring::RemoveDeadStores(MachineBasicBlock* MBB) {
     if (DCELimit != -1 && (int)NumDead >= DCELimit)
       break;
     
-    MachineBasicBlock::iterator NextMI = next(I);
+    MachineBasicBlock::iterator NextMI = llvm::next(I);
     if (NextMI == MBB->end()) continue;
     
     int FirstSS, SecondSS;
diff --git a/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp b/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
index 12610b0..b53ebec 100644
--- a/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
@@ -17,15 +17,19 @@
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/CodeGen/MachineModuleInfo.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/MachineSSAUpdater.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/SetVector.h"
 #include "llvm/ADT/Statistic.h"
 using namespace llvm;
 
+STATISTIC(NumTails     , "Number of tails duplicated");
 STATISTIC(NumTailDups  , "Number of tail duplicated blocks");
 STATISTIC(NumInstrDups , "Additional instructions due to tail duplication");
 STATISTIC(NumDeadBlocks, "Number of dead blocks removed");
@@ -36,34 +40,70 @@ TailDuplicateSize("tail-dup-size",
                   cl::desc("Maximum instructions to consider tail duplicating"),
                   cl::init(2), cl::Hidden);
 
+static cl::opt<bool>
+TailDupVerify("tail-dup-verify",
+              cl::desc("Verify sanity of PHI instructions during taildup"),
+              cl::init(false), cl::Hidden);
+
+static cl::opt<unsigned>
+TailDupLimit("tail-dup-limit", cl::init(~0U), cl::Hidden);
+
+typedef std::vector<std::pair<MachineBasicBlock*,unsigned> > AvailableValsTy;
+
 namespace {
   /// TailDuplicatePass - Perform tail duplication.
   class TailDuplicatePass : public MachineFunctionPass {
+    bool PreRegAlloc;
     const TargetInstrInfo *TII;
     MachineModuleInfo *MMI;
+    MachineRegisterInfo *MRI;
+
+    // SSAUpdateVRs - A list of virtual registers for which to update SSA form.
+    SmallVector<unsigned, 16> SSAUpdateVRs;
+
+    // SSAUpdateVals - For each virtual register in SSAUpdateVals keep a list of
+    // source virtual registers.
+    DenseMap<unsigned, AvailableValsTy> SSAUpdateVals;
 
   public:
     static char ID;
-    explicit TailDuplicatePass() : MachineFunctionPass(&ID) {}
+    explicit TailDuplicatePass(bool PreRA) :
+      MachineFunctionPass(&ID), PreRegAlloc(PreRA) {}
 
     virtual bool runOnMachineFunction(MachineFunction &MF);
     virtual const char *getPassName() const { return "Tail Duplication"; }
 
   private:
+    void AddSSAUpdateEntry(unsigned OrigReg, unsigned NewReg,
+                           MachineBasicBlock *BB);
+    void ProcessPHI(MachineInstr *MI, MachineBasicBlock *TailBB,
+                    MachineBasicBlock *PredBB,
+                    DenseMap<unsigned, unsigned> &LocalVRMap,
+                    SmallVector<std::pair<unsigned,unsigned>, 4> &Copies);
+    void DuplicateInstruction(MachineInstr *MI,
+                              MachineBasicBlock *TailBB,
+                              MachineBasicBlock *PredBB,
+                              MachineFunction &MF,
+                              DenseMap<unsigned, unsigned> &LocalVRMap);
+    void UpdateSuccessorsPHIs(MachineBasicBlock *FromBB, bool isDead,
+                              SmallVector<MachineBasicBlock*, 8> &TDBBs,
+                              SmallSetVector<MachineBasicBlock*, 8> &Succs);
     bool TailDuplicateBlocks(MachineFunction &MF);
-    bool TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF);
+    bool TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
+                       SmallVector<MachineBasicBlock*, 8> &TDBBs);
     void RemoveDeadBlock(MachineBasicBlock *MBB);
   };
 
   char TailDuplicatePass::ID = 0;
 }
 
-FunctionPass *llvm::createTailDuplicatePass() {
-  return new TailDuplicatePass();
+FunctionPass *llvm::createTailDuplicatePass(bool PreRegAlloc) {
+  return new TailDuplicatePass(PreRegAlloc);
 }
 
 bool TailDuplicatePass::runOnMachineFunction(MachineFunction &MF) {
   TII = MF.getTarget().getInstrInfo();
+  MRI = &MF.getRegInfo();
   MMI = getAnalysisIfAvailable<MachineModuleInfo>();
 
   bool MadeChange = false;
@@ -77,36 +117,308 @@ bool TailDuplicatePass::runOnMachineFunction(MachineFunction &MF) {
   return MadeChange;
 }
 
+static void VerifyPHIs(MachineFunction &MF, bool CheckExtra) {
+  for (MachineFunction::iterator I = ++MF.begin(), E = MF.end(); I != E; ++I) {
+    MachineBasicBlock *MBB = I;
+    SmallSetVector<MachineBasicBlock*, 8> Preds(MBB->pred_begin(),
+                                                MBB->pred_end());
+    MachineBasicBlock::iterator MI = MBB->begin();
+    while (MI != MBB->end()) {
+      if (MI->getOpcode() != TargetInstrInfo::PHI)
+        break;
+      for (SmallSetVector<MachineBasicBlock *, 8>::iterator PI = Preds.begin(),
+             PE = Preds.end(); PI != PE; ++PI) {
+        MachineBasicBlock *PredBB = *PI;
+        bool Found = false;
+        for (unsigned i = 1, e = MI->getNumOperands(); i != e; i += 2) {
+          MachineBasicBlock *PHIBB = MI->getOperand(i+1).getMBB();
+          if (PHIBB == PredBB) {
+            Found = true;
+            break;
+          }
+        }
+        if (!Found) {
+          errs() << "Malformed PHI in BB#" << MBB->getNumber() << ": " << *MI;
+          errs() << "  missing input from predecessor BB#"
+                 << PredBB->getNumber() << '\n';
+          llvm_unreachable(0);
+        }
+      }
+
+      for (unsigned i = 1, e = MI->getNumOperands(); i != e; i += 2) {
+        MachineBasicBlock *PHIBB = MI->getOperand(i+1).getMBB();
+        if (CheckExtra && !Preds.count(PHIBB)) {
+          // This is not a hard error.
+          errs() << "Warning: malformed PHI in BB#" << MBB->getNumber()
+                 << ": " << *MI;
+          errs() << "  extra input from predecessor BB#"
+                 << PHIBB->getNumber() << '\n';
+        }
+        if (PHIBB->getNumber() < 0) {
+          errs() << "Malformed PHI in BB#" << MBB->getNumber() << ": " << *MI;
+          errs() << "  non-existing BB#" << PHIBB->getNumber() << '\n';
+          llvm_unreachable(0);
+        }
+      }
+      ++MI;
+    }
+  }
+}
+
 /// TailDuplicateBlocks - Look for small blocks that are unconditionally
 /// branched to and do not fall through. Tail-duplicate their instructions
 /// into their predecessors to eliminate (dynamic) branches.
 bool TailDuplicatePass::TailDuplicateBlocks(MachineFunction &MF) {
   bool MadeChange = false;
 
+  if (PreRegAlloc && TailDupVerify) {
+    DEBUG(errs() << "\n*** Before tail-duplicating\n");
+    VerifyPHIs(MF, true);
+  }
+
+  SmallVector<MachineInstr*, 8> NewPHIs;
+  MachineSSAUpdater SSAUpdate(MF, &NewPHIs);
+
   for (MachineFunction::iterator I = ++MF.begin(), E = MF.end(); I != E; ) {
     MachineBasicBlock *MBB = I++;
 
+    if (NumTails == TailDupLimit)
+      break;
+
     // Only duplicate blocks that end with unconditional branches.
     if (MBB->canFallThrough())
       continue;
 
-    MadeChange |= TailDuplicate(MBB, MF);
-
-    // If it is dead, remove it.
-    if (MBB->pred_empty()) {
-      NumInstrDups -= MBB->size();
-      RemoveDeadBlock(MBB);
+    // Save the successors list.
+    SmallSetVector<MachineBasicBlock*, 8> Succs(MBB->succ_begin(),
+                                                MBB->succ_end());
+
+    SmallVector<MachineBasicBlock*, 8> TDBBs;
+    if (TailDuplicate(MBB, MF, TDBBs)) {
+      ++NumTails;
+
+      // TailBB's immediate successors are now successors of those predecessors
+      // which duplicated TailBB. Add the predecessors as sources to the PHI
+      // instructions.
+      bool isDead = MBB->pred_empty();
+      if (PreRegAlloc)
+        UpdateSuccessorsPHIs(MBB, isDead, TDBBs, Succs);
+
+      // If it is dead, remove it.
+      if (isDead) {
+        NumInstrDups -= MBB->size();
+        RemoveDeadBlock(MBB);
+        ++NumDeadBlocks;
+      }
+
+      // Update SSA form.
+      if (!SSAUpdateVRs.empty()) {
+        for (unsigned i = 0, e = SSAUpdateVRs.size(); i != e; ++i) {
+          unsigned VReg = SSAUpdateVRs[i];
+          SSAUpdate.Initialize(VReg);
+
+          // If the original definition is still around, add it as an available
+          // value.
+          MachineInstr *DefMI = MRI->getVRegDef(VReg);
+          MachineBasicBlock *DefBB = 0;
+          if (DefMI) {
+            DefBB = DefMI->getParent();
+            SSAUpdate.AddAvailableValue(DefBB, VReg);
+          }
+
+          // Add the new vregs as available values.
+          DenseMap<unsigned, AvailableValsTy>::iterator LI =
+            SSAUpdateVals.find(VReg);
+          for (unsigned j = 0, ee = LI->second.size(); j != ee; ++j) {
+            MachineBasicBlock *SrcBB = LI->second[j].first;
+            unsigned SrcReg = LI->second[j].second;
+            SSAUpdate.AddAvailableValue(SrcBB, SrcReg);
+          }
+
+          // Rewrite uses that are outside of the original def's block.
+          MachineRegisterInfo::use_iterator UI = MRI->use_begin(VReg);
+          while (UI != MRI->use_end()) {
+            MachineOperand &UseMO = UI.getOperand();
+            MachineInstr *UseMI = &*UI;
+            ++UI;
+            if (UseMI->getParent() == DefBB)
+              continue;
+            SSAUpdate.RewriteUse(UseMO);
+          }
+        }
+
+        SSAUpdateVRs.clear();
+        SSAUpdateVals.clear();
+      }
+
+      if (PreRegAlloc && TailDupVerify)
+        VerifyPHIs(MF, false);
       MadeChange = true;
-      ++NumDeadBlocks;
     }
   }
+
   return MadeChange;
 }
 
+static bool isDefLiveOut(unsigned Reg, MachineBasicBlock *BB,
+                         const MachineRegisterInfo *MRI) {
+  for (MachineRegisterInfo::use_iterator UI = MRI->use_begin(Reg),
+         UE = MRI->use_end(); UI != UE; ++UI) {
+    MachineInstr *UseMI = &*UI;
+    if (UseMI->getParent() != BB)
+      return true;
+  }
+  return false;
+}
+
+static unsigned getPHISrcRegOpIdx(MachineInstr *MI, MachineBasicBlock *SrcBB) {
+  for (unsigned i = 1, e = MI->getNumOperands(); i != e; i += 2)
+    if (MI->getOperand(i+1).getMBB() == SrcBB)
+      return i;
+  return 0;
+}
+
+/// AddSSAUpdateEntry - Add a definition and source virtual registers pair for
+/// SSA update.
+void TailDuplicatePass::AddSSAUpdateEntry(unsigned OrigReg, unsigned NewReg,
+                                          MachineBasicBlock *BB) {
+  DenseMap<unsigned, AvailableValsTy>::iterator LI= SSAUpdateVals.find(OrigReg);
+  if (LI != SSAUpdateVals.end())
+    LI->second.push_back(std::make_pair(BB, NewReg));
+  else {
+    AvailableValsTy Vals;
+    Vals.push_back(std::make_pair(BB, NewReg));
+    SSAUpdateVals.insert(std::make_pair(OrigReg, Vals));
+    SSAUpdateVRs.push_back(OrigReg);
+  }
+}
+
+/// ProcessPHI - Process PHI node in TailBB by turning it into a copy in PredBB.
+/// Remember the source register that's contributed by PredBB and update SSA
+/// update map.
+void TailDuplicatePass::ProcessPHI(MachineInstr *MI,
+                                   MachineBasicBlock *TailBB,
+                                   MachineBasicBlock *PredBB,
+                                   DenseMap<unsigned, unsigned> &LocalVRMap,
+                         SmallVector<std::pair<unsigned,unsigned>, 4> &Copies) {
+  unsigned DefReg = MI->getOperand(0).getReg();
+  unsigned SrcOpIdx = getPHISrcRegOpIdx(MI, PredBB);
+  assert(SrcOpIdx && "Unable to find matching PHI source?");
+  unsigned SrcReg = MI->getOperand(SrcOpIdx).getReg();
+  const TargetRegisterClass *RC = MRI->getRegClass(DefReg);
+  LocalVRMap.insert(std::make_pair(DefReg, SrcReg));
+
+  // Insert a copy from source to the end of the block. The def register is the
+  // available value liveout of the block.
+  unsigned NewDef = MRI->createVirtualRegister(RC);
+  Copies.push_back(std::make_pair(NewDef, SrcReg));
+  if (isDefLiveOut(DefReg, TailBB, MRI))
+    AddSSAUpdateEntry(DefReg, NewDef, PredBB);
+
+  // Remove PredBB from the PHI node.
+  MI->RemoveOperand(SrcOpIdx+1);
+  MI->RemoveOperand(SrcOpIdx);
+  if (MI->getNumOperands() == 1)
+    MI->eraseFromParent();
+}
+
+/// DuplicateInstruction - Duplicate a TailBB instruction to PredBB and update
+/// the source operands due to earlier PHI translation.
+void TailDuplicatePass::DuplicateInstruction(MachineInstr *MI,
+                                     MachineBasicBlock *TailBB,
+                                     MachineBasicBlock *PredBB,
+                                     MachineFunction &MF,
+                                     DenseMap<unsigned, unsigned> &LocalVRMap) {
+  MachineInstr *NewMI = MF.CloneMachineInstr(MI);
+  for (unsigned i = 0, e = NewMI->getNumOperands(); i != e; ++i) {
+    MachineOperand &MO = NewMI->getOperand(i);
+    if (!MO.isReg())
+      continue;
+    unsigned Reg = MO.getReg();
+    if (!Reg || TargetRegisterInfo::isPhysicalRegister(Reg))
+      continue;
+    if (MO.isDef()) {
+      const TargetRegisterClass *RC = MRI->getRegClass(Reg);
+      unsigned NewReg = MRI->createVirtualRegister(RC);
+      MO.setReg(NewReg);
+      LocalVRMap.insert(std::make_pair(Reg, NewReg));
+      if (isDefLiveOut(Reg, TailBB, MRI))
+        AddSSAUpdateEntry(Reg, NewReg, PredBB);
+    } else {
+      DenseMap<unsigned, unsigned>::iterator VI = LocalVRMap.find(Reg);
+      if (VI != LocalVRMap.end())
+        MO.setReg(VI->second);
+    }
+  }
+  PredBB->insert(PredBB->end(), NewMI);
+}
+
+/// UpdateSuccessorsPHIs - After FromBB is tail duplicated into its predecessor
+/// blocks, the successors have gained new predecessors. Update the PHI
+/// instructions in them accordingly.
+void
+TailDuplicatePass::UpdateSuccessorsPHIs(MachineBasicBlock *FromBB, bool isDead,
+                                  SmallVector<MachineBasicBlock*, 8> &TDBBs,
+                                  SmallSetVector<MachineBasicBlock*,8> &Succs) {
+  for (SmallSetVector<MachineBasicBlock*, 8>::iterator SI = Succs.begin(),
+         SE = Succs.end(); SI != SE; ++SI) {
+    MachineBasicBlock *SuccBB = *SI;
+    for (MachineBasicBlock::iterator II = SuccBB->begin(), EE = SuccBB->end();
+         II != EE; ++II) {
+      if (II->getOpcode() != TargetInstrInfo::PHI)
+        break;
+      unsigned Idx = 0;
+      for (unsigned i = 1, e = II->getNumOperands(); i != e; i += 2) {
+        MachineOperand &MO = II->getOperand(i+1);
+        if (MO.getMBB() == FromBB) {
+          Idx = i;
+          break;
+        }
+      }
+
+      assert(Idx != 0);
+      MachineOperand &MO0 = II->getOperand(Idx);
+      unsigned Reg = MO0.getReg();
+      if (isDead) {
+        // Folded into the previous BB.
+        // There could be duplicate phi source entries. FIXME: Should sdisel
+        // or an earlier pass have fixed this?
+        for (unsigned i = II->getNumOperands()-2; i != Idx; i -= 2) {
+          MachineOperand &MO = II->getOperand(i+1);
+          if (MO.getMBB() == FromBB) {
+            II->RemoveOperand(i+1);
+            II->RemoveOperand(i);
+          }
+        }
+        II->RemoveOperand(Idx+1);
+        II->RemoveOperand(Idx);
+      }
+      DenseMap<unsigned,AvailableValsTy>::iterator LI=SSAUpdateVals.find(Reg);
+      if (LI != SSAUpdateVals.end()) {
+        // This register is defined in the tail block.
+        for (unsigned j = 0, ee = LI->second.size(); j != ee; ++j) {
+          MachineBasicBlock *SrcBB = LI->second[j].first;
+          unsigned SrcReg = LI->second[j].second;
+          II->addOperand(MachineOperand::CreateReg(SrcReg, false));
+          II->addOperand(MachineOperand::CreateMBB(SrcBB));
+        }
+      } else {
+        // Live in tail block, must also be live in predecessors.
+        for (unsigned j = 0, ee = TDBBs.size(); j != ee; ++j) {
+          MachineBasicBlock *SrcBB = TDBBs[j];
+          II->addOperand(MachineOperand::CreateReg(Reg, false));
+          II->addOperand(MachineOperand::CreateMBB(SrcBB));
+        }
+      }
+    }
+  }
+}
+
 /// TailDuplicate - If it is profitable, duplicate TailBB's contents in each
 /// of its predecessors.
-bool TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB,
-                                        MachineFunction &MF) {
+bool
+TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
+                                 SmallVector<MachineBasicBlock*, 8> &TDBBs) {
   // Don't try to tail-duplicate single-block loops.
   if (TailBB->isSuccessor(TailBB))
     return false;
@@ -116,42 +428,49 @@ bool TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB,
   // duplicate only one, because one branch instruction can be eliminated to
   // compensate for the duplication.
   unsigned MaxDuplicateCount;
-  if (MF.getFunction()->hasFnAttr(Attribute::OptimizeForSize))
-    MaxDuplicateCount = 1;
-  else if (TII->isProfitableToDuplicateIndirectBranch() &&
-           !TailBB->empty() && TailBB->back().getDesc().isIndirectBranch())
+  if (!TailBB->empty() && TailBB->back().getDesc().isIndirectBranch())
     // If the target has hardware branch prediction that can handle indirect
     // branches, duplicating them can often make them predictable when there
     // are common paths through the code.  The limit needs to be high enough
     // to allow undoing the effects of tail merging.
     MaxDuplicateCount = 20;
+  else if (MF.getFunction()->hasFnAttr(Attribute::OptimizeForSize))
+    MaxDuplicateCount = 1;
   else
     MaxDuplicateCount = TailDuplicateSize;
 
   // Check the instructions in the block to determine whether tail-duplication
   // is invalid or unlikely to be profitable.
-  unsigned i = 0;
+  unsigned InstrCount = 0;
   bool HasCall = false;
   for (MachineBasicBlock::iterator I = TailBB->begin();
-       I != TailBB->end(); ++I, ++i) {
+       I != TailBB->end(); ++I) {
     // Non-duplicable things shouldn't be tail-duplicated.
     if (I->getDesc().isNotDuplicable()) return false;
+    // Do not duplicate 'return' instructions if this is a pre-regalloc run.
+    // A return may expand into a lot more instructions (e.g. reload of callee
+    // saved registers) after PEI.
+    if (PreRegAlloc && I->getDesc().isReturn()) return false;
     // Don't duplicate more than the threshold.
-    if (i == MaxDuplicateCount) return false;
+    if (InstrCount == MaxDuplicateCount) return false;
     // Remember if we saw a call.
     if (I->getDesc().isCall()) HasCall = true;
+    if (I->getOpcode() != TargetInstrInfo::PHI)
+      InstrCount += 1;
   }
   // Heuristically, don't tail-duplicate calls if it would expand code size,
   // as it's less likely to be worth the extra cost.
-  if (i > 1 && HasCall)
+  if (InstrCount > 1 && HasCall)
     return false;
 
+  DEBUG(errs() << "\n*** Tail-duplicating BB#" << TailBB->getNumber() << '\n');
+
   // Iterate through all the unique predecessors and tail-duplicate this
   // block into them, if possible. Copying the list ahead of time also
   // avoids trouble with the predecessor list reallocating.
   bool Changed = false;
-  SmallSetVector<MachineBasicBlock *, 8> Preds(TailBB->pred_begin(),
-                                               TailBB->pred_end());
+  SmallSetVector<MachineBasicBlock*, 8> Preds(TailBB->pred_begin(),
+                                              TailBB->pred_end());
   for (SmallSetVector<MachineBasicBlock *, 8>::iterator PI = Preds.begin(),
        PE = Preds.end(); PI != PE; ++PI) {
     MachineBasicBlock *PredBB = *PI;
@@ -176,13 +495,32 @@ bool TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB,
     DEBUG(errs() << "\nTail-duplicating into PredBB: " << *PredBB
                  << "From Succ: " << *TailBB);
 
+    TDBBs.push_back(PredBB);
+
     // Remove PredBB's unconditional branch.
     TII->RemoveBranch(*PredBB);
+
     // Clone the contents of TailBB into PredBB.
-    for (MachineBasicBlock::iterator I = TailBB->begin(), E = TailBB->end();
-         I != E; ++I) {
-      MachineInstr *NewMI = MF.CloneMachineInstr(I);
-      PredBB->insert(PredBB->end(), NewMI);
+    DenseMap<unsigned, unsigned> LocalVRMap;
+    SmallVector<std::pair<unsigned,unsigned>, 4> Copies;
+    MachineBasicBlock::iterator I = TailBB->begin();
+    while (I != TailBB->end()) {
+      MachineInstr *MI = &*I;
+      ++I;
+      if (MI->getOpcode() == TargetInstrInfo::PHI) {
+        // Replace the uses of the def of the PHI with the register coming
+        // from PredBB.
+        ProcessPHI(MI, TailBB, PredBB, LocalVRMap, Copies);
+      } else {
+        // Replace def of virtual registers with new registers, and update
+        // uses with PHI source register or the new registers.
+        DuplicateInstruction(MI, TailBB, PredBB, MF, LocalVRMap);
+      }
+    }
+    MachineBasicBlock::iterator Loc = PredBB->getFirstTerminator();
+    for (unsigned i = 0, e = Copies.size(); i != e; ++i) {
+      const TargetRegisterClass *RC = MRI->getRegClass(Copies[i].first);
+      TII->copyRegToReg(*PredBB, Loc, Copies[i].first, Copies[i].second, RC, RC);
     }
     NumInstrDups += TailBB->size() - 1; // subtract one for removed branch
 
@@ -191,8 +529,8 @@ bool TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB,
     assert(PredBB->succ_empty() &&
            "TailDuplicate called on block with multiple successors!");
     for (MachineBasicBlock::succ_iterator I = TailBB->succ_begin(),
-         E = TailBB->succ_end(); I != E; ++I)
-       PredBB->addSuccessor(*I);
+           E = TailBB->succ_end(); I != E; ++I)
+      PredBB->addSuccessor(*I);
 
     Changed = true;
     ++NumTailDups;
@@ -201,22 +539,53 @@ bool TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB,
   // If TailBB was duplicated into all its predecessors except for the prior
   // block, which falls through unconditionally, move the contents of this
   // block into the prior block.
-  MachineBasicBlock &PrevBB = *prior(MachineFunction::iterator(TailBB));
+  MachineBasicBlock *PrevBB = prior(MachineFunction::iterator(TailBB));
   MachineBasicBlock *PriorTBB = 0, *PriorFBB = 0;
   SmallVector<MachineOperand, 4> PriorCond;
   bool PriorUnAnalyzable =
-    TII->AnalyzeBranch(PrevBB, PriorTBB, PriorFBB, PriorCond, true);
+    TII->AnalyzeBranch(*PrevBB, PriorTBB, PriorFBB, PriorCond, true);
   // This has to check PrevBB->succ_size() because EH edges are ignored by
   // AnalyzeBranch.
   if (!PriorUnAnalyzable && PriorCond.empty() && !PriorTBB &&
-      TailBB->pred_size() == 1 && PrevBB.succ_size() == 1 &&
+      TailBB->pred_size() == 1 && PrevBB->succ_size() == 1 &&
       !TailBB->hasAddressTaken()) {
-    DEBUG(errs() << "\nMerging into block: " << PrevBB
+    DEBUG(errs() << "\nMerging into block: " << *PrevBB
           << "From MBB: " << *TailBB);
-    PrevBB.splice(PrevBB.end(), TailBB, TailBB->begin(), TailBB->end());
-    PrevBB.removeSuccessor(PrevBB.succ_begin());;
-    assert(PrevBB.succ_empty());
-    PrevBB.transferSuccessors(TailBB);
+    if (PreRegAlloc) {
+      DenseMap<unsigned, unsigned> LocalVRMap;
+      SmallVector<std::pair<unsigned,unsigned>, 4> Copies;
+      MachineBasicBlock::iterator I = TailBB->begin();
+      // Process PHI instructions first.
+      while (I != TailBB->end() && I->getOpcode() == TargetInstrInfo::PHI) {
+        // Replace the uses of the def of the PHI with the register coming
+        // from PredBB.
+        MachineInstr *MI = &*I++;
+        ProcessPHI(MI, TailBB, PrevBB, LocalVRMap, Copies);
+        if (MI->getParent())
+          MI->eraseFromParent();
+      }
+
+      // Now copy the non-PHI instructions.
+      while (I != TailBB->end()) {
+        // Replace def of virtual registers with new registers, and update
+        // uses with PHI source register or the new registers.
+        MachineInstr *MI = &*I++;
+        DuplicateInstruction(MI, TailBB, PrevBB, MF, LocalVRMap);
+        MI->eraseFromParent();
+      }
+      MachineBasicBlock::iterator Loc = PrevBB->getFirstTerminator();
+      for (unsigned i = 0, e = Copies.size(); i != e; ++i) {
+        const TargetRegisterClass *RC = MRI->getRegClass(Copies[i].first);
+        TII->copyRegToReg(*PrevBB, Loc, Copies[i].first, Copies[i].second, RC, RC);
+      }
+    } else {
+      // No PHIs to worry about, just splice the instructions over.
+      PrevBB->splice(PrevBB->end(), TailBB, TailBB->begin(), TailBB->end());
+    }
+    PrevBB->removeSuccessor(PrevBB->succ_begin());
+    assert(PrevBB->succ_empty());
+    PrevBB->transferSuccessors(TailBB);
+    TDBBs.push_back(PrevBB);
     Changed = true;
   }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/TargetInstrInfoImpl.cpp b/libclamav/c++/llvm/lib/CodeGen/TargetInstrInfoImpl.cpp
index 102e2a3..393e315 100644
--- a/libclamav/c++/llvm/lib/CodeGen/TargetInstrInfoImpl.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/TargetInstrInfoImpl.cpp
@@ -329,7 +329,7 @@ TargetInstrInfo::isReallyTriviallyReMaterializableGeneric(const MachineInstr *
       return false;
 
     // For the def, it should be the only def of that register.
-    if (MO.isDef() && (next(MRI.def_begin(Reg)) != MRI.def_end() ||
+    if (MO.isDef() && (llvm::next(MRI.def_begin(Reg)) != MRI.def_end() ||
                        MRI.isLiveIn(Reg)))
       return false;
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp b/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
index 5fa690b..98b95ac 100644
--- a/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
@@ -211,7 +211,7 @@ bool TwoAddressInstructionPass::Sink3AddrInstruction(MachineBasicBlock *MBB,
   ++KillPos;
 
   unsigned NumVisited = 0;
-  for (MachineBasicBlock::iterator I = next(OldPos); I != KillPos; ++I) {
+  for (MachineBasicBlock::iterator I = llvm::next(OldPos); I != KillPos; ++I) {
     MachineInstr *OtherMI = I;
     if (NumVisited > 30)  // FIXME: Arbitrary limit to reduce compile time cost.
       return false;
@@ -412,7 +412,7 @@ static bool isKilled(MachineInstr &MI, unsigned Reg,
     MachineRegisterInfo::def_iterator Begin = MRI->def_begin(Reg);
     // If there are multiple defs, we can't do a simple analysis, so just
     // go with what the kill flag says.
-    if (next(Begin) != MRI->def_end())
+    if (llvm::next(Begin) != MRI->def_end())
       return true;
     DefMI = &*Begin;
     bool IsSrcPhys, IsDstPhys;
@@ -643,7 +643,7 @@ TwoAddressInstructionPass::ConvertInstTo3Addr(MachineBasicBlock::iterator &mi,
     if (!Sunk) {
       DistanceMap.insert(std::make_pair(NewMI, Dist));
       mi = NewMI;
-      nmi = next(mi);
+      nmi = llvm::next(mi);
     }
     return true;
   }
@@ -923,7 +923,7 @@ bool TwoAddressInstructionPass::runOnMachineFunction(MachineFunction &MF) {
     Processed.clear();
     for (MachineBasicBlock::iterator mi = mbbi->begin(), me = mbbi->end();
          mi != me; ) {
-      MachineBasicBlock::iterator nmi = next(mi);
+      MachineBasicBlock::iterator nmi = llvm::next(mi);
       const TargetInstrDesc &TID = mi->getDesc();
       bool FirstTied = true;
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp b/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp
index 10c8066..054c3b6 100644
--- a/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp
@@ -754,7 +754,7 @@ void AvailableSpills::AddAvailableRegsToLiveIn(MachineBasicBlock &MBB,
     }
 
     // Skip over the same register.
-    std::multimap<unsigned, int>::iterator NI = next(I);
+    std::multimap<unsigned, int>::iterator NI = llvm::next(I);
     while (NI != E && NI->first == Reg) {
       ++I;
       ++NI;
@@ -1133,7 +1133,7 @@ private:
                          std::vector<MachineOperand*> &KillOps,
                          VirtRegMap &VRM) {
 
-    MachineBasicBlock::iterator NextMII = next(MII);
+    MachineBasicBlock::iterator NextMII = llvm::next(MII);
     if (NextMII == MBB.end())
       return false;
 
@@ -1186,7 +1186,7 @@ private:
     // Unfold next instructions that fold the same SS.
     do {
       MachineInstr &NextMI = *NextMII;
-      NextMII = next(NextMII);
+      NextMII = llvm::next(NextMII);
       NewMIs.clear();
       if (!TII->unfoldMemoryOperand(MF, &NextMI, VirtReg, false, false, NewMIs))
        llvm_unreachable("Unable to unfold the load / store folding instruction!");
@@ -1463,8 +1463,8 @@ private:
                            std::vector<MachineOperand*> &KillOps,
                            VirtRegMap &VRM) {
 
-    MachineBasicBlock::iterator oldNextMII = next(MII);
-    TII->storeRegToStackSlot(MBB, next(MII), PhysReg, true, StackSlot, RC);
+    MachineBasicBlock::iterator oldNextMII = llvm::next(MII);
+    TII->storeRegToStackSlot(MBB, llvm::next(MII), PhysReg, true, StackSlot, RC);
     MachineInstr *StoreMI = prior(oldNextMII);
     VRM.addSpillSlotUse(StackSlot, StoreMI);
     DEBUG(errs() << "Store:\t" << *StoreMI);
@@ -1626,14 +1626,14 @@ private:
     DistanceMap.clear();
     for (MachineBasicBlock::iterator MII = MBB.begin(), E = MBB.end();
          MII != E; ) {
-      MachineBasicBlock::iterator NextMII = next(MII);
+      MachineBasicBlock::iterator NextMII = llvm::next(MII);
 
       VirtRegMap::MI2VirtMapTy::const_iterator I, End;
       bool Erased = false;
       bool BackTracked = false;
       if (OptimizeByUnfold(MBB, MII,
                            MaybeDeadStores, Spills, RegKills, KillOps, VRM))
-        NextMII = next(MII);
+        NextMII = llvm::next(MII);
 
       MachineInstr &MI = *MII;
 
@@ -1657,7 +1657,7 @@ private:
 
           // Back-schedule reloads and remats.
           MachineBasicBlock::iterator InsertLoc =
-            ComputeReloadLoc(next(MII), MBB.begin(), PhysReg, TRI, false,
+            ComputeReloadLoc(llvm::next(MII), MBB.begin(), PhysReg, TRI, false,
                              SS, TII, MF);
 
           TII->loadRegFromStackSlot(MBB, InsertLoc, PhysReg, SS, RC);
@@ -1667,7 +1667,7 @@ private:
           ++NumPSpills;
           DistanceMap.insert(std::make_pair(LoadMI, Dist++));
         }
-        NextMII = next(MII);
+        NextMII = llvm::next(MII);
       }
 
       // Insert restores here if asked to.
@@ -1785,14 +1785,14 @@ private:
           const TargetRegisterClass *RC = RegInfo->getRegClass(VirtReg);
           unsigned Phys = VRM.getPhys(VirtReg);
           int StackSlot = VRM.getStackSlot(VirtReg);
-          MachineBasicBlock::iterator oldNextMII = next(MII);
-          TII->storeRegToStackSlot(MBB, next(MII), Phys, isKill, StackSlot, RC);
+          MachineBasicBlock::iterator oldNextMII = llvm::next(MII);
+          TII->storeRegToStackSlot(MBB, llvm::next(MII), Phys, isKill, StackSlot, RC);
           MachineInstr *StoreMI = prior(oldNextMII);
           VRM.addSpillSlotUse(StackSlot, StoreMI);
           DEBUG(errs() << "Store:\t" << *StoreMI);
           VRM.virtFolded(VirtReg, StoreMI, VirtRegMap::isMod);
         }
-        NextMII = next(MII);
+        NextMII = llvm::next(MII);
       }
 
       /// ReusedOperands - Keep track of operand reuse in case we need to undo
@@ -2265,7 +2265,7 @@ private:
 
               if (CommuteToFoldReload(MBB, MII, VirtReg, SrcReg, StackSlot,
                                       Spills, RegKills, KillOps, TRI, VRM)) {
-                NextMII = next(MII);
+                NextMII = llvm::next(MII);
                 BackTracked = true;
                 goto ProcessNextInst;
               }
@@ -2381,7 +2381,7 @@ private:
           MachineInstr *&LastStore = MaybeDeadStores[StackSlot];
           SpillRegToStackSlot(MBB, MII, -1, PhysReg, StackSlot, RC, true,
                             LastStore, Spills, ReMatDefs, RegKills, KillOps, VRM);
-          NextMII = next(MII);
+          NextMII = llvm::next(MII);
 
           // Check to see if this is a noop copy.  If so, eliminate the
           // instruction before considering the dest reg to be changed.
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/OProfileJITEventListener.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/OProfileJITEventListener.cpp
index b45c71f..52a8f71 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/OProfileJITEventListener.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/OProfileJITEventListener.cpp
@@ -69,24 +69,18 @@ OProfileJITEventListener::~OProfileJITEventListener() {
 }
 
 class FilenameCache {
-  // Holds the filename of each Scope, so that we can pass the
-  // pointer into oprofile.  These char*s are freed in the destructor.
-  DenseMap<MDNode*, char*> Filenames;
+  // Holds the filename of each Scope, so that we can pass a null-terminated
+  // string into oprofile.
+  DenseMap<MDNode*, std::string> Filenames;
 
  public:
   const char *getFilename(MDNode *Scope) {
-    char *&Filename = Filenames[Scope];
-    if (Filename == NULL) {
+    std::string &Filename = Filenames[Scope];
+    if (Filename.empty()) {
       DIScope S(Scope);
-      Filename = strdup(S.getFilename());
-    }
-    return Filename;
-  }
-  ~FilenameCache() {
-    for (DenseMap<MDNode*, char*>::iterator
-             I = Filenames.begin(), E = Filenames.end(); I != E; ++I) {
-      free(I->second);
+      Filename = S.getFilename();
     }
+    return Filename.c_str();
   }
 };
 
diff --git a/libclamav/c++/llvm/lib/Support/CMakeLists.txt b/libclamav/c++/llvm/lib/Support/CMakeLists.txt
index cd355ff..ac736dc 100644
--- a/libclamav/c++/llvm/lib/Support/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Support/CMakeLists.txt
@@ -6,6 +6,7 @@ add_llvm_library(LLVMSupport
   CommandLine.cpp
   ConstantRange.cpp
   Debug.cpp
+  DeltaAlgorithm.cpp
   Dwarf.cpp
   ErrorHandling.cpp
   FileUtilities.cpp
diff --git a/libclamav/c++/llvm/lib/Support/CommandLine.cpp b/libclamav/c++/llvm/lib/Support/CommandLine.cpp
index 9cf9c89..b6c0e08 100644
--- a/libclamav/c++/llvm/lib/Support/CommandLine.cpp
+++ b/libclamav/c++/llvm/lib/Support/CommandLine.cpp
@@ -778,9 +778,10 @@ void cl::ParseCommandLineOptions(int argc, char **argv,
       free(*i);
   }
 
-  DEBUG(errs() << "\nArgs: ";
+  DEBUG(errs() << "Args: ";
         for (int i = 0; i < argc; ++i)
           errs() << argv[i] << ' ';
+        errs() << '\n';
        );
 
   // If we had an error processing our arguments, don't let the program execute
diff --git a/libclamav/c++/llvm/lib/Support/DeltaAlgorithm.cpp b/libclamav/c++/llvm/lib/Support/DeltaAlgorithm.cpp
new file mode 100644
index 0000000..d176548
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Support/DeltaAlgorithm.cpp
@@ -0,0 +1,114 @@
+//===--- DeltaAlgorithm.cpp - A Set Minimization Algorithm -----*- C++ -*--===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//===----------------------------------------------------------------------===//
+
+#include "llvm/ADT/DeltaAlgorithm.h"
+#include <algorithm>
+#include <iterator>
+using namespace llvm;
+
+DeltaAlgorithm::~DeltaAlgorithm() {
+}
+
+bool DeltaAlgorithm::GetTestResult(const changeset_ty &Changes) {
+  if (FailedTestsCache.count(Changes))
+    return false;
+
+  bool Result = ExecuteOneTest(Changes);
+  if (!Result)
+    FailedTestsCache.insert(Changes);
+
+  return Result;
+}
+
+void DeltaAlgorithm::Split(const changeset_ty &S, changesetlist_ty &Res) {
+  // FIXME: Allow clients to provide heuristics for improved splitting.
+
+  // FIXME: This is really slow.
+  changeset_ty LHS, RHS;
+  unsigned idx = 0;
+  for (changeset_ty::const_iterator it = S.begin(),
+         ie = S.end(); it != ie; ++it, ++idx)
+    ((idx & 1) ? LHS : RHS).insert(*it);
+  if (!LHS.empty())
+    Res.push_back(LHS);
+  if (!RHS.empty())
+    Res.push_back(RHS);
+}
+
+DeltaAlgorithm::changeset_ty
+DeltaAlgorithm::Delta(const changeset_ty &Changes,
+                      const changesetlist_ty &Sets) {
+  // Invariant: union(Res) == Changes
+  UpdatedSearchState(Changes, Sets);
+
+  // If there is nothing left we can remove, we are done.
+  if (Sets.size() <= 1)
+    return Changes;
+
+  // Look for a passing subset.
+  changeset_ty Res;
+  if (Search(Changes, Sets, Res))
+    return Res;
+
+  // Otherwise, partition the sets if possible; if not we are done.
+  changesetlist_ty SplitSets;
+  for (changesetlist_ty::const_iterator it = Sets.begin(),
+         ie = Sets.end(); it != ie; ++it)
+    Split(*it, SplitSets);
+  if (SplitSets.size() == Sets.size())
+    return Changes;
+
+  return Delta(Changes, SplitSets);
+}
+
+bool DeltaAlgorithm::Search(const changeset_ty &Changes,
+                            const changesetlist_ty &Sets,
+                            changeset_ty &Res) {
+  // FIXME: Parallelize.
+  for (changesetlist_ty::const_iterator it = Sets.begin(),
+         ie = Sets.end(); it != ie; ++it) {
+    // If the test passes on this subset alone, recurse.
+    if (GetTestResult(*it)) {
+      changesetlist_ty Sets;
+      Split(*it, Sets);
+      Res = Delta(*it, Sets);
+      return true;
+    }
+
+    // Otherwise, if we have more than two sets, see if test passes on the
+    // complement.
+    if (Sets.size() > 2) {
+      // FIXME: This is really slow.
+      changeset_ty Complement;
+      std::set_difference(
+        Changes.begin(), Changes.end(), it->begin(), it->end(),
+        std::insert_iterator<changeset_ty>(Complement, Complement.begin()));
+      if (GetTestResult(Complement)) {
+        changesetlist_ty ComplementSets;
+        ComplementSets.insert(ComplementSets.end(), Sets.begin(), it);
+        ComplementSets.insert(ComplementSets.end(), it + 1, Sets.end());
+        Res = Delta(Complement, ComplementSets);
+        return true;
+      }
+    }
+  }
+
+  return false;
+}
+
+DeltaAlgorithm::changeset_ty DeltaAlgorithm::Run(const changeset_ty &Changes) {
+  // Check empty set first to quickly find poor test functions.
+  if (GetTestResult(changeset_ty()))
+    return changeset_ty();
+
+  // Otherwise run the real delta algorithm.
+  changesetlist_ty Sets;
+  Split(Changes, Sets);
+
+  return Delta(Changes, Sets);
+}
diff --git a/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp b/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp
index b04864a..df1aa6a 100644
--- a/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp
+++ b/libclamav/c++/llvm/lib/Support/MemoryBuffer.cpp
@@ -176,7 +176,7 @@ MemoryBuffer *MemoryBuffer::getFile(StringRef Filename, std::string *ErrStr,
 #endif
   int FD = ::open(Filename.str().c_str(), O_RDONLY|OpenFlags);
   if (FD == -1) {
-    if (ErrStr) *ErrStr = "could not open file";
+    if (ErrStr) *ErrStr = strerror(errno);
     return 0;
   }
   
@@ -186,7 +186,7 @@ MemoryBuffer *MemoryBuffer::getFile(StringRef Filename, std::string *ErrStr,
     struct stat FileInfo;
     // TODO: This should use fstat64 when available.
     if (fstat(FD, &FileInfo) == -1) {
-      if (ErrStr) *ErrStr = "could not get file length";
+      if (ErrStr) *ErrStr = strerror(errno);
       ::close(FD);
       return 0;
     }
@@ -230,8 +230,8 @@ MemoryBuffer *MemoryBuffer::getFile(StringRef Filename, std::string *ErrStr,
       // try again
     } else {
       // error reading.
+      if (ErrStr) *ErrStr = strerror(errno);
       close(FD);
-      if (ErrStr) *ErrStr = "error reading file data";
       return 0;
     }
   }
diff --git a/libclamav/c++/llvm/lib/System/Unix/Path.inc b/libclamav/c++/llvm/lib/System/Unix/Path.inc
index 4300d67..33b26f7 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Path.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Path.inc
@@ -77,7 +77,7 @@ inline bool lastIsSlash(const std::string& path) {
 namespace llvm {
 using namespace sys;
 
-extern const char sys::PathSeparator = ':';
+const char sys::PathSeparator = ':';
 
 Path::Path(const std::string& p)
   : path(p) {}
@@ -348,7 +348,8 @@ Path Path::GetMainExecutable(const char *argv0, void *MainAddr) {
   uint32_t size = sizeof(exe_path);
   if (_NSGetExecutablePath(exe_path, &size) == 0) {
     char link_path[MAXPATHLEN];
-    return Path(std::string(realpath(exe_path, link_path)));
+    if (realpath(exe_path, link_path))
+      return Path(std::string(link_path));
   }
 #elif defined(__FreeBSD__)
   char exe_path[PATH_MAX];
@@ -370,7 +371,8 @@ Path Path::GetMainExecutable(const char *argv0, void *MainAddr) {
   // If the filename is a symlink, we need to resolve and return the location of
   // the actual executable.
   char link_path[MAXPATHLEN];
-  return Path(std::string(realpath(DLInfo.dli_fname, link_path)));
+  if (realpath(DLInfo.dli_fname, link_path))
+    return Path(std::string(link_path));
 #endif
   return Path();
 }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARM.h b/libclamav/c++/llvm/lib/Target/ARM/ARM.h
index ff1980d..21445ad 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARM.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARM.h
@@ -109,7 +109,6 @@ FunctionPass *createNEONPreAllocPass();
 FunctionPass *createNEONMoveFixPass();
 FunctionPass *createThumb2ITBlockPass();
 FunctionPass *createThumb2SizeReductionPass();
-FunctionPass *createARMMaxStackAlignmentCalculatorPass();
 
 extern Target TheARMTarget, TheThumbTarget;
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
index 705f970..1aae369 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
@@ -418,11 +418,13 @@ bool ARMBaseInstrInfo::isPredicable(MachineInstr *MI) const {
   return true;
 }
 
-/// FIXME: Works around a gcc miscompilation with -fstrict-aliasing
+/// FIXME: Works around a gcc miscompilation with -fstrict-aliasing.
+DISABLE_INLINE
 static unsigned getNumJTEntries(const std::vector<MachineJumpTableEntry> &JT,
-                                unsigned JTI) DISABLE_INLINE;
+                                unsigned JTI);
 static unsigned getNumJTEntries(const std::vector<MachineJumpTableEntry> &JT,
                                 unsigned JTI) {
+  assert(JTI < JT.size());
   return JT[JTI].MBBs.size();
 }
 
@@ -467,6 +469,8 @@ unsigned ARMBaseInstrInfo::GetInstSizeInBytes(const MachineInstr *MI) const {
       return MI->getOperand(2).getImm();
     case ARM::Int_eh_sjlj_setjmp:
       return 24;
+    case ARM::tInt_eh_sjlj_setjmp:
+      return 22;
     case ARM::t2Int_eh_sjlj_setjmp:
       return 22;
     case ARM::BR_JTr:
@@ -755,7 +759,6 @@ loadRegFromStackSlot(MachineBasicBlock &MBB, MachineBasicBlock::iterator I,
     assert((RC == ARM::QPRRegisterClass ||
             RC == ARM::QPR_VFP2RegisterClass ||
             RC == ARM::QPR_8RegisterClass) && "Unknown regclass!");
-    // FIXME: Neon instructions should support predicates
     if (Align >= 16
         && (getRegisterInfo().needsStackRealignment(MF))) {
       AddDefaultPred(BuildMI(MBB, I, DL, get(ARM::VLD1q64), DestReg)
@@ -1027,12 +1030,6 @@ bool ARMBaseInstrInfo::isIdentical(const MachineInstr *MI0,
   return TargetInstrInfoImpl::isIdentical(MI0, MI1, MRI);
 }
 
-bool ARMBaseInstrInfo::isProfitableToDuplicateIndirectBranch() const {
-  // If the target processor can predict indirect branches, it is highly
-  // desirable to duplicate them, since it can often make them predictable.
-  return getSubtarget().hasBranchTargetBuffer();
-}
-
 /// getInstrPredicate - If instruction is predicated, returns its predicate
 /// condition, otherwise returns AL. It also returns the condition code
 /// register by reference.
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.h
index 7944f35..78d9135 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.h
@@ -92,6 +92,8 @@ namespace ARMII {
     StMiscFrm     = 9  << FormShift,
     LdStMulFrm    = 10 << FormShift,
 
+    LdStExFrm     = 28 << FormShift,
+
     // Miscellaneous arithmetic instructions
     ArithMiscFrm  = 11 << FormShift,
 
@@ -190,9 +192,6 @@ public:
   // if there is not such an opcode.
   virtual unsigned getUnindexedOpcode(unsigned Opc) const =0;
 
-  // Return true if the block does not fall through.
-  virtual bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const =0;
-
   virtual MachineInstr *convertToThreeAddress(MachineFunction::iterator &MFI,
                                               MachineBasicBlock::iterator &MBBI,
                                               LiveVariables *LV) const;
@@ -290,8 +289,6 @@ public:
 
   virtual bool isIdentical(const MachineInstr *MI, const MachineInstr *Other,
                            const MachineRegisterInfo *MRI) const;
-
-  virtual bool isProfitableToDuplicateIndirectBranch() const;
 };
 
 static inline
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
index 653328d..9b5f79f 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
@@ -471,21 +471,6 @@ ARMBaseRegisterInfo::UpdateRegAllocHint(unsigned Reg, unsigned NewReg,
   }
 }
 
-static unsigned calculateMaxStackAlignment(const MachineFrameInfo *FFI) {
-  unsigned MaxAlign = 0;
-
-  for (int i = FFI->getObjectIndexBegin(),
-         e = FFI->getObjectIndexEnd(); i != e; ++i) {
-    if (FFI->isDeadObjectIndex(i))
-      continue;
-
-    unsigned Align = FFI->getObjectAlignment(i);
-    MaxAlign = std::max(MaxAlign, Align);
-  }
-
-  return MaxAlign;
-}
-
 /// hasFP - Return true if the specified function should have a dedicated frame
 /// pointer register.  This is true if the function has variable sized allocas
 /// or if frame pointer elimination is disabled.
@@ -585,16 +570,21 @@ ARMBaseRegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
   SmallVector<unsigned, 4> UnspilledCS2GPRs;
   ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
 
-  MachineFrameInfo *MFI = MF.getFrameInfo();
 
   // Calculate and set max stack object alignment early, so we can decide
   // whether we will need stack realignment (and thus FP).
   if (RealignStack) {
-    unsigned MaxAlign = std::max(MFI->getMaxAlignment(),
-                                 calculateMaxStackAlignment(MFI));
-    MFI->setMaxAlignment(MaxAlign);
+    MachineFrameInfo *MFI = MF.getFrameInfo();
+    MFI->calculateMaxStackAlignment();
   }
 
+  // Spill R4 if Thumb2 function requires stack realignment - it will be used as
+  // scratch register.
+  // FIXME: It will be better just to find spare register here.
+  if (needsStackRealignment(MF) &&
+      AFI->isThumb2Function())
+    MF.getRegInfo().setPhysRegUsed(ARM::R4);
+
   // Don't spill FP if the frame can be eliminated. This is determined
   // by scanning the callee-save registers to see if any is used.
   const unsigned *CSRegs = getCalleeSavedRegs();
@@ -1368,14 +1358,30 @@ emitPrologue(MachineFunction &MF) const {
 
   // If we need dynamic stack realignment, do it here.
   if (needsStackRealignment(MF)) {
-    unsigned Opc;
     unsigned MaxAlign = MFI->getMaxAlignment();
     assert (!AFI->isThumb1OnlyFunction());
-    Opc = AFI->isThumbFunction() ? ARM::t2BICri : ARM::BICri;
-
-    AddDefaultCC(AddDefaultPred(BuildMI(MBB, MBBI, dl, TII.get(Opc), ARM::SP)
+    if (!AFI->isThumbFunction()) {
+      // Emit bic sp, sp, MaxAlign
+      AddDefaultCC(AddDefaultPred(BuildMI(MBB, MBBI, dl,
+                                          TII.get(ARM::BICri), ARM::SP)
                                   .addReg(ARM::SP, RegState::Kill)
                                   .addImm(MaxAlign-1)));
+    } else {
+      // We cannot use sp as source/dest register here, thus we're emitting the
+      // following sequence:
+      // mov r4, sp
+      // bic r4, r4, MaxAlign
+      // mov sp, r4
+      // FIXME: It will be better just to find spare register here.
+      BuildMI(MBB, MBBI, dl, TII.get(ARM::tMOVtgpr2gpr), ARM::R4)
+        .addReg(ARM::SP, RegState::Kill);
+      AddDefaultCC(AddDefaultPred(BuildMI(MBB, MBBI, dl,
+                                          TII.get(ARM::t2BICri), ARM::R4)
+                                  .addReg(ARM::R4, RegState::Kill)
+                                  .addImm(MaxAlign-1)));
+      BuildMI(MBB, MBBI, dl, TII.get(ARM::tMOVtgpr2gpr), ARM::SP)
+        .addReg(ARM::R4, RegState::Kill);
+    }
   }
 }
 
@@ -1479,48 +1485,4 @@ emitEpilogue(MachineFunction &MF, MachineBasicBlock &MBB) const {
     emitSPUpdate(isARM, MBB, MBBI, dl, TII, VARegSaveSize);
 }
 
-namespace {
-  struct MaximalStackAlignmentCalculator : public MachineFunctionPass {
-    static char ID;
-    MaximalStackAlignmentCalculator() : MachineFunctionPass(&ID) {}
-
-    virtual bool runOnMachineFunction(MachineFunction &MF) {
-      MachineFrameInfo *FFI = MF.getFrameInfo();
-      MachineRegisterInfo &RI = MF.getRegInfo();
-
-      // Calculate max stack alignment of all already allocated stack objects.
-      unsigned MaxAlign = calculateMaxStackAlignment(FFI);
-
-      // Be over-conservative: scan over all vreg defs and find, whether vector
-      // registers are used. If yes - there is probability, that vector register
-      // will be spilled and thus stack needs to be aligned properly.
-      for (unsigned RegNum = TargetRegisterInfo::FirstVirtualRegister;
-           RegNum < RI.getLastVirtReg(); ++RegNum)
-        MaxAlign = std::max(MaxAlign, RI.getRegClass(RegNum)->getAlignment());
-
-      if (FFI->getMaxAlignment() == MaxAlign)
-        return false;
-
-      FFI->setMaxAlignment(MaxAlign);
-      return true;
-    }
-
-    virtual const char *getPassName() const {
-      return "ARM Stack Required Alignment Auto-Detector";
-    }
-
-    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
-      AU.setPreservesCFG();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
-  };
-
-  char MaximalStackAlignmentCalculator::ID = 0;
-}
-
-FunctionPass*
-llvm::createARMMaxStackAlignmentCalculatorPass() {
-  return new MaximalStackAlignmentCalculator();
-}
-
 #include "ARMGenRegisterInfo.inc"
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp
index e59a315..acd30d2 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp
@@ -418,10 +418,10 @@ void ARMConstantIslands::DoInitialPlacement(MachineFunction &MF,
 static bool BBHasFallthrough(MachineBasicBlock *MBB) {
   // Get the next machine basic block in the function.
   MachineFunction::iterator MBBI = MBB;
-  if (next(MBBI) == MBB->getParent()->end())  // Can't fall off end of function.
+  if (llvm::next(MBBI) == MBB->getParent()->end())  // Can't fall off end of function.
     return false;
 
-  MachineBasicBlock *NextBB = next(MBBI);
+  MachineBasicBlock *NextBB = llvm::next(MBBI);
   for (MachineBasicBlock::succ_iterator I = MBB->succ_begin(),
        E = MBB->succ_end(); I != E; ++I)
     if (*I == NextBB)
@@ -760,7 +760,7 @@ MachineBasicBlock *ARMConstantIslands::SplitBlockBeforeInstr(MachineInstr *MI) {
                      CompareMBBNumbers);
   MachineBasicBlock* WaterBB = *IP;
   if (WaterBB == OrigBB)
-    WaterList.insert(next(IP), NewBB);
+    WaterList.insert(llvm::next(IP), NewBB);
   else
     WaterList.insert(IP, OrigBB);
   NewWaterList.insert(OrigBB);
@@ -887,7 +887,7 @@ static bool BBIsJumpedOver(MachineBasicBlock *MBB) {
 
 void ARMConstantIslands::AdjustBBOffsetsAfter(MachineBasicBlock *BB,
                                               int delta) {
-  MachineFunction::iterator MBBI = BB; MBBI = next(MBBI);
+  MachineFunction::iterator MBBI = BB; MBBI = llvm::next(MBBI);
   for(unsigned i = BB->getNumber()+1, e = BB->getParent()->getNumBlockIDs();
       i < e; ++i) {
     BBOffsets[i] += delta;
@@ -929,7 +929,7 @@ void ARMConstantIslands::AdjustBBOffsetsAfter(MachineBasicBlock *BB,
       if (delta==0)
         return;
     }
-    MBBI = next(MBBI);
+    MBBI = llvm::next(MBBI);
   }
 }
 
@@ -1096,7 +1096,7 @@ void ARMConstantIslands::CreateNewWater(unsigned CPUserIndex,
     DEBUG(errs() << "Split at end of block\n");
     if (&UserMBB->back() == UserMI)
       assert(BBHasFallthrough(UserMBB) && "Expected a fallthrough BB!");
-    NewMBB = next(MachineFunction::iterator(UserMBB));
+    NewMBB = llvm::next(MachineFunction::iterator(UserMBB));
     // Add an unconditional branch from UserMBB to fallthrough block.
     // Record it for branch lengthening; this new branch will not get out of
     // range, but if the preceding conditional branch is out of range, the
@@ -1144,7 +1144,7 @@ void ARMConstantIslands::CreateNewWater(unsigned CPUserIndex,
     for (unsigned Offset = UserOffset+TII->GetInstSizeInBytes(UserMI);
          Offset < BaseInsertOffset;
          Offset += TII->GetInstSizeInBytes(MI),
-            MI = next(MI)) {
+            MI = llvm::next(MI)) {
       if (CPUIndex < CPUsers.size() && CPUsers[CPUIndex].MI == MI) {
         CPUser &U = CPUsers[CPUIndex];
         if (!OffsetIsInRange(Offset, EndInsertOffset,
@@ -1204,7 +1204,7 @@ bool ARMConstantIslands::HandleConstantPoolUser(MachineFunction &MF,
       NewWaterList.insert(NewIsland);
     }
     // The new CPE goes before the following block (NewMBB).
-    NewMBB = next(MachineFunction::iterator(WaterBB));
+    NewMBB = llvm::next(MachineFunction::iterator(WaterBB));
 
   } else {
     // No water found.
@@ -1406,7 +1406,7 @@ ARMConstantIslands::FixUpConditionalBr(MachineFunction &MF, ImmBranch &Br) {
 
   NumCBrFixed++;
   if (BMI != MI) {
-    if (next(MachineBasicBlock::iterator(MI)) == prior(MBB->end()) &&
+    if (llvm::next(MachineBasicBlock::iterator(MI)) == prior(MBB->end()) &&
         BMI->getOpcode() == Br.UncondBr) {
       // Last MI in the BB is an unconditional branch. Can we simply invert the
       // condition and swap destinations:
@@ -1433,12 +1433,12 @@ ARMConstantIslands::FixUpConditionalBr(MachineFunction &MF, ImmBranch &Br) {
     // branch to the destination.
     int delta = TII->GetInstSizeInBytes(&MBB->back());
     BBSizes[MBB->getNumber()] -= delta;
-    MachineBasicBlock* SplitBB = next(MachineFunction::iterator(MBB));
+    MachineBasicBlock* SplitBB = llvm::next(MachineFunction::iterator(MBB));
     AdjustBBOffsetsAfter(SplitBB, -delta);
     MBB->back().eraseFromParent();
     // BBOffsets[SplitBB] is wrong temporarily, fixed below
   }
-  MachineBasicBlock *NextBB = next(MachineFunction::iterator(MBB));
+  MachineBasicBlock *NextBB = llvm::next(MachineFunction::iterator(MBB));
 
   DEBUG(errs() << "  Insert B to BB#" << DestBB->getNumber()
                << " also invert condition and change dest. to BB#"
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMExpandPseudoInsts.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMExpandPseudoInsts.cpp
index c929c54..1b8727d 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMExpandPseudoInsts.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMExpandPseudoInsts.cpp
@@ -48,7 +48,7 @@ bool ARMExpandPseudo::ExpandMBB(MachineBasicBlock &MBB) {
   MachineBasicBlock::iterator MBBI = MBB.begin(), E = MBB.end();
   while (MBBI != E) {
     MachineInstr &MI = *MBBI;
-    MachineBasicBlock::iterator NMBBI = next(MBBI);
+    MachineBasicBlock::iterator NMBBI = llvm::next(MBBI);
 
     unsigned Opcode = MI.getOpcode();
     switch (Opcode) {
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
index c839fc6..ac6b203 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -42,6 +42,7 @@
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/MathExtras.h"
+#include "llvm/Support/raw_ostream.h"
 #include <sstream>
 using namespace llvm;
 
@@ -377,7 +378,7 @@ ARMTargetLowering::ARMTargetLowering(TargetMachine &TM)
     setOperationAction(ISD::DYNAMIC_STACKALLOC, MVT::i32, Custom);
   else
     setOperationAction(ISD::DYNAMIC_STACKALLOC, MVT::i32, Expand);
-  setOperationAction(ISD::MEMBARRIER,         MVT::Other, Expand);
+  setOperationAction(ISD::MEMBARRIER,         MVT::Other, Custom);
 
   if (!Subtarget->hasV6Ops() && !Subtarget->isThumb2()) {
     setOperationAction(ISD::SIGN_EXTEND_INREG, MVT::i16, Expand);
@@ -500,6 +501,9 @@ const char *ARMTargetLowering::getTargetNodeName(unsigned Opcode) const {
 
   case ARMISD::DYN_ALLOC:     return "ARMISD::DYN_ALLOC";
 
+  case ARMISD::MEMBARRIER:    return "ARMISD::MEMBARRIER";
+  case ARMISD::SYNCBARRIER:   return "ARMISD::SYNCBARRIER";
+
   case ARMISD::VCEQ:          return "ARMISD::VCEQ";
   case ARMISD::VCGE:          return "ARMISD::VCGE";
   case ARMISD::VCGEU:         return "ARMISD::VCGEU";
@@ -1470,6 +1474,21 @@ ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) {
   }
 }
 
+static SDValue LowerMEMBARRIER(SDValue Op, SelectionDAG &DAG) {
+  DebugLoc dl = Op.getDebugLoc();
+  SDValue Op5 = Op.getOperand(5);
+  SDValue Res;
+  unsigned isDeviceBarrier = cast<ConstantSDNode>(Op5)->getZExtValue();
+  if (isDeviceBarrier) {
+    Res = DAG.getNode(ARMISD::SYNCBARRIER, dl, MVT::Other,
+                              Op.getOperand(0));
+  } else {
+    Res = DAG.getNode(ARMISD::MEMBARRIER, dl, MVT::Other,
+                              Op.getOperand(0));
+  }
+  return Res;
+}
+
 static SDValue LowerVASTART(SDValue Op, SelectionDAG &DAG,
                             unsigned VarArgsFrameIndex) {
   // vastart just stores the address of the VarArgsFrameIndex slot into the
@@ -2528,6 +2547,25 @@ static bool isVTRNMask(const SmallVectorImpl<int> &M, EVT VT,
   return true;
 }
 
+/// isVTRN_v_undef_Mask - Special case of isVTRNMask for canonical form of
+/// "vector_shuffle v, v", i.e., "vector_shuffle v, undef".
+/// Mask is e.g., <0, 0, 2, 2> instead of <0, 4, 2, 6>.
+static bool isVTRN_v_undef_Mask(const SmallVectorImpl<int> &M, EVT VT,
+                                unsigned &WhichResult) {
+  unsigned EltSz = VT.getVectorElementType().getSizeInBits();
+  if (EltSz == 64)
+    return false;
+
+  unsigned NumElts = VT.getVectorNumElements();
+  WhichResult = (M[0] == 0 ? 0 : 1);
+  for (unsigned i = 0; i < NumElts; i += 2) {
+    if ((unsigned) M[i] != i + WhichResult ||
+        (unsigned) M[i+1] != i + WhichResult)
+      return false;
+  }
+  return true;
+}
+
 static bool isVUZPMask(const SmallVectorImpl<int> &M, EVT VT,
                        unsigned &WhichResult) {
   unsigned EltSz = VT.getVectorElementType().getSizeInBits();
@@ -2548,6 +2586,33 @@ static bool isVUZPMask(const SmallVectorImpl<int> &M, EVT VT,
   return true;
 }
 
+/// isVUZP_v_undef_Mask - Special case of isVUZPMask for canonical form of
+/// "vector_shuffle v, v", i.e., "vector_shuffle v, undef".
+/// Mask is e.g., <0, 2, 0, 2> instead of <0, 2, 4, 6>,
+static bool isVUZP_v_undef_Mask(const SmallVectorImpl<int> &M, EVT VT,
+                                unsigned &WhichResult) {
+  unsigned EltSz = VT.getVectorElementType().getSizeInBits();
+  if (EltSz == 64)
+    return false;
+
+  unsigned Half = VT.getVectorNumElements() / 2;
+  WhichResult = (M[0] == 0 ? 0 : 1);
+  for (unsigned j = 0; j != 2; ++j) {
+    unsigned Idx = WhichResult;
+    for (unsigned i = 0; i != Half; ++i) {
+      if ((unsigned) M[i + j * Half] != Idx)
+        return false;
+      Idx += 2;
+    }
+  }
+
+  // VUZP.32 for 64-bit vectors is a pseudo-instruction alias for VTRN.32.
+  if (VT.is64BitVector() && EltSz == 32)
+    return false;
+
+  return true;
+}
+
 static bool isVZIPMask(const SmallVectorImpl<int> &M, EVT VT,
                        unsigned &WhichResult) {
   unsigned EltSz = VT.getVectorElementType().getSizeInBits();
@@ -2571,6 +2636,33 @@ static bool isVZIPMask(const SmallVectorImpl<int> &M, EVT VT,
   return true;
 }
 
+/// isVZIP_v_undef_Mask - Special case of isVZIPMask for canonical form of
+/// "vector_shuffle v, v", i.e., "vector_shuffle v, undef".
+/// Mask is e.g., <0, 0, 1, 1> instead of <0, 4, 1, 5>.
+static bool isVZIP_v_undef_Mask(const SmallVectorImpl<int> &M, EVT VT,
+                                unsigned &WhichResult) {
+  unsigned EltSz = VT.getVectorElementType().getSizeInBits();
+  if (EltSz == 64)
+    return false;
+
+  unsigned NumElts = VT.getVectorNumElements();
+  WhichResult = (M[0] == 0 ? 0 : 1);
+  unsigned Idx = WhichResult * NumElts / 2;
+  for (unsigned i = 0; i != NumElts; i += 2) {
+    if ((unsigned) M[i] != Idx ||
+        (unsigned) M[i+1] != Idx)
+      return false;
+    Idx += 1;
+  }
+
+  // VZIP.32 for 64-bit vectors is a pseudo-instruction alias for VTRN.32.
+  if (VT.is64BitVector() && EltSz == 32)
+    return false;
+
+  return true;
+}
+
+
 static SDValue BuildSplat(SDValue Val, EVT VT, SelectionDAG &DAG, DebugLoc dl) {
   // Canonicalize all-zeros and all-ones vectors.
   ConstantSDNode *ConstVal = cast<ConstantSDNode>(Val.getNode());
@@ -2683,7 +2775,10 @@ ARMTargetLowering::isShuffleMaskLegal(const SmallVectorImpl<int> &M,
           isVEXTMask(M, VT, ReverseVEXT, Imm) ||
           isVTRNMask(M, VT, WhichResult) ||
           isVUZPMask(M, VT, WhichResult) ||
-          isVZIPMask(M, VT, WhichResult));
+          isVZIPMask(M, VT, WhichResult) ||
+          isVTRN_v_undef_Mask(M, VT, WhichResult) ||
+          isVUZP_v_undef_Mask(M, VT, WhichResult) ||
+          isVZIP_v_undef_Mask(M, VT, WhichResult));
 }
 
 /// GeneratePerfectShuffle - Given an entry in the perfect-shuffle table, emit
@@ -2815,6 +2910,16 @@ static SDValue LowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG) {
     return DAG.getNode(ARMISD::VZIP, dl, DAG.getVTList(VT, VT),
                        V1, V2).getValue(WhichResult);
 
+  if (isVTRN_v_undef_Mask(ShuffleMask, VT, WhichResult))
+    return DAG.getNode(ARMISD::VTRN, dl, DAG.getVTList(VT, VT),
+                       V1, V1).getValue(WhichResult);
+  if (isVUZP_v_undef_Mask(ShuffleMask, VT, WhichResult))
+    return DAG.getNode(ARMISD::VUZP, dl, DAG.getVTList(VT, VT),
+                       V1, V1).getValue(WhichResult);
+  if (isVZIP_v_undef_Mask(ShuffleMask, VT, WhichResult))
+    return DAG.getNode(ARMISD::VZIP, dl, DAG.getVTList(VT, VT),
+                       V1, V1).getValue(WhichResult);
+
   // If the shuffle is not directly supported and it has 4 elements, use
   // the PerfectShuffle-generated table to synthesize it from other shuffles.
   if (VT.getVectorNumElements() == 4 &&
@@ -2886,6 +2991,7 @@ SDValue ARMTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
   case ISD::BR_JT:         return LowerBR_JT(Op, DAG);
   case ISD::DYNAMIC_STACKALLOC: return LowerDYNAMIC_STACKALLOC(Op, DAG);
   case ISD::VASTART:       return LowerVASTART(Op, DAG, VarArgsFrameIndex);
+  case ISD::MEMBARRIER:    return LowerMEMBARRIER(Op, DAG);
   case ISD::SINT_TO_FP:
   case ISD::UINT_TO_FP:    return LowerINT_TO_FP(Op, DAG);
   case ISD::FP_TO_SINT:
@@ -2938,6 +3044,88 @@ void ARMTargetLowering::ReplaceNodeResults(SDNode *N,
 //===----------------------------------------------------------------------===//
 
 MachineBasicBlock *
+ARMTargetLowering::EmitAtomicCmpSwap(MachineInstr *MI,
+                                     MachineBasicBlock *BB,
+                                     unsigned Size) const {
+  unsigned dest    = MI->getOperand(0).getReg();
+  unsigned ptr     = MI->getOperand(1).getReg();
+  unsigned oldval  = MI->getOperand(2).getReg();
+  unsigned newval  = MI->getOperand(3).getReg();
+  unsigned scratch = BB->getParent()->getRegInfo()
+    .createVirtualRegister(ARM::GPRRegisterClass);
+  const TargetInstrInfo *TII = getTargetMachine().getInstrInfo();
+  DebugLoc dl = MI->getDebugLoc();
+
+  unsigned ldrOpc, strOpc;
+  switch (Size) {
+  default: llvm_unreachable("unsupported size for AtomicCmpSwap!");
+  case 1: ldrOpc = ARM::LDREXB; strOpc = ARM::STREXB; break;
+  case 2: ldrOpc = ARM::LDREXH; strOpc = ARM::STREXH; break;
+  case 4: ldrOpc = ARM::LDREX;  strOpc = ARM::STREX;  break;
+  }
+
+  MachineFunction *MF = BB->getParent();
+  const BasicBlock *LLVM_BB = BB->getBasicBlock();
+  MachineFunction::iterator It = BB;
+  ++It; // insert the new blocks after the current block
+
+  MachineBasicBlock *loop1MBB = MF->CreateMachineBasicBlock(LLVM_BB);
+  MachineBasicBlock *loop2MBB = MF->CreateMachineBasicBlock(LLVM_BB);
+  MachineBasicBlock *exitMBB = MF->CreateMachineBasicBlock(LLVM_BB);
+  MF->insert(It, loop1MBB);
+  MF->insert(It, loop2MBB);
+  MF->insert(It, exitMBB);
+  exitMBB->transferSuccessors(BB);
+
+  //  thisMBB:
+  //   ...
+  //   fallthrough --> loop1MBB
+  BB->addSuccessor(loop1MBB);
+
+  // loop1MBB:
+  //   ldrex dest, [ptr]
+  //   cmp dest, oldval
+  //   bne exitMBB
+  BB = loop1MBB;
+  AddDefaultPred(BuildMI(BB, dl, TII->get(ldrOpc), dest).addReg(ptr));
+  AddDefaultPred(BuildMI(BB, dl, TII->get(ARM::CMPrr))
+                 .addReg(dest).addReg(oldval));
+  BuildMI(BB, dl, TII->get(ARM::Bcc)).addMBB(exitMBB).addImm(ARMCC::NE)
+    .addReg(ARM::CPSR);
+  BB->addSuccessor(loop2MBB);
+  BB->addSuccessor(exitMBB);
+
+  // loop2MBB:
+  //   strex scratch, newval, [ptr]
+  //   cmp scratch, #0
+  //   bne loop1MBB
+  BB = loop2MBB;
+  AddDefaultPred(BuildMI(BB, dl, TII->get(strOpc), scratch).addReg(newval)
+                 .addReg(ptr));
+  AddDefaultPred(BuildMI(BB, dl, TII->get(ARM::CMPri))
+                 .addReg(scratch).addImm(0));
+  BuildMI(BB, dl, TII->get(ARM::Bcc)).addMBB(loop1MBB).addImm(ARMCC::NE)
+    .addReg(ARM::CPSR);
+  BB->addSuccessor(loop1MBB);
+  BB->addSuccessor(exitMBB);
+
+  //  exitMBB:
+  //   ...
+  BB = exitMBB;
+  return BB;
+}
+
+MachineBasicBlock *
+ARMTargetLowering::EmitAtomicBinary(MachineInstr *MI, MachineBasicBlock *BB,
+                                    unsigned Size, unsigned BinOpcode) const {
+  std::string msg;
+  raw_string_ostream Msg(msg);
+  Msg << "Cannot yet emit: ";
+  MI->print(Msg);
+  llvm_report_error(Msg.str());
+}
+
+MachineBasicBlock *
 ARMTargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
                                                MachineBasicBlock *BB,
                    DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM) const {
@@ -2945,7 +3133,41 @@ ARMTargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
   DebugLoc dl = MI->getDebugLoc();
   switch (MI->getOpcode()) {
   default:
+    MI->dump();
     llvm_unreachable("Unexpected instr type to insert");
+
+  case ARM::ATOMIC_LOAD_ADD_I8:  return EmitAtomicBinary(MI, BB, 1, ARM::ADDrr);
+  case ARM::ATOMIC_LOAD_ADD_I16: return EmitAtomicBinary(MI, BB, 2, ARM::ADDrr);
+  case ARM::ATOMIC_LOAD_ADD_I32: return EmitAtomicBinary(MI, BB, 4, ARM::ADDrr);
+
+  case ARM::ATOMIC_LOAD_AND_I8:  return EmitAtomicBinary(MI, BB, 1, ARM::ANDrr);
+  case ARM::ATOMIC_LOAD_AND_I16: return EmitAtomicBinary(MI, BB, 2, ARM::ANDrr);
+  case ARM::ATOMIC_LOAD_AND_I32: return EmitAtomicBinary(MI, BB, 4, ARM::ANDrr);
+
+  case ARM::ATOMIC_LOAD_OR_I8:   return EmitAtomicBinary(MI, BB, 1, ARM::ORRrr);
+  case ARM::ATOMIC_LOAD_OR_I16:  return EmitAtomicBinary(MI, BB, 2, ARM::ORRrr);
+  case ARM::ATOMIC_LOAD_OR_I32:  return EmitAtomicBinary(MI, BB, 4, ARM::ORRrr);
+
+  case ARM::ATOMIC_LOAD_XOR_I8:  return EmitAtomicBinary(MI, BB, 1, ARM::EORrr);
+  case ARM::ATOMIC_LOAD_XOR_I16: return EmitAtomicBinary(MI, BB, 2, ARM::EORrr);
+  case ARM::ATOMIC_LOAD_XOR_I32: return EmitAtomicBinary(MI, BB, 4, ARM::EORrr);
+
+  case ARM::ATOMIC_LOAD_NAND_I8: return EmitAtomicBinary(MI, BB, 1, ARM::BICrr);
+  case ARM::ATOMIC_LOAD_NAND_I16:return EmitAtomicBinary(MI, BB, 2, ARM::BICrr);
+  case ARM::ATOMIC_LOAD_NAND_I32:return EmitAtomicBinary(MI, BB, 4, ARM::BICrr);
+
+  case ARM::ATOMIC_LOAD_SUB_I8:  return EmitAtomicBinary(MI, BB, 1, ARM::SUBrr);
+  case ARM::ATOMIC_LOAD_SUB_I16: return EmitAtomicBinary(MI, BB, 2, ARM::SUBrr);
+  case ARM::ATOMIC_LOAD_SUB_I32: return EmitAtomicBinary(MI, BB, 4, ARM::SUBrr);
+
+  case ARM::ATOMIC_SWAP_I8:      return EmitAtomicBinary(MI, BB, 1, 0);
+  case ARM::ATOMIC_SWAP_I16:     return EmitAtomicBinary(MI, BB, 2, 0);
+  case ARM::ATOMIC_SWAP_I32:     return EmitAtomicBinary(MI, BB, 4, 0);
+
+  case ARM::ATOMIC_CMP_SWAP_I8:  return EmitAtomicCmpSwap(MI, BB, 1);
+  case ARM::ATOMIC_CMP_SWAP_I16: return EmitAtomicCmpSwap(MI, BB, 2);
+  case ARM::ATOMIC_CMP_SWAP_I32: return EmitAtomicCmpSwap(MI, BB, 4);
+
   case ARM::tMOVCCr_pseudo: {
     // To "insert" a SELECT_CC instruction, we actually have to insert the
     // diamond control-flow pattern.  The incoming instruction knows the
@@ -3935,6 +4157,8 @@ ARMTargetLowering::getRegForInlineAsmConstraint(const std::string &Constraint,
         return std::make_pair(0U, ARM::SPRRegisterClass);
       if (VT == MVT::f64)
         return std::make_pair(0U, ARM::DPRRegisterClass);
+      if (VT.getSizeInBits() == 128)
+        return std::make_pair(0U, ARM::QPRRegisterClass);
       break;
     }
   }
@@ -3973,6 +4197,9 @@ getRegClassForInlineAsmConstraint(const std::string &Constraint,
                                    ARM::D4, ARM::D5, ARM::D6, ARM::D7,
                                    ARM::D8, ARM::D9, ARM::D10,ARM::D11,
                                    ARM::D12,ARM::D13,ARM::D14,ARM::D15, 0);
+    if (VT.getSizeInBits() == 128)
+      return make_vector<unsigned>(ARM::Q0, ARM::Q1, ARM::Q2, ARM::Q3,
+                                   ARM::Q4, ARM::Q5, ARM::Q6, ARM::Q7, 0);
       break;
   }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h
index 4f31f8a..e1b3348 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h
@@ -72,6 +72,9 @@ namespace llvm {
 
       DYN_ALLOC,    // Dynamic allocation on the stack.
 
+      MEMBARRIER,   // Memory barrier
+      SYNCBARRIER,  // Memory sync barrier
+
       VCEQ,         // Vector compare equal.
       VCGE,         // Vector compare greater than or equal.
       VCGEU,        // Vector compare unsigned greater than or equal.
@@ -328,6 +331,15 @@ namespace llvm {
 
     SDValue getARMCmp(SDValue LHS, SDValue RHS, ISD::CondCode CC,
                       SDValue &ARMCC, SelectionDAG &DAG, DebugLoc dl);
+
+    MachineBasicBlock *EmitAtomicCmpSwap(MachineInstr *MI,
+                                         MachineBasicBlock *BB,
+                                         unsigned Size) const;
+    MachineBasicBlock *EmitAtomicBinary(MachineInstr *MI,
+                                        MachineBasicBlock *BB,
+                                        unsigned Size,
+                                        unsigned BinOpcode) const;
+
   };
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
index e76e93c..9ce93d1 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
@@ -33,6 +33,8 @@ def LdMiscFrm     : Format<8>;
 def StMiscFrm     : Format<9>;
 def LdStMulFrm    : Format<10>;
 
+def LdStExFrm     : Format<28>;
+
 def ArithMiscFrm  : Format<11>;
 def ExtFrm        : Format<12>;
 
@@ -264,6 +266,28 @@ class JTI<dag oops, dag iops, InstrItinClass itin,
   : XI<oops, iops, AddrModeNone, SizeSpecial, IndexModeNone, BrMiscFrm, itin,
        asm, "", pattern>;
 
+
+// Atomic load/store instructions
+
+class AIldrex<bits<2> opcod, dag oops, dag iops, InstrItinClass itin,
+              string opc, string asm, list<dag> pattern>
+  : I<oops, iops, AddrModeNone, Size4Bytes, IndexModeNone, LdStExFrm, itin,
+      opc, asm, "", pattern> {
+  let Inst{27-23} = 0b00011;
+  let Inst{22-21} = opcod;
+  let Inst{20} = 1;
+  let Inst{11-0}  = 0b111110011111;
+}
+class AIstrex<bits<2> opcod, dag oops, dag iops, InstrItinClass itin,
+              string opc, string asm, list<dag> pattern>
+  : I<oops, iops, AddrModeNone, Size4Bytes, IndexModeNone, LdStExFrm, itin,
+      opc, asm, "", pattern> {
+  let Inst{27-23} = 0b00011;
+  let Inst{22-21} = opcod;
+  let Inst{20} = 0;
+  let Inst{11-4}  = 0b11111001;
+}
+
 // addrmode1 instructions
 class AI1<bits<4> opcod, dag oops, dag iops, Format f, InstrItinClass itin,
           string opc, string asm, list<dag> pattern>
@@ -967,6 +991,17 @@ class Thumb2XI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
   list<Predicate> Predicates = [IsThumb2];
 }
 
+class ThumbXI<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
+               InstrItinClass itin,
+               string asm, string cstr, list<dag> pattern>
+  : InstARM<am, sz, IndexModeNone, ThumbFrm, GenericDomain, cstr, itin> {
+  let OutOperandList = oops;
+  let InOperandList = iops;
+  let AsmString   = asm;
+  let Pattern = pattern;
+  list<Predicate> Predicates = [IsThumb1Only];
+}
+
 class T2I<dag oops, dag iops, InstrItinClass itin,
           string opc, string asm, list<dag> pattern>
   : Thumb2I<oops, iops, AddrModeNone, Size4Bytes, itin, opc, asm, "", pattern>;
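The AIldrex class above fixes Inst{27-23}, Inst{22-21}, Inst{20}, and Inst{11-0}; the condition and register fields come from the surrounding instruction machinery. A quick check that those fixed fields produce the architectural LDREX-family word (`encodeLdrex` is a hypothetical helper, not part of LLVM):

```cpp
#include <cassert>
#include <cstdint>

// Pack the fields AIldrex pins down together with a condition code and
// register operands into a 32-bit ARM instruction word.
uint32_t encodeLdrex(uint32_t cond, uint32_t opcod, uint32_t rn, uint32_t rd) {
  uint32_t inst = 0;
  inst |= cond  << 28;  // predicate, 0xE = AL
  inst |= 0x3u  << 23;  // Inst{27-23} = 0b00011
  inst |= opcod << 21;  // Inst{22-21}: 00 word, 10 byte, 11 halfword
  inst |= 1u    << 20;  // Inst{20} = 1 marks the exclusive-load form
  inst |= rn    << 16;  // base register
  inst |= rd    << 12;  // destination register
  inst |= 0xF9Fu;       // Inst{11-0} = 0b111110011111
  return inst;
}
```

AIstrex differs only in clearing Inst{20} and fixing the shorter Inst{11-4} pattern, leaving Inst{3-0} for the source register.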
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.cpp
index 87bb12b..85f6b40 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.cpp
@@ -60,25 +60,6 @@ unsigned ARMInstrInfo::getUnindexedOpcode(unsigned Opc) const {
   return 0;
 }
 
-bool ARMInstrInfo::BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
-  if (MBB.empty()) return false;
-
-  switch (MBB.back().getOpcode()) {
-  case ARM::BX_RET:   // Return.
-  case ARM::LDM_RET:
-  case ARM::B:
-  case ARM::BRIND:
-  case ARM::BR_JTr:   // Jumptable branch.
-  case ARM::BR_JTm:   // Jumptable branch through mem.
-  case ARM::BR_JTadd: // Jumptable branch add to pc.
-    return true;
-  default:
-    break;
-  }
-
-  return false;
-}
-
 void ARMInstrInfo::
 reMaterialize(MachineBasicBlock &MBB, MachineBasicBlock::iterator I,
               unsigned DestReg, unsigned SubIdx, const MachineInstr *Orig,
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.h
index 4319577..d4199d1 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.h
@@ -32,9 +32,6 @@ public:
   // if there is not such an opcode.
   unsigned getUnindexedOpcode(unsigned Opc) const;
 
-  // Return true if the block does not fall through.
-  bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const;
-
   void reMaterialize(MachineBasicBlock &MBB, MachineBasicBlock::iterator MI,
                      unsigned DestReg, unsigned SubIdx,
                      const MachineInstr *Orig,
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
index 0a8ecc0..a0798a6 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
@@ -46,6 +46,9 @@ def SDT_ARMPICAdd  : SDTypeProfile<1, 2, [SDTCisSameAs<0, 1>,
 def SDT_ARMThreadPointer : SDTypeProfile<1, 0, [SDTCisPtrTy<0>]>;
 def SDT_ARMEH_SJLJ_Setjmp : SDTypeProfile<1, 1, [SDTCisInt<0>, SDTCisPtrTy<1>]>;
 
+def SDT_ARMMEMBARRIER  : SDTypeProfile<0, 0, []>;
+def SDT_ARMSYNCBARRIER : SDTypeProfile<0, 0, []>;
+
 // Node definitions.
 def ARMWrapper       : SDNode<"ARMISD::Wrapper",     SDTIntUnaryOp>;
 def ARMWrapperJT     : SDNode<"ARMISD::WrapperJT",   SDTIntBinOp>;
@@ -93,6 +96,11 @@ def ARMrrx           : SDNode<"ARMISD::RRX"     , SDTIntUnaryOp, [SDNPInFlag ]>;
 def ARMthread_pointer: SDNode<"ARMISD::THREAD_POINTER", SDT_ARMThreadPointer>;
 def ARMeh_sjlj_setjmp: SDNode<"ARMISD::EH_SJLJ_SETJMP", SDT_ARMEH_SJLJ_Setjmp>;
 
+def ARMMemBarrier    : SDNode<"ARMISD::MEMBARRIER", SDT_ARMMEMBARRIER,
+                              [SDNPHasChain]>;
+def ARMSyncBarrier   : SDNode<"ARMISD::SYNCBARRIER", SDT_ARMMEMBARRIER,
+                              [SDNPHasChain]>;
+
 //===----------------------------------------------------------------------===//
 // ARM Instruction Predicate Definitions.
 //
@@ -600,19 +608,19 @@ def PICLDR  : AXI2ldw<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
                   [(set GPR:$dst, (load addrmodepc:$addr))]>;
 
 def PICLDRH : AXI3ldh<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
-                Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}h\t$dst, $addr",
+                Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldrh${p}\t$dst, $addr",
                   [(set GPR:$dst, (zextloadi16 addrmodepc:$addr))]>;
 
 def PICLDRB : AXI2ldb<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
-                Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}b\t$dst, $addr",
+                Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldrb${p}\t$dst, $addr",
                   [(set GPR:$dst, (zextloadi8 addrmodepc:$addr))]>;
 
 def PICLDRSH : AXI3ldsh<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
-               Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}sh\t$dst, $addr",
+               Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldrsh${p}\t$dst, $addr",
                   [(set GPR:$dst, (sextloadi16 addrmodepc:$addr))]>;
 
 def PICLDRSB : AXI3ldsb<(outs GPR:$dst), (ins addrmodepc:$addr, pred:$p),
-               Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldr${p}sb\t$dst, $addr",
+               Pseudo, IIC_iLoadr, "\n${addr:label}:\n\tldrsb${p}\t$dst, $addr",
                   [(set GPR:$dst, (sextloadi8 addrmodepc:$addr))]>;
 }
 let AddedComplexity = 10 in {
@@ -1561,6 +1569,163 @@ def MOVCCi : AI1<0b1101, (outs GPR:$dst),
   let Inst{25} = 1;
 }
 
+//===----------------------------------------------------------------------===//
+// Atomic operations intrinsics
+//
+
+// memory barriers protect the atomic sequences
+let isPredicable = 0, hasSideEffects = 1 in {
+def Int_MemBarrierV7 : AI<(outs), (ins),
+                        Pseudo, NoItinerary,
+                        "dmb", "",
+                        [(ARMMemBarrier)]>,
+                        Requires<[HasV7]> {
+  let Inst{31-4} = 0xf57ff05;
+  // FIXME: add support for options other than a full system DMB
+  let Inst{3-0} = 0b1111;
+}
+
+def Int_SyncBarrierV7 : AI<(outs), (ins),
+                        Pseudo, NoItinerary,
+                        "dsb", "",
+                        [(ARMSyncBarrier)]>,
+                        Requires<[HasV7]> {
+  let Inst{31-4} = 0xf57ff04;
+  // FIXME: add support for options other than a full system DSB
+  let Inst{3-0} = 0b1111;
+}
+}
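The two barrier definitions above fix every encoding bit: Inst{31-4} carries the fixed pattern and Inst{3-0} the option field, hard-wired to 0b1111 (full-system scope). Combining them, as a quick sanity check, reproduces the ARMv7 DMB/DSB words:

```cpp
#include <cassert>
#include <cstdint>

// Join the Inst{31-4} pattern from the TableGen definition with the
// full-system option field Inst{3-0} = 0b1111.
constexpr uint32_t barrierWord(uint32_t inst31to4) {
  return (inst31to4 << 4) | 0xFu;
}
```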
+
+let usesCustomInserter = 1 in {
+  let Uses = [CPSR] in {
+    def ATOMIC_LOAD_ADD_I8 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_ADD_I8 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_add_8 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_SUB_I8 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_SUB_I8 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_sub_8 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_AND_I8 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_AND_I8 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_and_8 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_OR_I8 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_OR_I8 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_or_8 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_XOR_I8 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_XOR_I8 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_xor_8 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_NAND_I8 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_NAND_I8 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_nand_8 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_ADD_I16 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_ADD_I16 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_add_16 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_SUB_I16 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_SUB_I16 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_sub_16 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_AND_I16 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_AND_I16 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_and_16 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_OR_I16 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_OR_I16 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_or_16 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_XOR_I16 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_XOR_I16 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_xor_16 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_NAND_I16 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_NAND_I16 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_nand_16 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_ADD_I32 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_ADD_I32 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_add_32 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_SUB_I32 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_SUB_I32 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_sub_32 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_AND_I32 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_AND_I32 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_and_32 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_OR_I32 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_OR_I32 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_or_32 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_XOR_I32 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_XOR_I32 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_xor_32 GPR:$ptr, GPR:$incr))]>;
+    def ATOMIC_LOAD_NAND_I32 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$incr), NoItinerary,
+      "${:comment} ATOMIC_LOAD_NAND_I32 PSEUDO!",
+      [(set GPR:$dst, (atomic_load_nand_32 GPR:$ptr, GPR:$incr))]>;
+
+    def ATOMIC_SWAP_I8 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$new), NoItinerary,
+      "${:comment} ATOMIC_SWAP_I8 PSEUDO!",
+      [(set GPR:$dst, (atomic_swap_8 GPR:$ptr, GPR:$new))]>;
+    def ATOMIC_SWAP_I16 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$new), NoItinerary,
+      "${:comment} ATOMIC_SWAP_I16 PSEUDO!",
+      [(set GPR:$dst, (atomic_swap_16 GPR:$ptr, GPR:$new))]>;
+    def ATOMIC_SWAP_I32 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$new), NoItinerary,
+      "${:comment} ATOMIC_SWAP_I32 PSEUDO!",
+      [(set GPR:$dst, (atomic_swap_32 GPR:$ptr, GPR:$new))]>;
+
+
+    def ATOMIC_CMP_SWAP_I8 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$old, GPR:$new), NoItinerary,
+      "${:comment} ATOMIC_CMP_SWAP_I8 PSEUDO!",
+      [(set GPR:$dst, (atomic_cmp_swap_8 GPR:$ptr, GPR:$old, GPR:$new))]>;
+    def ATOMIC_CMP_SWAP_I16 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$old, GPR:$new), NoItinerary,
+      "${:comment} ATOMIC_CMP_SWAP_I16 PSEUDO!",
+      [(set GPR:$dst, (atomic_cmp_swap_16 GPR:$ptr, GPR:$old, GPR:$new))]>;
+    def ATOMIC_CMP_SWAP_I32 : PseudoInst<
+      (outs GPR:$dst), (ins GPR:$ptr, GPR:$old, GPR:$new), NoItinerary,
+      "${:comment} ATOMIC_CMP_SWAP_I32 PSEUDO!",
+      [(set GPR:$dst, (atomic_cmp_swap_32 GPR:$ptr, GPR:$old, GPR:$new))]>;
+}
+}
+
+let mayLoad = 1 in {
+def LDREXB : AIldrex<0b10, (outs GPR:$dest), (ins GPR:$ptr), NoItinerary,
+                    "ldrexb", "\t$dest, [$ptr]",
+                    []>;
+def LDREXH : AIldrex<0b11, (outs GPR:$dest), (ins GPR:$ptr), NoItinerary,
+                    "ldrexh", "\t$dest, [$ptr]",
+                    []>;
+def LDREX  : AIldrex<0b00, (outs GPR:$dest), (ins GPR:$ptr), NoItinerary,
+                    "ldrex", "\t$dest, [$ptr]",
+                    []>;
+}
+
+let mayStore = 1 in {
+def STREXB : AIstrex<0b10, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
+                     NoItinerary,
+                    "strexb", "\t$success, $src, [$ptr]",
+                    []>;
+def STREXH : AIstrex<0b11, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
+                    NoItinerary,
+                    "strexh", "\t$success, $src, [$ptr]",
+                    []>;
+def STREX  : AIstrex<0b00, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
+                     NoItinerary,
+                    "strex", "\t$success, $src, [$ptr]",
+                    []>;
+}
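The LDREX/STREX definitions above are the building blocks EmitAtomicCmpSwap assembles: ldrex observes the current value, a compare against `$old` decides whether to attempt the strex, and a failed strex branches back. As a sketch under those assumptions (`atomicCmpSwap` is a hypothetical name, not an LLVM API), the same logic in C++11:

```cpp
#include <atomic>
#include <cassert>

// compare_exchange_weak captures both the comparison against $old and a
// possible spurious strex failure, which the expanded code retries.
unsigned atomicCmpSwap(std::atomic<unsigned> &mem, unsigned oldval,
                       unsigned newval) {
  unsigned seen = oldval;
  while (!mem.compare_exchange_weak(seen, newval)) {
    if (seen != oldval)
      break;        // observed value differs from $old: exit, no store
    seen = oldval;  // spurious failure: retry the reservation
  }
  return seen;      // $dst: the value the ldrex loaded
}
```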
 
 //===----------------------------------------------------------------------===//
 // TLS Instructions
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td
index a4fe752..61b7705 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td
@@ -152,7 +152,7 @@ def VLDRQ : NI4<(outs QPR:$dst), (ins addrmode4:$addr),
   let Inst{24}    = 0; // P bit
   let Inst{23}    = 1; // U bit
   let Inst{20}    = 1;
-  let Inst{11-9}  = 0b101;
+  let Inst{11-8}  = 0b1011;
 }
 
 // Use vstmia to store a Q register as a D register pair.
@@ -164,7 +164,7 @@ def VSTRQ : NI4<(outs), (ins QPR:$src, addrmode4:$addr),
   let Inst{24}    = 0; // P bit
   let Inst{23}    = 1; // U bit
   let Inst{20}    = 0;
-  let Inst{11-9}  = 0b101;
+  let Inst{11-8}  = 0b1011;
 }
 
 //   VLD1     : Vector Load (multiple single elements)
@@ -2582,20 +2582,20 @@ def VMOVv16i8 : N1ModImm<1, 0b000, 0b1110, 0, 1, 0, 1, (outs QPR:$dst),
                          "vmov", "i8", "$dst, $SIMM", "",
                          [(set QPR:$dst, (v16i8 vmovImm8:$SIMM))]>;
 
-def VMOVv4i16 : N1ModImm<1, 0b000, 0b1000, 0, 0, 0, 1, (outs DPR:$dst),
+def VMOVv4i16 : N1ModImm<1, 0b000, {1,0,?,?}, 0, 0, {?}, 1, (outs DPR:$dst),
                          (ins h16imm:$SIMM), IIC_VMOVImm,
                          "vmov", "i16", "$dst, $SIMM", "",
                          [(set DPR:$dst, (v4i16 vmovImm16:$SIMM))]>;
-def VMOVv8i16 : N1ModImm<1, 0b000, 0b1000, 0, 1, 0, 1, (outs QPR:$dst),
+def VMOVv8i16 : N1ModImm<1, 0b000, {1,0,?,?}, 0, 1, {?}, 1, (outs QPR:$dst),
                          (ins h16imm:$SIMM), IIC_VMOVImm,
                          "vmov", "i16", "$dst, $SIMM", "",
                          [(set QPR:$dst, (v8i16 vmovImm16:$SIMM))]>;
 
-def VMOVv2i32 : N1ModImm<1, 0b000, 0b0000, 0, 0, 0, 1, (outs DPR:$dst),
+def VMOVv2i32 : N1ModImm<1, 0b000, {?,?,?,?}, 0, 0, {?}, 1, (outs DPR:$dst),
                          (ins h32imm:$SIMM), IIC_VMOVImm,
                          "vmov", "i32", "$dst, $SIMM", "",
                          [(set DPR:$dst, (v2i32 vmovImm32:$SIMM))]>;
-def VMOVv4i32 : N1ModImm<1, 0b000, 0b0000, 0, 1, 0, 1, (outs QPR:$dst),
+def VMOVv4i32 : N1ModImm<1, 0b000, {?,?,?,?}, 0, 1, {?}, 1, (outs QPR:$dst),
                          (ins h32imm:$SIMM), IIC_VMOVImm,
                          "vmov", "i32", "$dst, $SIMM", "",
                          [(set QPR:$dst, (v4i32 vmovImm32:$SIMM))]>;
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
index b5956a3..9306bdb 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
@@ -669,6 +669,35 @@ let isCall = 1,
                [(set R0, ARMthread_pointer)]>;
 }
 
+// SJLJ Exception handling intrinsics
+//   eh_sjlj_setjmp() is an instruction sequence to store the return
+//   address and save #0 in R0 for the non-longjmp case.
+//   Since by its nature we may be coming from some other function to get
+//   here, and we're using the stack frame for the containing function to
+//   save/restore registers, we can't keep anything live in regs across
+//   the eh_sjlj_setjmp(), else it will almost certainly have been tromped upon
+//   when we get here from a longjmp(). We force everything out of registers
+//   except for our own input by listing the relevant registers in Defs. By
+//   doing so, we also cause the prologue/epilogue code to actively preserve
+//   all of the callee-saved registers, which is exactly what we want.
+let Defs =
+  [ R0,  R1,  R2,  R3,  R4,  R5,  R6,  R7, R12 ] in {
+  def tInt_eh_sjlj_setjmp : ThumbXI<(outs), (ins GPR:$src),
+                              AddrModeNone, SizeSpecial, NoItinerary,
+                              "mov\tr12, r1\t@ begin eh.setjmp\n"
+                              "\tmov\tr1, sp\n"
+                              "\tstr\tr1, [$src, #8]\n"
+                              "\tadr\tr1, 0f\n"
+                              "\tadds\tr1, #1\n"
+                              "\tstr\tr1, [$src, #4]\n"
+                              "\tmov\tr1, r12\n"
+                              "\tmovs\tr0, #0\n"
+                              "\tb\t1f\n"
+                              ".align 2\n"
+                              "0:\tmovs\tr0, #1\t@ end eh.setjmp\n"
+                              "1:", "",
+                              [(set R0, (ARMeh_sjlj_setjmp GPR:$src))]>;
+}
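The comment block above relies on the ordinary setjmp contract, which tInt_eh_sjlj_setjmp open-codes: r0 is set to 0 on the direct path ("movs r0, #0") and to 1 at the resume label reached via longjmp ("movs r0, #1"). Plain C setjmp exhibits the same two-return behaviour:

```cpp
#include <csetjmp>
#include <cassert>

static std::jmp_buf buf;

// First entry: setjmp yields 0 and we longjmp back; second entry resumes
// at the saved address with setjmp yielding the longjmp value, 1.
int setjmpDemo() {
  if (setjmp(buf) == 0) {
    std::longjmp(buf, 1);
  }
  return 1;
}
```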
 //===----------------------------------------------------------------------===//
 // Non-Instruction Patterns
 //
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
index 304d0ef..22bd80e 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
@@ -449,7 +449,7 @@ bool ARMLoadStoreOpt::MergeBaseUpdateLSMultiple(MachineBasicBlock &MBB,
     }
 
     if (MBBI != MBB.end()) {
-      MachineBasicBlock::iterator NextMBBI = next(MBBI);
+      MachineBasicBlock::iterator NextMBBI = llvm::next(MBBI);
       if ((Mode == ARM_AM::ia || Mode == ARM_AM::ib) &&
           isMatchingIncrement(NextMBBI, Base, Bytes, 0, Pred, PredReg)) {
         MI->getOperand(1).setImm(ARM_AM::getAM4ModeImm(Mode, true));
@@ -494,7 +494,7 @@ bool ARMLoadStoreOpt::MergeBaseUpdateLSMultiple(MachineBasicBlock &MBB,
     }
 
     if (MBBI != MBB.end()) {
-      MachineBasicBlock::iterator NextMBBI = next(MBBI);
+      MachineBasicBlock::iterator NextMBBI = llvm::next(MBBI);
       if (Mode == ARM_AM::ia &&
           isMatchingIncrement(NextMBBI, Base, Bytes, 0, Pred, PredReg)) {
         MI->getOperand(1).setImm(ARM_AM::getAM5Opc(ARM_AM::ia, true, Offset));
@@ -604,7 +604,7 @@ bool ARMLoadStoreOpt::MergeBaseUpdateLoadStore(MachineBasicBlock &MBB,
   }
 
   if (!DoMerge && MBBI != MBB.end()) {
-    MachineBasicBlock::iterator NextMBBI = next(MBBI);
+    MachineBasicBlock::iterator NextMBBI = llvm::next(MBBI);
     if (!isAM5 &&
         isMatchingDecrement(NextMBBI, Base, Bytes, Limit, Pred, PredReg)) {
       DoMerge = true;
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp
index d6b072b..71f3883 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp
@@ -114,8 +114,6 @@ ARMSubtarget::ARMSubtarget(const std::string &TT, const std::string &FS,
     if (UseNEONFP.getPosition() == 0)
       UseNEONForSinglePrecisionFP = true;
   }
-  HasBranchTargetBuffer = (CPUString == "cortex-a8" ||
-                           CPUString == "cortex-a9");
 }
 
 /// GVIsIndirectSymbol - true if the GV will be accessed via an indirect symbol.
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.h b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.h
index b2467b0..3f06b7b 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.h
@@ -50,9 +50,6 @@ protected:
   /// determine if NEON should actually be used.
   bool UseNEONForSinglePrecisionFP;
 
-  /// HasBranchTargetBuffer - True if processor can predict indirect branches.
-  bool HasBranchTargetBuffer;
-
   /// IsThumb - True if we are in thumb mode, false if in ARM mode.
   bool IsThumb;
 
@@ -130,8 +127,6 @@ protected:
   bool isThumb2() const { return IsThumb && (ThumbMode == Thumb2); }
   bool hasThumb2() const { return ThumbMode >= Thumb2; }
 
-  bool hasBranchTargetBuffer() const { return HasBranchTargetBuffer; }
-
   bool isR9Reserved() const { return IsR9Reserved; }
 
   bool useMovt() const { return UseMovt && hasV6T2Ops(); }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp
index 2564ed9..1c6fca7 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp
@@ -95,7 +95,7 @@ bool ARMBaseTargetMachine::addPreRegAlloc(PassManagerBase &PM,
 
   // Calculate and set max stack object alignment early, so we can decide
   // whether we will need stack realignment (and thus FP).
-  PM.add(createARMMaxStackAlignmentCalculatorPass());
+  PM.add(createMaxStackAlignmentCalculatorPass());
 
   // FIXME: temporarily disabling load / store optimization pass for Thumb1.
   if (OptLevel != CodeGenOpt::None && !Subtarget.isThumb1Only())
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
index 692bb19..362bbf1 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
@@ -1045,6 +1045,7 @@ bool ARMAsmPrinter::PrintAsmOperand(const MachineInstr *MI, unsigned OpNum,
       printNoHashImmediate(MI, OpNum);
       return false;
     case 'P': // Print a VFP double precision register.
+    case 'q': // Print a NEON quad precision register.
       printOperand(MI, OpNum);
       return false;
     case 'Q':
diff --git a/libclamav/c++/llvm/lib/Target/ARM/NEONMoveFix.cpp b/libclamav/c++/llvm/lib/Target/ARM/NEONMoveFix.cpp
index 50abcf4..3c0414d 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/NEONMoveFix.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/NEONMoveFix.cpp
@@ -51,7 +51,7 @@ bool NEONMoveFixPass::InsertMoves(MachineBasicBlock &MBB) {
   MachineBasicBlock::iterator MII = MBB.begin(), E = MBB.end();
   MachineBasicBlock::iterator NextMII;
   for (; MII != E; MII = NextMII) {
-    NextMII = next(MII);
+    NextMII = llvm::next(MII);
     MachineInstr *MI = &*MII;
 
     if (MI->getOpcode() == ARM::VMOVD &&
diff --git a/libclamav/c++/llvm/lib/Target/ARM/NEONPreAllocPass.cpp b/libclamav/c++/llvm/lib/Target/ARM/NEONPreAllocPass.cpp
index 206677b..d9942c8 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/NEONPreAllocPass.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/NEONPreAllocPass.cpp
@@ -338,7 +338,7 @@ bool NEONPreAllocPass::PreAllocNEONRegisters(MachineBasicBlock &MBB) {
     if (!isNEONMultiRegOp(MI->getOpcode(), FirstOpnd, NumRegs, Offset, Stride))
       continue;
 
-    MachineBasicBlock::iterator NextI = next(MBBI);
+    MachineBasicBlock::iterator NextI = llvm::next(MBBI);
     for (unsigned R = 0; R < NumRegs; ++R) {
       MachineOperand &MO = MI->getOperand(FirstOpnd + R);
       assert(MO.isReg() && MO.getSubReg() == 0 && "unexpected operand");
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp
index 7602b6d..66d3b83 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.cpp
@@ -32,25 +32,6 @@ unsigned Thumb1InstrInfo::getUnindexedOpcode(unsigned Opc) const {
   return 0;
 }
 
-bool
-Thumb1InstrInfo::BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
-  if (MBB.empty()) return false;
-
-  switch (MBB.back().getOpcode()) {
-  case ARM::tBX_RET:
-  case ARM::tBX_RET_vararg:
-  case ARM::tPOP_RET:
-  case ARM::tB:
-  case ARM::tBRIND:
-  case ARM::tBR_JTr:
-    return true;
-  default:
-    break;
-  }
-
-  return false;
-}
-
 bool Thumb1InstrInfo::copyRegToReg(MachineBasicBlock &MBB,
                                    MachineBasicBlock::iterator I,
                                    unsigned DestReg, unsigned SrcReg,
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.h b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.h
index b28229d..516ddf1 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb1InstrInfo.h
@@ -31,9 +31,6 @@ public:
   // if there is not such an opcode.
   unsigned getUnindexedOpcode(unsigned Opc) const;
 
-  // Return true if the block does not fall through.
-  bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const;
-
   /// getRegisterInfo - TargetInstrInfo is a superset of MRegister info.  As
   /// such, whenever a client has an instance of instruction info, it should
   /// always be able to get register info as well (through this method).
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.cpp
index 37adf37..9f3816a 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb1RegisterInfo.cpp
@@ -528,7 +528,7 @@ Thumb1RegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
         MI.getOperand(i+1).ChangeToImmediate(Mask);
       }
       Offset = (Offset - Mask * Scale);
-      MachineBasicBlock::iterator NII = next(II);
+      MachineBasicBlock::iterator NII = llvm::next(II);
       emitThumbRegPlusImmediate(MBB, NII, DestReg, DestReg, Offset, TII,
                                 *this, dl);
     } else {
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp
index 16c1e6f..f4a8c27 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp
@@ -36,30 +36,6 @@ unsigned Thumb2InstrInfo::getUnindexedOpcode(unsigned Opc) const {
 }
 
 bool
-Thumb2InstrInfo::BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
-  if (MBB.empty()) return false;
-
-  switch (MBB.back().getOpcode()) {
-  case ARM::t2LDM_RET:
-  case ARM::t2B:        // Uncond branch.
-  case ARM::t2BR_JT:    // Jumptable branch.
-  case ARM::t2TBB:      // Table branch byte.
-  case ARM::t2TBH:      // Table branch halfword.
-  case ARM::tBR_JTr:    // Jumptable branch (16-bit version).
-  case ARM::tBX_RET:
-  case ARM::tBX_RET_vararg:
-  case ARM::tPOP_RET:
-  case ARM::tB:
-  case ARM::tBRIND:
-    return true;
-  default:
-    break;
-  }
-
-  return false;
-}
-
-bool
 Thumb2InstrInfo::copyRegToReg(MachineBasicBlock &MBB,
                               MachineBasicBlock::iterator I,
                               unsigned DestReg, unsigned SrcReg,
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.h b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.h
index 663a60b..a0f89a6 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.h
@@ -31,9 +31,6 @@ public:
   // if there is not such an opcode.
   unsigned getUnindexedOpcode(unsigned Opc) const;
 
-  // Return true if the block does not fall through.
-  bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const;
-
   bool copyRegToReg(MachineBasicBlock &MBB,
                     MachineBasicBlock::iterator I,
                     unsigned DestReg, unsigned SrcReg,
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp
index b2fd7b3..35359aa 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp
@@ -649,7 +649,7 @@ bool Thumb2SizeReduce::ReduceMBB(MachineBasicBlock &MBB) {
   MachineBasicBlock::iterator MII = MBB.begin(), E = MBB.end();
   MachineBasicBlock::iterator NextMII;
   for (; MII != E; MII = NextMII) {
-    NextMII = next(MII);
+    NextMII = llvm::next(MII);
 
     MachineInstr *MI = &*MII;
     LiveCPSR = UpdateCPSRUse(*MI, LiveCPSR);
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.cpp
index 0083598..af7d812 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.cpp
@@ -740,18 +740,6 @@ bool PPCInstrInfo::canFoldMemoryOperand(const MachineInstr *MI,
 }
 
 
-bool PPCInstrInfo::BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
-  if (MBB.empty()) return false;
-  
-  switch (MBB.back().getOpcode()) {
-  case PPC::BLR:   // Return.
-  case PPC::B:     // Uncond branch.
-  case PPC::BCTR:  // Indirect branch.
-    return true;
-  default: return false;
-  }
-}
-
 bool PPCInstrInfo::
 ReverseBranchCondition(SmallVectorImpl<MachineOperand> &Cond) const {
   assert(Cond.size() == 2 && "Invalid PPC branch opcode!");
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.h
index ab341bd..57facac 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.h
@@ -143,7 +143,6 @@ public:
   virtual bool canFoldMemoryOperand(const MachineInstr *MI,
                                     const SmallVectorImpl<unsigned> &Ops) const;
   
-  virtual bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const;
   virtual
   bool ReverseBranchCondition(SmallVectorImpl<MachineOperand> &Cond) const;
   
@@ -151,8 +150,6 @@ public:
   /// instruction may be.  This returns the maximum number of bytes.
   ///
   virtual unsigned GetInstSizeInBytes(const MachineInstr *MI) const;
-
-  virtual bool isProfitableToDuplicateIndirectBranch() const { return true; }
 };
 
 }
diff --git a/libclamav/c++/llvm/lib/Target/README.txt b/libclamav/c++/llvm/lib/Target/README.txt
index e7a55a0..c788360 100644
--- a/libclamav/c++/llvm/lib/Target/README.txt
+++ b/libclamav/c++/llvm/lib/Target/README.txt
@@ -2,6 +2,29 @@ Target Independent Opportunities:
 
 //===---------------------------------------------------------------------===//
 
+Dead argument elimination should be enhanced to handle cases when an argument is
+dead to an externally visible function.  Though the argument can't be removed
+from the externally visible function, the caller doesn't need to pass it in.
+For example in this testcase:
+
+  void foo(int X) __attribute__((noinline));
+  void foo(int X) { sideeffect(); }
+  void bar(int A) { foo(A+1); }
+
+We compile bar to:
+
+define void @bar(i32 %A) nounwind ssp {
+  %0 = add nsw i32 %A, 1                          ; <i32> [#uses=1]
+  tail call void @foo(i32 %0) nounwind noinline ssp
+  ret void
+}
+
+The add is dead; we could pass in 'i32 undef' instead.  This occurs for C++
+templates etc., which usually have linkonce_odr/weak_odr linkage, not internal
+linkage.
+
+//===---------------------------------------------------------------------===//
+
 With the recent changes to make the implicit def/use set explicit in
 machineinstrs, we should change the target descriptions for 'call' instructions
 so that the .td files don't list all the call-clobbered registers as implicit
@@ -1093,6 +1116,8 @@ later.
 
 //===---------------------------------------------------------------------===//
 
+[STORE SINKING]
+
 Store sinking: This code:
 
 void f (int n, int *cond, int *res) {
@@ -1148,6 +1173,8 @@ This is GCC PR38204.
 
 //===---------------------------------------------------------------------===//
 
+[STORE SINKING]
+
 GCC PR37810 is an interesting case where we should sink load/store reload
 into the if block and outside the loop, so we don't reload/store it on the
 non-call path.
@@ -1175,7 +1202,7 @@ we don't sink the store.  We need partially dead store sinking.
 
 //===---------------------------------------------------------------------===//
 
-[PHI TRANSLATE GEPs]
+[LOAD PRE CRIT EDGE SPLITTING]
 
 GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
 leading to excess stack traffic. This could be handled by GVN with some crazy
@@ -1192,64 +1219,59 @@ bb3:		; preds = %bb1, %bb2, %bb
 	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
 	%11 = load i32* %10, align 4
 
-%11 is fully redundant, an in BB2 it should have the value %8.
+%11 is partially redundant, and in BB2 it should have the value %8.
 
-GCC PR33344 is a similar case.
+GCC PR33344 and PR35287 are similar cases.
 
 
 //===---------------------------------------------------------------------===//
 
+[LOAD PRE]
+
 There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
-GCC testsuite.  There are many pre testcases as ssa-pre-*.c
+GCC testsuite; the ones we don't get yet (checked through loadpre25) are:
 
-//===---------------------------------------------------------------------===//
+[CRIT EDGE BREAKING]
+loadpre3.c predcom-4.c
 
-There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
-GCC testsuite.  For example, predcom-1.c is:
-
- for (i = 2; i < 1000; i++)
-    fib[i] = (fib[i-1] + fib[i - 2]) & 0xffff;
-
-which compiles into:
-
-bb1:		; preds = %bb1, %bb1.thread
-	%indvar = phi i32 [ 0, %bb1.thread ], [ %0, %bb1 ]	
-	%i.0.reg2mem.0 = add i32 %indvar, 2		
-	%0 = add i32 %indvar, 1		; <i32> [#uses=3]
-	%1 = getelementptr [1000 x i32]* @fib, i32 0, i32 %0		
-	%2 = load i32* %1, align 4		; <i32> [#uses=1]
-	%3 = getelementptr [1000 x i32]* @fib, i32 0, i32 %indvar	
-	%4 = load i32* %3, align 4		; <i32> [#uses=1]
-	%5 = add i32 %4, %2		; <i32> [#uses=1]
-	%6 = and i32 %5, 65535		; <i32> [#uses=1]
-	%7 = getelementptr [1000 x i32]* @fib, i32 0, i32 %i.0.reg2mem.0
-	store i32 %6, i32* %7, align 4
-	%exitcond = icmp eq i32 %0, 998		; <i1> [#uses=1]
-	br i1 %exitcond, label %return, label %bb1
+[PRE OF READONLY CALL]
+loadpre5.c
 
-This is basically:
-  LOAD fib[i+1]
-  LOAD fib[i]
-  STORE fib[i+2]
+[TURN SELECT INTO BRANCH]
+loadpre14.c loadpre15.c 
 
-instead of handling this as a loop or other xform, all we'd need to do is teach
-load PRE to phi translate the %0 add (i+1) into the predecessor as (i'+1+1) =
-(i'+2) (where i' is the previous iteration of i).  This would find the store
-which feeds it.
+actually a conditional increment: loadpre18.c loadpre19.c
 
-predcom-2.c is apparently the same as predcom-1.c
-predcom-3.c is very similar but needs loads feeding each other instead of
-store->load.
-predcom-4.c seems the same as the rest.
 
+//===---------------------------------------------------------------------===//
+
+[SCALAR PRE]
+There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
+GCC testsuite.
 
 //===---------------------------------------------------------------------===//
 
-Other simple load PRE cases:
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287 [LPRE crit edge splitting]
+There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
+GCC testsuite.  For example, we get the first example in predcom-1.c, but 
+miss the second one:
+
+unsigned fib[1000];
+unsigned avg[1000];
+
+__attribute__ ((noinline))
+void count_averages(int n) {
+  int i;
+  for (i = 1; i < n; i++)
+    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
+}
+
+which compiles into two loads instead of one in the loop.
+
+predcom-2.c is the same as predcom-1.c
+
+predcom-3.c is very similar but needs loads feeding each other instead of
+store->load.
 
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this, LPRE crit edge)
-  llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -simplifycfg -gvn | llvm-dis
 
 //===---------------------------------------------------------------------===//
 
@@ -1282,7 +1304,7 @@ Interesting missed case because of control flow flattening (should be 2 loads):
 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
 With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | 
              opt -mem2reg -gvn -instcombine | llvm-dis
-we miss it because we need 1) GEP PHI TRAN, 2) CRIT EDGE 3) MULTIPLE DIFFERENT
+we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
 VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS
 
 //===---------------------------------------------------------------------===//
@@ -1682,3 +1704,50 @@ need all but the bottom two bits from %A, and if we gave that mask to SDB it
 would delete the or instruction for us.
 
 //===---------------------------------------------------------------------===//
+
+FunctionAttrs is not marking this function as readnone (just readonly):
+$ clang t.c -emit-llvm -S -o - -O0 | opt -mem2reg -S -functionattrs
+
+int t(int a, int b, int c) {
+ int *p;
+ if (a)
+   p = &a;
+ else
+   p = &c;
+ return *p;
+}
+
+This is because we codegen this to:
+
+define i32 @t(i32 %a, i32 %b, i32 %c) nounwind readonly ssp {
+entry:
+  %a.addr = alloca i32                            ; <i32*> [#uses=3]
+  %c.addr = alloca i32                            ; <i32*> [#uses=2]
+...
+
+if.end:
+  %p.0 = phi i32* [ %a.addr, %if.then ], [ %c.addr, %if.else ]
+  %tmp2 = load i32* %p.0                          ; <i32> [#uses=1]
+  ret i32 %tmp2
+}
+
+And functionattrs doesn't realize that the p.0 load points to function-local
+memory.
+
+Also, functionattrs doesn't know about memcpy/memset.  This function should be
+marked readnone, since it only twiddles local memory, but functionattrs doesn't
+handle memset/memcpy/memmove aggressively:
+
+struct X { int *p; int *q; };
+int foo() {
+ int i = 0, j = 1;
+ struct X x, y;
+ int **p;
+ y.p = &i;
+ x.q = &j;
+ p = __builtin_memcpy (&x, &y, sizeof (int *));
+ return **p;
+}
+
+//===---------------------------------------------------------------------===//
+
diff --git a/libclamav/c++/llvm/lib/Target/TargetData.cpp b/libclamav/c++/llvm/lib/Target/TargetData.cpp
index fc71bc3..9434a19 100644
--- a/libclamav/c++/llvm/lib/Target/TargetData.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetData.cpp
@@ -117,14 +117,6 @@ TargetAlignElem::operator==(const TargetAlignElem &rhs) const {
           && TypeBitWidth == rhs.TypeBitWidth);
 }
 
-std::ostream &
-TargetAlignElem::dump(std::ostream &os) const {
-  return os << AlignType
-            << TypeBitWidth
-            << ":" << (int) (ABIAlign * 8)
-            << ":" << (int) (PrefAlign * 8);
-}
-
 const TargetAlignElem TargetData::InvalidAlignmentElem =
                 TargetAlignElem::get((AlignTypeEnum) -1, 0, 0, 0);
 
@@ -323,11 +315,10 @@ unsigned TargetData::getAlignmentInfo(AlignTypeEnum AlignType,
                  : Alignments[BestMatchIdx].PrefAlign;
 }
 
-typedef DenseMap<const StructType*, StructLayout*> LayoutInfoTy;
-
-namespace llvm {
+namespace {
 
 class StructLayoutMap : public AbstractTypeUser {
+  typedef DenseMap<const StructType*, StructLayout*> LayoutInfoTy;
   LayoutInfoTy LayoutInfo;
 
   /// refineAbstractType - The callback method invoked when an abstract type is
@@ -336,19 +327,11 @@ class StructLayoutMap : public AbstractTypeUser {
   ///
   virtual void refineAbstractType(const DerivedType *OldTy,
                                   const Type *) {
-    const StructType *STy = dyn_cast<const StructType>(OldTy);
-    if (!STy) {
-      OldTy->removeAbstractTypeUser(this);
-      return;
-    }
-
-    StructLayout *SL = LayoutInfo[STy];
-    if (SL) {
-      SL->~StructLayout();
-      free(SL);
-      LayoutInfo[STy] = NULL;
-    }
-
+    const StructType *STy = cast<const StructType>(OldTy);
+    LayoutInfoTy::iterator Iter = LayoutInfo.find(STy);
+    Iter->second->~StructLayout();
+    free(Iter->second);
+    LayoutInfo.erase(Iter);
     OldTy->removeAbstractTypeUser(this);
   }
 
@@ -358,70 +341,43 @@ class StructLayoutMap : public AbstractTypeUser {
   /// This method notifies ATU's when this occurs for a type.
   ///
   virtual void typeBecameConcrete(const DerivedType *AbsTy) {
-    const StructType *STy = dyn_cast<const StructType>(AbsTy);
-    if (!STy) {
-      AbsTy->removeAbstractTypeUser(this);
-      return;
-    }
-
-    StructLayout *SL = LayoutInfo[STy];
-    if (SL) {
-      SL->~StructLayout();
-      free(SL);
-      LayoutInfo[STy] = NULL;
-    }
-
+    const StructType *STy = cast<const StructType>(AbsTy);
+    LayoutInfoTy::iterator Iter = LayoutInfo.find(STy);
+    Iter->second->~StructLayout();
+    free(Iter->second);
+    LayoutInfo.erase(Iter);
     AbsTy->removeAbstractTypeUser(this);
   }
 
-  bool insert(const Type *Ty) {
-    if (Ty->isAbstract())
-      Ty->addAbstractTypeUser(this);
-    return true;
-  }
-
 public:
   virtual ~StructLayoutMap() {
     // Remove any layouts.
     for (LayoutInfoTy::iterator
-           I = LayoutInfo.begin(), E = LayoutInfo.end(); I != E; ++I)
-      if (StructLayout *SL = I->second) {
-        SL->~StructLayout();
-        free(SL);
-      }
-  }
+           I = LayoutInfo.begin(), E = LayoutInfo.end(); I != E; ++I) {
+      const Type *Key = I->first;
+      StructLayout *Value = I->second;
 
-  inline LayoutInfoTy::iterator begin() {
-    return LayoutInfo.begin();
-  }
-  inline LayoutInfoTy::iterator end() {
-    return LayoutInfo.end();
-  }
-  inline LayoutInfoTy::const_iterator begin() const {
-    return LayoutInfo.begin();
-  }
-  inline LayoutInfoTy::const_iterator end() const {
-    return LayoutInfo.end();
-  }
+      if (Key->isAbstract())
+        Key->removeAbstractTypeUser(this);
 
-  LayoutInfoTy::iterator find(const StructType *&Val) {
-    return LayoutInfo.find(Val);
-  }
-  LayoutInfoTy::const_iterator find(const StructType *&Val) const {
-    return LayoutInfo.find(Val);
+      Value->~StructLayout();
+      free(Value);
+    }
   }
 
-  bool erase(const StructType *&Val) {
-    return LayoutInfo.erase(Val);
-  }
-  bool erase(LayoutInfoTy::iterator I) {
-    return LayoutInfo.erase(I);
+  void InvalidateEntry(const StructType *Ty) {
+    LayoutInfoTy::iterator I = LayoutInfo.find(Ty);
+    if (I == LayoutInfo.end()) return;
+
+    I->second->~StructLayout();
+    free(I->second);
+    LayoutInfo.erase(I);
+
+    if (Ty->isAbstract())
+      Ty->removeAbstractTypeUser(this);
   }
 
-  StructLayout *&operator[](const Type *Key) {
-    const StructType *STy = dyn_cast<const StructType>(Key);
-    assert(STy && "Trying to access the struct layout map with a non-struct!");
-    insert(STy);
+  StructLayout *&operator[](const StructType *STy) {
     return LayoutInfo[STy];
   }
 
@@ -429,17 +385,18 @@ public:
   virtual void dump() const {}
 };
 
-} // end namespace llvm
+} // end anonymous namespace
 
 TargetData::~TargetData() {
-  delete LayoutMap;
+  delete static_cast<StructLayoutMap*>(LayoutMap);
 }
 
 const StructLayout *TargetData::getStructLayout(const StructType *Ty) const {
   if (!LayoutMap)
     LayoutMap = new StructLayoutMap();
   
-  StructLayout *&SL = (*LayoutMap)[Ty];
+  StructLayoutMap *STM = static_cast<StructLayoutMap*>(LayoutMap);
+  StructLayout *&SL = (*STM)[Ty];
   if (SL) return SL;
 
   // Otherwise, create the struct layout.  Because it is variable length, we 
@@ -453,6 +410,10 @@ const StructLayout *TargetData::getStructLayout(const StructType *Ty) const {
   SL = L;
   
   new (L) StructLayout(Ty, *this);
+
+  if (Ty->isAbstract())
+    Ty->addAbstractTypeUser(STM);
+
   return L;
 }
 
@@ -463,15 +424,10 @@ const StructLayout *TargetData::getStructLayout(const StructType *Ty) const {
 void TargetData::InvalidateStructLayoutInfo(const StructType *Ty) const {
   if (!LayoutMap) return;  // No cache.
   
-  DenseMap<const StructType*, StructLayout*>::iterator I = LayoutMap->find(Ty);
-  if (I == LayoutMap->end()) return;
-  
-  I->second->~StructLayout();
-  free(I->second);
-  LayoutMap->erase(I);
+  StructLayoutMap *STM = static_cast<StructLayoutMap*>(LayoutMap);
+  STM->InvalidateEntry(Ty);
 }
 
-
 std::string TargetData::getStringRepresentation() const {
   std::string Result;
   raw_string_ostream OS(Result);
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86.h b/libclamav/c++/llvm/lib/Target/X86/X86.h
index a167118..684c61f 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86.h
@@ -62,11 +62,6 @@ MCCodeEmitter *createX86MCCodeEmitter(const Target &, TargetMachine &TM);
 ///
 FunctionPass *createEmitX86CodeToMemory();
 
-/// createX86MaxStackAlignmentCalculatorPass - This function returns a pass
-/// which calculates maximal stack alignment required for function
-///
-FunctionPass *createX86MaxStackAlignmentCalculatorPass();
-
 extern Target TheX86_32Target, TheX86_64Target;
 
 } // End llvm namespace
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.h
index afd5525..5017af2 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.h
@@ -16,6 +16,7 @@
 
 #include "llvm/CodeGen/MachineModuleInfo.h"
 #include "llvm/ADT/StringSet.h"
+#include "X86MachineFunctionInfo.h"
 
 namespace llvm {
   class X86MachineFunctionInfo;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp b/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp
index a2fe9b0..044bd4b 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp
@@ -289,7 +289,7 @@ bool FPS::processBasicBlock(MachineFunction &MF, MachineBasicBlock &BB) {
         while (Start != BB.begin() && prior(Start) != PrevI) --Start;
         errs() << "Inserted instructions:\n\t";
         Start->print(errs(), &MF.getTarget());
-        while (++Start != next(I)) {}
+        while (++Start != llvm::next(I)) {}
       }
       dumpStack();
     );
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
index 8567ca4..8c3b707 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -595,6 +595,7 @@ X86TargetLowering::X86TargetLowering(X86TargetMachine &TM)
     setOperationAction(ISD::FP_TO_SINT, (MVT::SimpleValueType)VT, Expand);
     setOperationAction(ISD::UINT_TO_FP, (MVT::SimpleValueType)VT, Expand);
     setOperationAction(ISD::SINT_TO_FP, (MVT::SimpleValueType)VT, Expand);
+    setOperationAction(ISD::SIGN_EXTEND_INREG, (MVT::SimpleValueType)VT,Expand);
   }
 
   // FIXME: In order to prevent SSE instructions being expanded to MMX ones
@@ -975,6 +976,19 @@ X86TargetLowering::X86TargetLowering(X86TargetMachine &TM)
 
   computeRegisterProperties();
 
+  // Divide and remainder operations have no vector equivalent and can
+  // trap. Do a custom widening for these operations in which we never
+  // generate more divides/remainder than the original vector width.
+  for (unsigned VT = (unsigned)MVT::FIRST_VECTOR_VALUETYPE;
+       VT <= (unsigned)MVT::LAST_VECTOR_VALUETYPE; ++VT) {
+    if (!isTypeLegal((MVT::SimpleValueType)VT)) {
+      setOperationAction(ISD::SDIV, (MVT::SimpleValueType) VT, Custom);
+      setOperationAction(ISD::UDIV, (MVT::SimpleValueType) VT, Custom);
+      setOperationAction(ISD::SREM, (MVT::SimpleValueType) VT, Custom);
+      setOperationAction(ISD::UREM, (MVT::SimpleValueType) VT, Custom);
+    }
+  }
+
   // FIXME: These should be based on subtarget info. Plus, the values should
   // be smaller when we are in optimizing for size mode.
   maxStoresPerMemset = 16; // For @llvm.memset -> sequence of stores
@@ -3331,6 +3345,82 @@ static SDValue getVShift(bool isLeft, EVT VT, SDValue SrcOp,
 }
 
 SDValue
+X86TargetLowering::LowerAsSplatVectorLoad(SDValue SrcOp, EVT VT, DebugLoc dl,
+                                          SelectionDAG &DAG) {
+  
+  // Check if the scalar load can be widened into a vector load, and if
+  // the address is "base + cst", see if the cst can be "absorbed" into
+  // the shuffle mask.
+  if (LoadSDNode *LD = dyn_cast<LoadSDNode>(SrcOp)) {
+    SDValue Ptr = LD->getBasePtr();
+    if (!ISD::isNormalLoad(LD) || LD->isVolatile())
+      return SDValue();
+    EVT PVT = LD->getValueType(0);
+    if (PVT != MVT::i32 && PVT != MVT::f32)
+      return SDValue();
+
+    int FI = -1;
+    int64_t Offset = 0;
+    if (FrameIndexSDNode *FINode = dyn_cast<FrameIndexSDNode>(Ptr)) {
+      FI = FINode->getIndex();
+      Offset = 0;
+    } else if (Ptr.getOpcode() == ISD::ADD &&
+               isa<ConstantSDNode>(Ptr.getOperand(1)) &&
+               isa<FrameIndexSDNode>(Ptr.getOperand(0))) {
+      FI = cast<FrameIndexSDNode>(Ptr.getOperand(0))->getIndex();
+      Offset = Ptr.getConstantOperandVal(1);
+      Ptr = Ptr.getOperand(0);
+    } else {
+      return SDValue();
+    }
+
+    SDValue Chain = LD->getChain();
+    // Make sure the stack object alignment is at least 16.
+    MachineFrameInfo *MFI = DAG.getMachineFunction().getFrameInfo();
+    if (DAG.InferPtrAlignment(Ptr) < 16) {
+      if (MFI->isFixedObjectIndex(FI)) {
+        // Can't change the alignment. Reference stack + offset explicitly
+        // if stack pointer is at least 16-byte aligned.
+        unsigned StackAlign = Subtarget->getStackAlignment();
+        if (StackAlign < 16)
+          return SDValue();
+        Offset = MFI->getObjectOffset(FI) + Offset;
+        SDValue StackPtr = DAG.getCopyFromReg(Chain, dl, X86StackPtr,
+                                              getPointerTy());
+        Ptr = DAG.getNode(ISD::ADD, dl, getPointerTy(), StackPtr,
+                          DAG.getConstant(Offset & ~15, getPointerTy()));
+        Offset %= 16;
+      } else {
+        MFI->setObjectAlignment(FI, 16);
+      }
+    }
+
+    // (Offset % 16) must be a multiple of 4. The address is then
+    // Ptr + (Offset & ~15).
+    if (Offset < 0)
+      return SDValue();
+    if ((Offset % 16) & 3)
+      return SDValue();
+    int64_t StartOffset = Offset & ~15;
+    if (StartOffset)
+      Ptr = DAG.getNode(ISD::ADD, Ptr.getDebugLoc(), Ptr.getValueType(),
+                        Ptr,DAG.getConstant(StartOffset, Ptr.getValueType()));
+
+    int EltNo = (Offset - StartOffset) >> 2;
+    int Mask[4] = { EltNo, EltNo, EltNo, EltNo };
+    EVT VT = (PVT == MVT::i32) ? MVT::v4i32 : MVT::v4f32;
+    SDValue V1 = DAG.getLoad(VT, dl, Chain, Ptr,LD->getSrcValue(),0);
+    // Canonicalize it to a v4i32 shuffle.
+    V1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::v4i32, V1);
+    return DAG.getNode(ISD::BIT_CONVERT, dl, VT,
+                       DAG.getVectorShuffle(MVT::v4i32, dl, V1,
+                                            DAG.getUNDEF(MVT::v4i32), &Mask[0]));
+  }
+
+  return SDValue();
+}
+
+SDValue
 X86TargetLowering::LowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG) {
   DebugLoc dl = Op.getDebugLoc();
   // All zero's are handled with pxor, all one's are handled with pcmpeqd.
@@ -3473,8 +3563,19 @@ X86TargetLowering::LowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG) {
   }
 
   // Splat is obviously ok. Let legalizer expand it to a shuffle.
-  if (Values.size() == 1)
+  if (Values.size() == 1) {
+    if (EVTBits == 32) {
+      // Instead of a shuffle like this:
+      // shuffle (scalar_to_vector (load (ptr + 4))), undef, <0, 0, 0, 0>
+      // Check if it's possible to issue this instead.
+      // shuffle (vload ptr)), undef, <1, 1, 1, 1>
+      unsigned Idx = CountTrailingZeros_32(NonZeros);
+      SDValue Item = Op.getOperand(Idx);
+      if (Op.getNode()->isOnlyUserOf(Item.getNode()))
+        return LowerAsSplatVectorLoad(Item, VT, dl, DAG);
+    }
     return SDValue();
+  }
 
   // A vector full of immediates; various special cases are already
   // handled, so this is best done with a single constant-pool load.
@@ -4265,7 +4366,7 @@ X86TargetLowering::LowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG) {
   unsigned ShAmt = 0;
   SDValue ShVal;
   bool isShift = getSubtarget()->hasSSE2() &&
-  isVectorShift(SVOp, DAG, isLeft, ShVal, ShAmt);
+    isVectorShift(SVOp, DAG, isLeft, ShVal, ShAmt);
   if (isShift && ShVal.hasOneUse()) {
     // If the shifted value has multiple uses, it may be cheaper to use
     // v_set0 + movlhps or movhlps, etc.
@@ -4802,6 +4903,7 @@ static SDValue
 GetTLSADDR(SelectionDAG &DAG, SDValue Chain, GlobalAddressSDNode *GA,
            SDValue *InFlag, const EVT PtrVT, unsigned ReturnReg,
            unsigned char OperandFlags) {
+  MachineFrameInfo *MFI = DAG.getMachineFunction().getFrameInfo();
   SDVTList NodeTys = DAG.getVTList(MVT::Other, MVT::Flag);
   DebugLoc dl = GA->getDebugLoc();
   SDValue TGA = DAG.getTargetGlobalAddress(GA->getGlobal(),
@@ -4815,6 +4917,10 @@ GetTLSADDR(SelectionDAG &DAG, SDValue Chain, GlobalAddressSDNode *GA,
     SDValue Ops[]  = { Chain, TGA };
     Chain = DAG.getNode(X86ISD::TLSADDR, dl, NodeTys, Ops, 2);
   }
+
+  // TLSADDR will be codegen'ed as a call. Inform MFI that the function has calls.
+  MFI->setHasCalls(true);
+
   SDValue Flag = Chain.getValue(1);
   return DAG.getCopyFromReg(Chain, dl, ReturnReg, PtrVT, Flag);
 }
@@ -7170,6 +7276,14 @@ void X86TargetLowering::ReplaceNodeResults(SDNode *N,
     Results.push_back(edx.getValue(1));
     return;
   }
+  case ISD::SDIV:
+  case ISD::UDIV:
+  case ISD::SREM:
+  case ISD::UREM: {
+    EVT WidenVT = getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
+    Results.push_back(DAG.UnrollVectorOp(N, WidenVT.getVectorNumElements()));
+    return;
+  }
   case ISD::ATOMIC_CMP_SWAP: {
     EVT T = N->getValueType(0);
     assert (T == MVT::i64 && "Only know how to expand i64 Cmp and Swap");
@@ -8306,16 +8420,6 @@ bool X86TargetLowering::isGAPlusOffset(SDNode *N,
   return TargetLowering::isGAPlusOffset(N, GA, Offset);
 }
 
-static bool isBaseAlignmentOfN(unsigned N, SDNode *Base,
-                               const TargetLowering &TLI) {
-  GlobalValue *GV;
-  int64_t Offset = 0;
-  if (TLI.isGAPlusOffset(Base, GV, Offset))
-    return (GV->getAlignment() >= N && (Offset % N) == 0);
-  // DAG combine handles the stack object case.
-  return false;
-}
-
 static bool EltsFromConsecutiveLoads(ShuffleVectorSDNode *N, unsigned NumElems,
                                      EVT EltVT, LoadSDNode *&LDBase,
                                      unsigned &LastLoadedElt,
@@ -8345,7 +8449,7 @@ static bool EltsFromConsecutiveLoads(ShuffleVectorSDNode *N, unsigned NumElems,
       continue;
 
     LoadSDNode *LD = cast<LoadSDNode>(Elt);
-    if (!TLI.isConsecutiveLoad(LD, LDBase, EltVT.getSizeInBits()/8, i, MFI))
+    if (!DAG.isConsecutiveLoad(LD, LDBase, EltVT.getSizeInBits()/8, i))
       return false;
     LastLoadedElt = i;
   }
@@ -8378,7 +8482,7 @@ static SDValue PerformShuffleCombine(SDNode *N, SelectionDAG &DAG,
     return SDValue();
 
   if (LastLoadedElt == NumElems - 1) {
-    if (isBaseAlignmentOfN(16, LD->getBasePtr().getNode(), TLI))
+    if (DAG.InferPtrAlignment(LD->getBasePtr()) >= 16)
       return DAG.getLoad(VT, dl, LD->getChain(), LD->getBasePtr(),
                          LD->getSrcValue(), LD->getSrcValueOffset(),
                          LD->isVolatile());
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
index 7b4ab62..89b773d 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
@@ -626,7 +626,9 @@ namespace llvm {
 
     std::pair<SDValue,SDValue> FP_TO_INTHelper(SDValue Op, SelectionDAG &DAG,
                                                bool isSigned);
-    
+
+    SDValue LowerAsSplatVectorLoad(SDValue SrcOp, EVT VT, DebugLoc dl,
+                                   SelectionDAG &DAG);
     SDValue LowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG);
     SDValue LowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG);
     SDValue LowerEXTRACT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG);
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
index a01534b..b5fa862 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
@@ -1663,7 +1663,7 @@ def : Pat<(X86tcret GR64:$dst, imm:$off),
           (TCRETURNri64 GR64:$dst, imm:$off)>;
 
 def : Pat<(X86tcret (i64 tglobaladdr:$dst), imm:$off),
-          (TCRETURNdi64 texternalsym:$dst, imm:$off)>;
+          (TCRETURNdi64 tglobaladdr:$dst, imm:$off)>;
 
 def : Pat<(X86tcret (i64 texternalsym:$dst), imm:$off),
           (TCRETURNdi64 texternalsym:$dst, imm:$off)>;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
index a37013d..d45dcce 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
@@ -783,12 +783,14 @@ unsigned X86InstrInfo::isLoadFromStackSlotPostFE(const MachineInstr *MI,
     if ((Reg = isLoadFromStackSlot(MI, FrameIndex)))
       return Reg;
     // Check for post-frame index elimination operations
-    return hasLoadFromStackSlot(MI, FrameIndex);
+    const MachineMemOperand *Dummy;
+    return hasLoadFromStackSlot(MI, Dummy, FrameIndex);
   }
   return 0;
 }
 
 bool X86InstrInfo::hasLoadFromStackSlot(const MachineInstr *MI,
+                                        const MachineMemOperand *&MMO,
                                         int &FrameIndex) const {
   for (MachineInstr::mmo_iterator o = MI->memoperands_begin(),
          oe = MI->memoperands_end();
@@ -798,6 +800,7 @@ bool X86InstrInfo::hasLoadFromStackSlot(const MachineInstr *MI,
       if (const FixedStackPseudoSourceValue *Value =
           dyn_cast<const FixedStackPseudoSourceValue>((*o)->getValue())) {
         FrameIndex = Value->getFrameIndex();
+        MMO = *o;
         return true;
       }
   }
@@ -819,12 +822,14 @@ unsigned X86InstrInfo::isStoreToStackSlotPostFE(const MachineInstr *MI,
     if ((Reg = isStoreToStackSlot(MI, FrameIndex)))
       return Reg;
     // Check for post-frame index elimination operations
-    return hasStoreToStackSlot(MI, FrameIndex);
+    const MachineMemOperand *Dummy;
+    return hasStoreToStackSlot(MI, Dummy, FrameIndex);
   }
   return 0;
 }
 
 bool X86InstrInfo::hasStoreToStackSlot(const MachineInstr *MI,
+                                       const MachineMemOperand *&MMO,
                                        int &FrameIndex) const {
   for (MachineInstr::mmo_iterator o = MI->memoperands_begin(),
          oe = MI->memoperands_end();
@@ -834,6 +839,7 @@ bool X86InstrInfo::hasStoreToStackSlot(const MachineInstr *MI,
       if (const FixedStackPseudoSourceValue *Value =
           dyn_cast<const FixedStackPseudoSourceValue>((*o)->getValue())) {
         FrameIndex = Value->getFrameIndex();
+        MMO = *o;
         return true;
       }
   }
@@ -1052,6 +1058,107 @@ static bool hasLiveCondCodeDef(MachineInstr *MI) {
   return false;
 }
 
+/// convertToThreeAddressWithLEA - Helper for convertToThreeAddress when 16-bit
+/// LEA is disabled: use 32-bit LEA to form 3-address code by promoting
+/// to a 32-bit superregister and then truncating back down to a 16-bit
+/// subregister.
+MachineInstr *
+X86InstrInfo::convertToThreeAddressWithLEA(unsigned MIOpc,
+                                           MachineFunction::iterator &MFI,
+                                           MachineBasicBlock::iterator &MBBI,
+                                           LiveVariables *LV) const {
+  MachineInstr *MI = MBBI;
+  unsigned Dest = MI->getOperand(0).getReg();
+  unsigned Src = MI->getOperand(1).getReg();
+  bool isDead = MI->getOperand(0).isDead();
+  bool isKill = MI->getOperand(1).isKill();
+
+  unsigned Opc = TM.getSubtarget<X86Subtarget>().is64Bit()
+    ? X86::LEA64_32r : X86::LEA32r;
+  MachineRegisterInfo &RegInfo = MFI->getParent()->getRegInfo();
+  unsigned leaInReg = RegInfo.createVirtualRegister(&X86::GR32RegClass);
+  unsigned leaOutReg = RegInfo.createVirtualRegister(&X86::GR32RegClass);
+            
+  // Build and insert into an implicit UNDEF value. This is OK because
+  // we'll be shifting and then extracting the lower 16 bits.
+  BuildMI(*MFI, MBBI, MI->getDebugLoc(), get(X86::IMPLICIT_DEF), leaInReg);
+  MachineInstr *InsMI =
+    BuildMI(*MFI, MBBI, MI->getDebugLoc(), get(X86::INSERT_SUBREG),leaInReg)
+    .addReg(leaInReg)
+    .addReg(Src, getKillRegState(isKill))
+    .addImm(X86::SUBREG_16BIT);
+
+  MachineInstrBuilder MIB = BuildMI(*MFI, MBBI, MI->getDebugLoc(),
+                                    get(Opc), leaOutReg);
+  switch (MIOpc) {
+  default:
+    llvm_unreachable(0);
+    break;
+  case X86::SHL16ri: {
+    unsigned ShAmt = MI->getOperand(2).getImm();
+    MIB.addReg(0).addImm(1 << ShAmt)
+       .addReg(leaInReg, RegState::Kill).addImm(0);
+    break;
+  }
+  case X86::INC16r:
+  case X86::INC64_16r:
+    addLeaRegOffset(MIB, leaInReg, true, 1);
+    break;
+  case X86::DEC16r:
+  case X86::DEC64_16r:
+    addLeaRegOffset(MIB, leaInReg, true, -1);
+    break;
+  case X86::ADD16ri:
+  case X86::ADD16ri8:
+    addLeaRegOffset(MIB, leaInReg, true, MI->getOperand(2).getImm());    
+    break;
+  case X86::ADD16rr: {
+    unsigned Src2 = MI->getOperand(2).getReg();
+    bool isKill2 = MI->getOperand(2).isKill();
+    unsigned leaInReg2 = 0;
+    MachineInstr *InsMI2 = 0;
+    if (Src == Src2) {
+      // ADD16rr %reg1028<kill>, %reg1028: the two sources are identical,
+      // so a single insert_subreg suffices.
+      addRegReg(MIB, leaInReg, true, leaInReg, false);
+    } else {
+      leaInReg2 = RegInfo.createVirtualRegister(&X86::GR32RegClass);
+      // Build an implicit UNDEF value and insert into it. This is OK because
+      // we'll be shifting and then extracting the lower 16 bits.
+      BuildMI(*MFI, MIB, MI->getDebugLoc(), get(X86::IMPLICIT_DEF), leaInReg2);
+      InsMI2 =
+        BuildMI(*MFI, MIB, MI->getDebugLoc(), get(X86::INSERT_SUBREG),leaInReg2)
+        .addReg(leaInReg2)
+        .addReg(Src2, getKillRegState(isKill2))
+        .addImm(X86::SUBREG_16BIT);
+      addRegReg(MIB, leaInReg, true, leaInReg2, true);
+    }
+    if (LV && isKill2 && InsMI2)
+      LV->replaceKillInstruction(Src2, MI, InsMI2);
+    break;
+  }
+  }
+
+  MachineInstr *NewMI = MIB;
+  MachineInstr *ExtMI =
+    BuildMI(*MFI, MBBI, MI->getDebugLoc(), get(X86::EXTRACT_SUBREG))
+    .addReg(Dest, RegState::Define | getDeadRegState(isDead))
+    .addReg(leaOutReg, RegState::Kill)
+    .addImm(X86::SUBREG_16BIT);
+
+  if (LV) {
+    // Update live variables
+    LV->getVarInfo(leaInReg).Kills.push_back(NewMI);
+    LV->getVarInfo(leaOutReg).Kills.push_back(ExtMI);
+    if (isKill)
+      LV->replaceKillInstruction(Src, MI, InsMI);
+    if (isDead)
+      LV->replaceKillInstruction(Dest, MI, ExtMI);
+  }
+
+  return ExtMI;
+}
+
 /// convertToThreeAddress - This method must be implemented by targets that
 /// set the M_CONVERTIBLE_TO_3_ADDR flag.  When this flag is set, the target
 /// may be able to convert a two-address instruction into a true
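
The promotion trick in the new helper rests on a simple arithmetic invariant: a 16-bit operation computed in a 32-bit register and then truncated yields the same low 16 bits as the native 16-bit operation. A minimal sketch of the SHL16ri path in plain C++ (hypothetical names, not the LLVM API):

```cpp
#include <cassert>
#include <cstdint>

// Emulate the 3-address sequence the helper emits for SHL16ri:
// promote the 16-bit source to 32 bits, let a 32-bit LEA do the
// work (scale = 1 << shamt), then truncate back to 16 bits.
uint16_t shl16_via_lea32(uint16_t src, unsigned shamt) {
    uint32_t leaIn  = src;                       // INSERT_SUBREG into UNDEF
    uint32_t leaOut = 0 + leaIn * (1u << shamt); // LEA: base 0, scale 1<<shamt
    return static_cast<uint16_t>(leaOut);        // EXTRACT_SUBREG (low 16 bits)
}
```

The upper 16 bits of the 32-bit intermediate are garbage after the UNDEF insert, which is harmless precisely because only the low half is extracted.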
@@ -1131,51 +1238,13 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     unsigned ShAmt = MI->getOperand(2).getImm();
     if (ShAmt == 0 || ShAmt >= 4) return 0;
 
-    if (DisableLEA16) {
-      // If 16-bit LEA is disabled, use 32-bit LEA via subregisters.
-      MachineRegisterInfo &RegInfo = MFI->getParent()->getRegInfo();
-      unsigned Opc = TM.getSubtarget<X86Subtarget>().is64Bit()
-        ? X86::LEA64_32r : X86::LEA32r;
-      unsigned leaInReg = RegInfo.createVirtualRegister(&X86::GR32RegClass);
-      unsigned leaOutReg = RegInfo.createVirtualRegister(&X86::GR32RegClass);
-            
-      // Build and insert into an implicit UNDEF value. This is OK because
-      // well be shifting and then extracting the lower 16-bits. 
-      BuildMI(*MFI, MBBI, MI->getDebugLoc(), get(X86::IMPLICIT_DEF), leaInReg);
-      MachineInstr *InsMI =
-        BuildMI(*MFI, MBBI, MI->getDebugLoc(), get(X86::INSERT_SUBREG),leaInReg)
-        .addReg(leaInReg)
-        .addReg(Src, getKillRegState(isKill))
-        .addImm(X86::SUBREG_16BIT);
-      
-      NewMI = BuildMI(*MFI, MBBI, MI->getDebugLoc(), get(Opc), leaOutReg)
-        .addReg(0).addImm(1 << ShAmt)
-        .addReg(leaInReg, RegState::Kill)
-        .addImm(0);
-      
-      MachineInstr *ExtMI =
-        BuildMI(*MFI, MBBI, MI->getDebugLoc(), get(X86::EXTRACT_SUBREG))
-        .addReg(Dest, RegState::Define | getDeadRegState(isDead))
-        .addReg(leaOutReg, RegState::Kill)
-        .addImm(X86::SUBREG_16BIT);
-
-      if (LV) {
-        // Update live variables
-        LV->getVarInfo(leaInReg).Kills.push_back(NewMI);
-        LV->getVarInfo(leaOutReg).Kills.push_back(ExtMI);
-        if (isKill)
-          LV->replaceKillInstruction(Src, MI, InsMI);
-        if (isDead)
-          LV->replaceKillInstruction(Dest, MI, ExtMI);
-      }
-      return ExtMI;
-    } else {
-      NewMI = BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
-        .addReg(Dest, RegState::Define | getDeadRegState(isDead))
-        .addReg(0).addImm(1 << ShAmt)
-        .addReg(Src, getKillRegState(isKill))
-        .addImm(0);
-    }
+    if (DisableLEA16)
+      return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
+    NewMI = BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
+      .addReg(Dest, RegState::Define | getDeadRegState(isDead))
+      .addReg(0).addImm(1 << ShAmt)
+      .addReg(Src, getKillRegState(isKill))
+      .addImm(0);
     break;
   }
   default: {
@@ -1202,7 +1271,8 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     }
     case X86::INC16r:
     case X86::INC64_16r:
-      if (DisableLEA16) return 0;
+      if (DisableLEA16)
+        return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
       assert(MI->getNumOperands() >= 2 && "Unknown inc instruction!");
       NewMI = addRegOffset(BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
                            .addReg(Dest, RegState::Define |
@@ -1223,7 +1293,8 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     }
     case X86::DEC16r:
     case X86::DEC64_16r:
-      if (DisableLEA16) return 0;
+      if (DisableLEA16)
+        return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
       assert(MI->getNumOperands() >= 2 && "Unknown dec instruction!");
       NewMI = addRegOffset(BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
                            .addReg(Dest, RegState::Define |
@@ -1246,7 +1317,8 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
       break;
     }
     case X86::ADD16rr: {
-      if (DisableLEA16) return 0;
+      if (DisableLEA16)
+        return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
       assert(MI->getNumOperands() >= 3 && "Unknown add instruction!");
       unsigned Src2 = MI->getOperand(2).getReg();
       bool isKill2 = MI->getOperand(2).isKill();
@@ -1261,56 +1333,32 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     case X86::ADD64ri32:
     case X86::ADD64ri8:
       assert(MI->getNumOperands() >= 3 && "Unknown add instruction!");
-      if (MI->getOperand(2).isImm())
-        NewMI = addLeaRegOffset(BuildMI(MF, MI->getDebugLoc(), get(X86::LEA64r))
-                                .addReg(Dest, RegState::Define |
-                                        getDeadRegState(isDead)),
-                                Src, isKill, MI->getOperand(2).getImm());
+      NewMI = addLeaRegOffset(BuildMI(MF, MI->getDebugLoc(), get(X86::LEA64r))
+                              .addReg(Dest, RegState::Define |
+                                      getDeadRegState(isDead)),
+                              Src, isKill, MI->getOperand(2).getImm());
       break;
     case X86::ADD32ri:
-    case X86::ADD32ri8:
+    case X86::ADD32ri8: {
       assert(MI->getNumOperands() >= 3 && "Unknown add instruction!");
-      if (MI->getOperand(2).isImm()) {
-        unsigned Opc = is64Bit ? X86::LEA64_32r : X86::LEA32r;
-        NewMI = addLeaRegOffset(BuildMI(MF, MI->getDebugLoc(), get(Opc))
-                                .addReg(Dest, RegState::Define |
-                                        getDeadRegState(isDead)),
+      unsigned Opc = is64Bit ? X86::LEA64_32r : X86::LEA32r;
+      NewMI = addLeaRegOffset(BuildMI(MF, MI->getDebugLoc(), get(Opc))
+                              .addReg(Dest, RegState::Define |
+                                      getDeadRegState(isDead)),
                                 Src, isKill, MI->getOperand(2).getImm());
-      }
       break;
+    }
     case X86::ADD16ri:
     case X86::ADD16ri8:
-      if (DisableLEA16) return 0;
+      if (DisableLEA16)
+        return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
       assert(MI->getNumOperands() >= 3 && "Unknown add instruction!");
-      if (MI->getOperand(2).isImm())
-        NewMI = addRegOffset(BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
-                             .addReg(Dest, RegState::Define |
-                                     getDeadRegState(isDead)),
-                             Src, isKill, MI->getOperand(2).getImm());
-      break;
-    case X86::SHL16ri:
-      if (DisableLEA16) return 0;
-    case X86::SHL32ri:
-    case X86::SHL64ri: {
-      assert(MI->getNumOperands() >= 3 && MI->getOperand(2).isImm() &&
-             "Unknown shl instruction!");
-      unsigned ShAmt = MI->getOperand(2).getImm();
-      if (ShAmt == 1 || ShAmt == 2 || ShAmt == 3) {
-        X86AddressMode AM;
-        AM.Scale = 1 << ShAmt;
-        AM.IndexReg = Src;
-        unsigned Opc = MIOpc == X86::SHL64ri ? X86::LEA64r
-          : (MIOpc == X86::SHL32ri
-             ? (is64Bit ? X86::LEA64_32r : X86::LEA32r) : X86::LEA16r);
-        NewMI = addFullAddress(BuildMI(MF, MI->getDebugLoc(), get(Opc))
-                               .addReg(Dest, RegState::Define |
-                                       getDeadRegState(isDead)), AM);
-        if (isKill)
-          NewMI->getOperand(3).setIsKill(true);
-      }
+      NewMI = addLeaRegOffset(BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
+                              .addReg(Dest, RegState::Define |
+                                      getDeadRegState(isDead)),
+                              Src, isKill, MI->getOperand(2).getImm());
       break;
     }
-    }
   }
   }
 
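
Every conversion in this hunk targets the same LEA address computation, base + index*scale + disp, which (unlike ADD/INC/DEC/SHL) does not modify EFLAGS. A hedged sketch of how each 2-address form maps onto that formula (hypothetical helper, not the real X86AddressMode struct):

```cpp
#include <cassert>
#include <cstdint>

// LEA computes base + index*scale + disp as pure address arithmetic,
// which is what lets 2-address add/inc/dec/shl become 3-address code.
uint32_t lea(uint32_t base, uint32_t index, uint32_t scale, int32_t disp) {
    return base + index * scale + disp;
}
```

For example, INC is disp = 1, DEC is disp = -1, ADDri is disp = imm, ADDrr uses src2 as the index with scale 1, and SHL by k uses scale = 1 << k with no base.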
@@ -1587,8 +1635,8 @@ bool X86InstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,
       }
 
       // If the block has any instructions after a JMP, delete them.
-      while (next(I) != MBB.end())
-        next(I)->eraseFromParent();
+      while (llvm::next(I) != MBB.end())
+        llvm::next(I)->eraseFromParent();
       Cond.clear();
       FBB = 0;
       // Delete the JMP if it's equivalent to a fall-through.
@@ -2713,27 +2761,6 @@ unsigned X86InstrInfo::getOpcodeAfterMemoryUnfold(unsigned Opc,
   return I->second.first;
 }
 
-bool X86InstrInfo::BlockHasNoFallThrough(const MachineBasicBlock &MBB) const {
-  if (MBB.empty()) return false;
-  
-  switch (MBB.back().getOpcode()) {
-  case X86::TCRETURNri:
-  case X86::TCRETURNdi:
-  case X86::RET:     // Return.
-  case X86::RETI:
-  case X86::TAILJMPd:
-  case X86::TAILJMPr:
-  case X86::TAILJMPm:
-  case X86::JMP:     // Uncond branch.
-  case X86::JMP32r:  // Indirect branch.
-  case X86::JMP64r:  // Indirect branch (64-bit).
-  case X86::JMP32m:  // Indirect branch through mem.
-  case X86::JMP64m:  // Indirect branch through mem (64-bit).
-    return true;
-  default: return false;
-  }
-}
-
 bool X86InstrInfo::
 ReverseBranchCondition(SmallVectorImpl<MachineOperand> &Cond) const {
   assert(Cond.size() == 1 && "Invalid X86 branch condition!");
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h
index 3d4c2f6..b83441d 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h
@@ -457,11 +457,14 @@ public:
 
   /// hasLoadFromStackSlot - If the specified machine instruction has
   /// a load from a stack slot, return true along with the FrameIndex
-  /// of the loaded stack slot.  If not, return false.  Unlike
+  /// of the loaded stack slot and the machine mem operand containing
+  /// the reference.  If not, return false.  Unlike
   /// isLoadFromStackSlot, this returns true for any instructions that
   /// loads from the stack.  This is a hint only and may not catch all
   /// cases.
-  bool hasLoadFromStackSlot(const MachineInstr *MI, int &FrameIndex) const;
+  bool hasLoadFromStackSlot(const MachineInstr *MI,
+                            const MachineMemOperand *&MMO,
+                            int &FrameIndex) const;
 
   unsigned isStoreToStackSlot(const MachineInstr *MI, int &FrameIndex) const;
   /// isStoreToStackSlotPostFE - Check for post-frame ptr elimination
@@ -472,11 +475,13 @@ public:
 
   /// hasStoreToStackSlot - If the specified machine instruction has a
   /// store to a stack slot, return true along with the FrameIndex of
-  /// the loaded stack slot.  If not, return false.  Unlike
-  /// isStoreToStackSlot, this returns true for any instructions that
-  /// loads from the stack.  This is a hint only and may not catch all
-  /// cases.
-  bool hasStoreToStackSlot(const MachineInstr *MI, int &FrameIndex) const;
+  /// the stored stack slot and the machine mem operand containing the
+  /// reference.  If not, return false.  Unlike isStoreToStackSlot,
+  /// this returns true for any instructions that store to the
+  /// stack.  This is a hint only and may not catch all cases.
+  bool hasStoreToStackSlot(const MachineInstr *MI,
+                           const MachineMemOperand *&MMO,
+                           int &FrameIndex) const;
 
   bool isReallyTriviallyReMaterializable(const MachineInstr *MI,
                                          AliasAnalysis *AA) const;
@@ -595,7 +600,6 @@ public:
                                       bool UnfoldLoad, bool UnfoldStore,
                                       unsigned *LoadRegIndex = 0) const;
   
-  virtual bool BlockHasNoFallThrough(const MachineBasicBlock &MBB) const;
   virtual
   bool ReverseBranchCondition(SmallVectorImpl<MachineOperand> &Cond) const;
 
@@ -632,9 +636,12 @@ public:
   ///
   unsigned getGlobalBaseReg(MachineFunction *MF) const;
 
-  virtual bool isProfitableToDuplicateIndirectBranch() const { return true; }
-
 private:
+  MachineInstr * convertToThreeAddressWithLEA(unsigned MIOpc,
+                                              MachineFunction::iterator &MFI,
+                                              MachineBasicBlock::iterator &MBBI,
+                                              LiveVariables *LV) const;
+
   MachineInstr* foldMemoryOperandImpl(MachineFunction &MF,
                                      MachineInstr* MI,
                                      unsigned OpNum,
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
index 1cf5529..90ef1f4 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
@@ -718,7 +718,6 @@ def TCRETURNri : I<0, Pseudo, (outs), (ins GR32:$dst, i32imm:$offset, variable_o
                  []>;
 
 let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
-
   def TAILJMPd : IBr<0xE9, (ins i32imm_pcrel:$dst), "jmp\t$dst  # TAILCALL",
                  []>;
 let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
index dfdd4ce..62841f8 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
@@ -2083,7 +2083,7 @@ def PSHUFDmi : PDIi8<0x70, MRMSrcMem,
                      (outs VR128:$dst), (ins i128mem:$src1, i8imm:$src2),
                      "pshufd\t{$src2, $src1, $dst|$dst, $src1, $src2}",
                      [(set VR128:$dst, (v4i32 (pshufd:$src2
-                                             (bc_v4i32(memopv2i64 addr:$src1)),
+                                             (bc_v4i32 (memopv2i64 addr:$src1)),
                                              (undef))))]>;
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp
index f577fcf..d96aafd 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp
@@ -423,21 +423,6 @@ BitVector X86RegisterInfo::getReservedRegs(const MachineFunction &MF) const {
 // Stack Frame Processing methods
 //===----------------------------------------------------------------------===//
 
-static unsigned calculateMaxStackAlignment(const MachineFrameInfo *FFI) {
-  unsigned MaxAlign = 0;
-
-  for (int i = FFI->getObjectIndexBegin(),
-         e = FFI->getObjectIndexEnd(); i != e; ++i) {
-    if (FFI->isDeadObjectIndex(i))
-      continue;
-
-    unsigned Align = FFI->getObjectAlignment(i);
-    MaxAlign = std::max(MaxAlign, Align);
-  }
-
-  return MaxAlign;
-}
-
 /// hasFP - Return true if the specified function should have a dedicated frame
 /// pointer register.  This is true if the function has variable sized allocas
 /// or if frame pointer elimination is disabled.
@@ -638,10 +623,7 @@ X86RegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
 
   // Calculate and set max stack object alignment early, so we can decide
   // whether we will need stack realignment (and thus FP).
-  unsigned MaxAlign = std::max(MFI->getMaxAlignment(),
-                               calculateMaxStackAlignment(MFI));
-
-  MFI->setMaxAlignment(MaxAlign);
+  MFI->calculateMaxStackAlignment();
 
   X86MachineFunctionInfo *X86FI = MF.getInfo<X86MachineFunctionInfo>();
   int32_t TailCallReturnAddrDelta = X86FI->getTCReturnAddrDelta();
@@ -741,7 +723,7 @@ void mergeSPUpdatesDown(MachineBasicBlock &MBB,
 
   if (MBBI == MBB.end()) return;
 
-  MachineBasicBlock::iterator NI = next(MBBI);
+  MachineBasicBlock::iterator NI = llvm::next(MBBI);
   if (NI == MBB.end()) return;
 
   unsigned Opc = NI->getOpcode();
@@ -775,7 +757,7 @@ static int mergeSPUpdates(MachineBasicBlock &MBB,
     return 0;
 
   MachineBasicBlock::iterator PI = doMergeWithPrevious ? prior(MBBI) : MBBI;
-  MachineBasicBlock::iterator NI = doMergeWithPrevious ? 0 : next(MBBI);
+  MachineBasicBlock::iterator NI = doMergeWithPrevious ? 0 : llvm::next(MBBI);
   unsigned Opc = PI->getOpcode();
   int Offset = 0;
 
@@ -1001,7 +983,7 @@ void X86RegisterInfo::emitPrologue(MachineFunction &MF) const {
     }
 
     // Mark the FramePtr as live-in in every block except the entry.
-    for (MachineFunction::iterator I = next(MF.begin()), E = MF.end();
+    for (MachineFunction::iterator I = llvm::next(MF.begin()), E = MF.end();
          I != E; ++I)
       I->addLiveIn(FramePtr);
 
@@ -1262,7 +1244,7 @@ void X86RegisterInfo::emitEpilogue(MachineFunction &MF,
     else if (RetOpcode== X86::TCRETURNri64)
       BuildMI(MBB, MBBI, DL, TII.get(X86::TAILJMPr64), JumpTarget.getReg());
     else
-       BuildMI(MBB, MBBI, DL, TII.get(X86::TAILJMPr), JumpTarget.getReg());
+      BuildMI(MBB, MBBI, DL, TII.get(X86::TAILJMPr), JumpTarget.getReg());
 
     // Delete the pseudo instruction TCRETURN.
     MBB.erase(MBBI);
@@ -1482,45 +1464,3 @@ unsigned getX86SubSuperRegister(unsigned Reg, EVT VT, bool High) {
 }
 
 #include "X86GenRegisterInfo.inc"
-
-namespace {
-  struct MSAC : public MachineFunctionPass {
-    static char ID;
-    MSAC() : MachineFunctionPass(&ID) {}
-
-    virtual bool runOnMachineFunction(MachineFunction &MF) {
-      MachineFrameInfo *FFI = MF.getFrameInfo();
-      MachineRegisterInfo &RI = MF.getRegInfo();
-
-      // Calculate max stack alignment of all already allocated stack objects.
-      unsigned MaxAlign = calculateMaxStackAlignment(FFI);
-
-      // Be over-conservative: scan over all vreg defs and find, whether vector
-      // registers are used. If yes - there is probability, that vector register
-      // will be spilled and thus stack needs to be aligned properly.
-      for (unsigned RegNum = TargetRegisterInfo::FirstVirtualRegister;
-           RegNum < RI.getLastVirtReg(); ++RegNum)
-        MaxAlign = std::max(MaxAlign, RI.getRegClass(RegNum)->getAlignment());
-
-      if (FFI->getMaxAlignment() == MaxAlign)
-        return false;
-
-      FFI->setMaxAlignment(MaxAlign);
-      return true;
-    }
-
-    virtual const char *getPassName() const {
-      return "X86 Maximal Stack Alignment Calculator";
-    }
-
-    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
-      AU.setPreservesCFG();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
-  };
-
-  char MSAC::ID = 0;
-}
-
-FunctionPass*
-llvm::createX86MaxStackAlignmentCalculatorPass() { return new MSAC(); }
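
For context, the removed MSAC pass scanned the live frame objects (and virtual-register classes) for the maximum required alignment; that work now happens in the target-independent createMaxStackAlignmentCalculatorPass. A plain C++ sketch of the scan, with dead object indices modelled as zero entries (hypothetical representation, not MachineFrameInfo):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the alignment scan the removed pass performed: take the
// maximum alignment over all live stack objects, skipping dead ones
// (modelled here as entries equal to 0).
unsigned maxStackAlignment(const std::vector<unsigned> &objectAligns) {
    unsigned maxAlign = 0;
    for (unsigned a : objectAligns)
        if (a != 0)                       // skip dead object indices
            maxAlign = std::max(maxAlign, a);
    return maxAlign;
}
```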
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp
index 661f560..75cdbad 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp
@@ -367,5 +367,5 @@ bool X86Subtarget::enablePostRAScheduler(
             RegClassVector& CriticalPathRCs) const {
   Mode = TargetSubtarget::ANTIDEP_CRITICAL;
   CriticalPathRCs.clear();
-  return OptLevel >= CodeGenOpt::Default;
+  return OptLevel >= CodeGenOpt::Aggressive;
 }
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
index 0cda8bc..0152121 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
@@ -163,7 +163,7 @@ bool X86TargetMachine::addPreRegAlloc(PassManagerBase &PM,
                                       CodeGenOpt::Level OptLevel) {
   // Calculate and set max stack object alignment early, so we can decide
   // whether we will need stack realignment (and thus FP).
-  PM.add(createX86MaxStackAlignmentCalculatorPass());
+  PM.add(createMaxStackAlignmentCalculatorPass());
   return false;  // -print-machineinstr shouldn't print after this.
 }
 
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp
index 4635d0e..1793bbf 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp
@@ -2493,29 +2493,28 @@ bool GlobalOpt::OptimizeGlobalAliases(Module &M) {
       Changed = true;
     }
 
-    // If the aliasee has internal linkage, give it the name and linkage
-    // of the alias, and delete the alias.  This turns:
-    //   define internal ... @f(...)
-    //   @a = alias ... @f
-    // into:
-    //   define ... @a(...)
-    if (!Target->hasLocalLinkage())
-      continue;
-
-    // The transform is only useful if the alias does not have internal linkage.
-    if (J->hasLocalLinkage())
-      continue;
+    // If the alias is externally visible, we may still be able to simplify it.
+    if (!J->hasLocalLinkage()) {
+      // If the aliasee has internal linkage, give it the name and linkage
+      // of the alias, and delete the alias.  This turns:
+      //   define internal ... @f(...)
+      //   @a = alias ... @f
+      // into:
+      //   define ... @a(...)
+      if (!Target->hasLocalLinkage())
+        continue;
 
-    // Do not perform the transform if multiple aliases potentially target the
-    // aliasee.  This check also ensures that it is safe to replace the section
-    // and other attributes of the aliasee with those of the alias.
-    if (!hasOneUse)
-      continue;
+      // Do not perform the transform if multiple aliases potentially target the
+      // aliasee.  This check also ensures that it is safe to replace the section
+      // and other attributes of the aliasee with those of the alias.
+      if (!hasOneUse)
+        continue;
 
-    // Give the aliasee the name, linkage and other attributes of the alias.
-    Target->takeName(J);
-    Target->setLinkage(J->getLinkage());
-    Target->GlobalValue::copyAttributesFrom(J);
+      // Give the aliasee the name, linkage and other attributes of the alias.
+      Target->takeName(J);
+      Target->setLinkage(J->getLinkage());
+      Target->GlobalValue::copyAttributesFrom(J);
+    }
 
     // Delete the alias.
     M.getAliasList().erase(J);
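
The restructured loop performs the rename only when three conditions line up: the alias is externally visible, the aliasee is internal, and no other alias targets it. A boolean sketch of that decision (hypothetical predicate names):

```cpp
#include <cassert>

// Condition sketch for the alias-folding transform above: alias @a of an
// internal @f may take over @f's name and linkage only in this one case.
bool canFoldAliasIntoAliasee(bool aliasHasLocalLinkage,
                             bool targetHasLocalLinkage,
                             bool targetHasOneUse) {
    if (aliasHasLocalLinkage) return false;   // only useful if alias is visible
    if (!targetHasLocalLinkage) return false; // aliasee must be internal
    return targetHasOneUse;                   // no competing aliases
}
```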
diff --git a/libclamav/c++/llvm/lib/Transforms/Instrumentation/MaximumSpanningTree.h b/libclamav/c++/llvm/lib/Transforms/Instrumentation/MaximumSpanningTree.h
index 2951dbc..829da6b 100644
--- a/libclamav/c++/llvm/lib/Transforms/Instrumentation/MaximumSpanningTree.h
+++ b/libclamav/c++/llvm/lib/Transforms/Instrumentation/MaximumSpanningTree.h
@@ -15,6 +15,7 @@
 #ifndef LLVM_ANALYSIS_MAXIMUMSPANNINGTREE_H
 #define LLVM_ANALYSIS_MAXIMUMSPANNINGTREE_H
 
+#include "llvm/BasicBlock.h"
 #include "llvm/ADT/EquivalenceClasses.h"
 #include <vector>
 #include <algorithm>
@@ -33,6 +34,18 @@ namespace llvm {
                       typename MaximumSpanningTree<CT>::EdgeWeight Y) const {
         if (X.second > Y.second) return true;
         if (X.second < Y.second) return false;
+        if (const BasicBlock *BBX = dyn_cast<BasicBlock>(X.first.first)) {
+          if (const BasicBlock *BBY = dyn_cast<BasicBlock>(Y.first.first)) {
+            if (BBX->size() > BBY->size()) return true;
+            if (BBX->size() < BBY->size()) return false;
+          }
+        }
+        if (const BasicBlock *BBX = dyn_cast<BasicBlock>(X.first.second)) {
+          if (const BasicBlock *BBY = dyn_cast<BasicBlock>(Y.first.second)) {
+            if (BBX->size() > BBY->size()) return true;
+            if (BBX->size() < BBY->size()) return false;
+          }
+        }
         return false;
       }
     };
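
The added comparisons give the edge ordering a deterministic tie-break on basic-block size, so equal-weight edges no longer sort in an unspecified, pointer-dependent order. A reduced sketch of the comparator (hypothetical Edge struct, not the template in the header):

```cpp
#include <cassert>

// Edge ordering sketch: heavier edges sort first; on equal weights the
// tie is broken by a stable secondary key (block size in the patch above)
// instead of being left unspecified.
struct Edge { unsigned weight; unsigned blockSize; };

bool heavier(const Edge &x, const Edge &y) {
    if (x.weight != y.weight) return x.weight > y.weight;
    return x.blockSize > y.blockSize; // deterministic tie-break
}
```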
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
index 9ca90c3..e4c4ae5 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
@@ -21,7 +21,6 @@
 #include "llvm/InlineAsm.h"
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Pass.h"
 #include "llvm/Analysis/ProfileInfo.h"
 #include "llvm/Target/TargetData.h"
@@ -563,7 +562,7 @@ static bool IsNonLocalValue(Value *V, BasicBlock *BB) {
   return false;
 }
 
-/// OptimizeMemoryInst - Load and Store Instructions have often have
+/// OptimizeMemoryInst - Load and Store Instructions often have
 /// addressing modes that can do significant amounts of computation.  As such,
 /// instruction selection will try to get the load or store to do as much
 /// computation as possible for the program.  The problem is that isel can only
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
index b0988b5..1cfde8f 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
@@ -85,9 +85,14 @@ static bool doesClobberMemory(Instruction *I) {
     return true;
   if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(I)) {
     switch (II->getIntrinsicID()) {
-    default: return false;
-    case Intrinsic::memset: case Intrinsic::memmove: case Intrinsic::memcpy:
-    case Intrinsic::init_trampoline: case Intrinsic::lifetime_end: return true;
+    default:
+      return false;
+    case Intrinsic::memset:
+    case Intrinsic::memmove:
+    case Intrinsic::memcpy:
+    case Intrinsic::init_trampoline:
+    case Intrinsic::lifetime_end:
+      return true;
     }
   }
   return false;
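
The reformatted switch is a pure classification: a fixed set of intrinsics is treated as memory-clobbering writes. A reduced sketch with a stand-in enum (not the real llvm::Intrinsic IDs):

```cpp
#include <cassert>

// Classification sketch for the switch above: which intrinsic kinds DSE
// treats as writes that may clobber memory (hypothetical enum values).
enum class IntrKind { MemSet, MemMove, MemCpy, InitTrampoline,
                      LifetimeEnd, LifetimeStart, Other };

bool clobbersMemory(IntrKind k) {
    switch (k) {
    case IntrKind::MemSet:
    case IntrKind::MemMove:
    case IntrKind::MemCpy:
    case IntrKind::InitTrampoline:
    case IntrKind::LifetimeEnd:
        return true;
    default:
        return false;
    }
}
```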
@@ -111,14 +116,13 @@ static Value *getPointerOperand(Instruction *I) {
     return SI->getPointerOperand();
   if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(I))
     return MI->getOperand(1);
-  IntrinsicInst *II = cast<IntrinsicInst>(I);
-  switch (II->getIntrinsicID()) {
-    default:
-      assert(false && "Unexpected intrinsic!");
-    case Intrinsic::init_trampoline:
-      return II->getOperand(1);
-    case Intrinsic::lifetime_end:
-      return II->getOperand(2);
+  
+  switch (cast<IntrinsicInst>(I)->getIntrinsicID()) {
+  default: assert(false && "Unexpected intrinsic!");
+  case Intrinsic::init_trampoline:
+    return I->getOperand(1);
+  case Intrinsic::lifetime_end:
+    return I->getOperand(2);
   }
 }
 
@@ -135,15 +139,13 @@ static unsigned getStoreSize(Instruction *I, const TargetData *TD) {
   if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(I)) {
     Len = MI->getLength();
   } else {
-    IntrinsicInst *II = cast<IntrinsicInst>(I);
-    switch (II->getIntrinsicID()) {
-      default:
-        assert(false && "Unexpected intrinsic!");
-      case Intrinsic::init_trampoline:
-        return -1u;
-      case Intrinsic::lifetime_end:
-        Len = II->getOperand(1);
-        break;
+    switch (cast<IntrinsicInst>(I)->getIntrinsicID()) {
+    default: assert(false && "Unexpected intrinsic!");
+    case Intrinsic::init_trampoline:
+      return -1u;
+    case Intrinsic::lifetime_end:
+      Len = I->getOperand(1);
+      break;
     }
   }
   if (ConstantInt *LenCI = dyn_cast<ConstantInt>(Len))
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
index 72eb900..dcc9dd4 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
@@ -31,15 +31,18 @@
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/AliasAnalysis.h"
+#include "llvm/Analysis/ConstantFolding.h"
+#include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Analysis/MemoryDependenceAnalysis.h"
+#include "llvm/Analysis/PHITransAddr.h"
 #include "llvm/Support/CFG.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/GetElementPtrTypeIterator.h"
+#include "llvm/Support/IRBuilder.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
@@ -488,21 +491,21 @@ uint32_t ValueTable::lookup_or_add_call(CallInst* C) {
     // Check to see if we have a single dominating call instruction that is
     // identical to C.
     for (unsigned i = 0, e = deps.size(); i != e; ++i) {
-      const MemoryDependenceAnalysis::NonLocalDepEntry *I = &deps[i];
+      const NonLocalDepEntry *I = &deps[i];
       // Ignore non-local dependencies.
-      if (I->second.isNonLocal())
+      if (I->getResult().isNonLocal())
         continue;
 
       // We don't handle non-depedencies.  If we already have a call, reject
       // instruction dependencies.
-      if (I->second.isClobber() || cdep != 0) {
+      if (I->getResult().isClobber() || cdep != 0) {
         cdep = 0;
         break;
       }
 
-      CallInst *NonLocalDepCall = dyn_cast<CallInst>(I->second.getInst());
+      CallInst *NonLocalDepCall = dyn_cast<CallInst>(I->getResult().getInst());
       // FIXME: All duplicated with non-local case.
-      if (NonLocalDepCall && DT->properlyDominates(I->first, C->getParent())){
+      if (NonLocalDepCall && DT->properlyDominates(I->getBB(), C->getParent())){
         cdep = NonLocalDepCall;
         continue;
       }
@@ -987,27 +990,27 @@ static Value *GetBaseWithConstantOffset(Value *Ptr, int64_t &Offset,
 }
 
 
-/// AnalyzeLoadFromClobberingStore - This function is called when we have a
-/// memdep query of a load that ends up being a clobbering store.  This means
-/// that the store *may* provide bits used by the load but we can't be sure
-/// because the pointers don't mustalias.  Check this case to see if there is
-/// anything more we can do before we give up.  This returns -1 if we have to
-/// give up, or a byte number in the stored value of the piece that feeds the
-/// load.
-static int AnalyzeLoadFromClobberingStore(LoadInst *L, StoreInst *DepSI,
+/// AnalyzeLoadFromClobberingWrite - This function is called when we have a
+/// memdep query of a load that ends up being a clobbering memory write (store,
+/// memset, memcpy, memmove).  This means that the write *may* provide bits used
+/// by the load but we can't be sure because the pointers don't mustalias.
+///
+/// Check this case to see if there is anything more we can do before we give
+/// up.  This returns -1 if we have to give up, or a byte number in the stored
+/// value of the piece that feeds the load.
+static int AnalyzeLoadFromClobberingWrite(const Type *LoadTy, Value *LoadPtr,
+                                          Value *WritePtr,
+                                          uint64_t WriteSizeInBits,
                                           const TargetData &TD) {
   // If the loaded or stored value is a first-class array or struct, don't try
   // to transform them.  We need to be able to bitcast to integer.
-  if (isa<StructType>(L->getType()) || isa<ArrayType>(L->getType()) ||
-      isa<StructType>(DepSI->getOperand(0)->getType()) ||
-      isa<ArrayType>(DepSI->getOperand(0)->getType()))
+  if (isa<StructType>(LoadTy) || isa<ArrayType>(LoadTy))
     return -1;
   
   int64_t StoreOffset = 0, LoadOffset = 0;
-  Value *StoreBase = 
-    GetBaseWithConstantOffset(DepSI->getPointerOperand(), StoreOffset, TD);
+  Value *StoreBase = GetBaseWithConstantOffset(WritePtr, StoreOffset, TD);
   Value *LoadBase = 
-    GetBaseWithConstantOffset(L->getPointerOperand(), LoadOffset, TD);
+    GetBaseWithConstantOffset(LoadPtr, LoadOffset, TD);
   if (StoreBase != LoadBase)
     return -1;
   
@@ -1018,12 +1021,10 @@ static int AnalyzeLoadFromClobberingStore(LoadInst *L, StoreInst *DepSI,
 #if 0
     errs() << "STORE/LOAD DEP WITH COMMON POINTER MISSED:\n"
     << "Base       = " << *StoreBase << "\n"
-    << "Store Ptr  = " << *DepSI->getPointerOperand() << "\n"
-    << "Store Offs = " << StoreOffset << " - " << *DepSI << "\n"
-    << "Load Ptr   = " << *L->getPointerOperand() << "\n"
-    << "Load Offs  = " << LoadOffset << " - " << *L << "\n\n";
-    errs() << "'" << L->getParent()->getParent()->getName() << "'"
-    << *L->getParent();
+    << "Store Ptr  = " << *WritePtr << "\n"
+    << "Store Offs = " << StoreOffset << "\n"
+    << "Load Ptr   = " << *LoadPtr << "\n";
+    abort();
 #endif
     return -1;
   }
@@ -1033,12 +1034,11 @@ static int AnalyzeLoadFromClobberingStore(LoadInst *L, StoreInst *DepSI,
   // must have gotten confused.
   // FIXME: Investigate cases where this bails out, e.g. rdar://7238614. Then
   // remove this check, as it is duplicated with what we have below.
-  uint64_t StoreSize = TD.getTypeSizeInBits(DepSI->getOperand(0)->getType());
-  uint64_t LoadSize = TD.getTypeSizeInBits(L->getType());
+  uint64_t LoadSize = TD.getTypeSizeInBits(LoadTy);
   
-  if ((StoreSize & 7) | (LoadSize & 7))
+  if ((WriteSizeInBits & 7) | (LoadSize & 7))
     return -1;
-  StoreSize >>= 3;  // Convert to bytes.
+  uint64_t StoreSize = WriteSizeInBits >> 3;  // Convert to bytes.
   LoadSize >>= 3;
   
   
@@ -1052,12 +1052,10 @@ static int AnalyzeLoadFromClobberingStore(LoadInst *L, StoreInst *DepSI,
 #if 0
     errs() << "STORE LOAD DEP WITH COMMON BASE:\n"
     << "Base       = " << *StoreBase << "\n"
-    << "Store Ptr  = " << *DepSI->getPointerOperand() << "\n"
-    << "Store Offs = " << StoreOffset << " - " << *DepSI << "\n"
-    << "Load Ptr   = " << *L->getPointerOperand() << "\n"
-    << "Load Offs  = " << LoadOffset << " - " << *L << "\n\n";
-    errs() << "'" << L->getParent()->getParent()->getName() << "'"
-    << *L->getParent();
+    << "Store Ptr  = " << *WritePtr << "\n"
+    << "Store Offs = " << StoreOffset << "\n"
+    << "Load Ptr   = " << *LoadPtr << "\n";
+    abort();
 #endif
     return -1;
   }
@@ -1075,6 +1073,66 @@ static int AnalyzeLoadFromClobberingStore(LoadInst *L, StoreInst *DepSI,
   return LoadOffset-StoreOffset;
 }  
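The covering check above reduces to simple byte arithmetic once both accesses have been rebased onto a common pointer. A minimal sketch of that containment test, using a hypothetical helper `clobberOffset` (not an LLVM API); the real function additionally rejects aggregates and non-byte-sized accesses:

```cpp
#include <cassert>
#include <cstdint>

// Simplified sketch of AnalyzeLoadFromClobberingWrite's final step: the
// load reads bytes [LoadOffset, LoadOffset+LoadSize) and the write covers
// [StoreOffset, StoreOffset+StoreSize), both relative to the same base.
// Return the byte offset of the loaded piece inside the stored value, or
// -1 if the write does not fully cover the load.
int clobberOffset(int64_t StoreOffset, uint64_t StoreSize,
                  int64_t LoadOffset, uint64_t LoadSize) {
  if (LoadOffset < StoreOffset)                 // load starts before write
    return -1;
  if (LoadOffset + (int64_t)LoadSize >
      StoreOffset + (int64_t)StoreSize)         // load runs past write end
    return -1;
  return (int)(LoadOffset - StoreOffset);       // offset within stored value
}
```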
 
+/// AnalyzeLoadFromClobberingStore - This function is called when we have a
+/// memdep query of a load that ends up being a clobbering store.
+static int AnalyzeLoadFromClobberingStore(const Type *LoadTy, Value *LoadPtr,
+                                          StoreInst *DepSI,
+                                          const TargetData &TD) {
+  // Cannot handle reading from store of first-class aggregate yet.
+  if (isa<StructType>(DepSI->getOperand(0)->getType()) ||
+      isa<ArrayType>(DepSI->getOperand(0)->getType()))
+    return -1;
+
+  Value *StorePtr = DepSI->getPointerOperand();
+  uint64_t StoreSize = TD.getTypeSizeInBits(DepSI->getOperand(0)->getType());
+  return AnalyzeLoadFromClobberingWrite(LoadTy, LoadPtr,
+                                        StorePtr, StoreSize, TD);
+}
+
+static int AnalyzeLoadFromClobberingMemInst(const Type *LoadTy, Value *LoadPtr,
+                                            MemIntrinsic *MI,
+                                            const TargetData &TD) {
+  // If the mem operation has a non-constant size, we can't handle it.
+  ConstantInt *SizeCst = dyn_cast<ConstantInt>(MI->getLength());
+  if (SizeCst == 0) return -1;
+  uint64_t MemSizeInBits = SizeCst->getZExtValue()*8;
+
+  // If this is memset, we just need to see if the offset is valid in the size
+  // of the memset.
+  if (MI->getIntrinsicID() == Intrinsic::memset)
+    return AnalyzeLoadFromClobberingWrite(LoadTy, LoadPtr, MI->getDest(),
+                                          MemSizeInBits, TD);
+  
+  // If we have a memcpy/memmove, the only case we can handle is if this is a
+  // copy from constant memory.  In that case, we can read directly from the
+  // constant memory.
+  MemTransferInst *MTI = cast<MemTransferInst>(MI);
+  
+  Constant *Src = dyn_cast<Constant>(MTI->getSource());
+  if (Src == 0) return -1;
+  
+  GlobalVariable *GV = dyn_cast<GlobalVariable>(Src->getUnderlyingObject());
+  if (GV == 0 || !GV->isConstant()) return -1;
+  
+  // See if the access is within the bounds of the transfer.
+  int Offset = AnalyzeLoadFromClobberingWrite(LoadTy, LoadPtr,
+                                              MI->getDest(), MemSizeInBits, TD);
+  if (Offset == -1)
+    return Offset;
+  
+  // Otherwise, see if we can constant fold a load from the constant with the
+  // offset applied as appropriate.
+  Src = ConstantExpr::getBitCast(Src,
+                                 llvm::Type::getInt8PtrTy(Src->getContext()));
+  Constant *OffsetCst = 
+    ConstantInt::get(Type::getInt64Ty(Src->getContext()), (unsigned)Offset);
+  Src = ConstantExpr::getGetElementPtr(Src, &OffsetCst, 1);
+  Src = ConstantExpr::getBitCast(Src, PointerType::getUnqual(LoadTy));
+  if (ConstantFoldLoadFromConstPtr(Src, &TD))
+    return Offset;
+  return -1;
+}
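The memcpy/memmove case rests on a simple observation: if the copy's source points into constant memory and the copy fully provides the loaded bytes, the load's value can be read straight from the source. A hedged C++ restatement of that idea (the constant array and helper name are illustrative, not part of LLVM):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

static const uint8_t kConstSrc[8] = {1, 2, 3, 4, 5, 6, 7, 8};

// A load clobbered by memcpy(dst, kConstSrc, 8) is fully provided by the
// copy, so dst[offset] must equal kConstSrc[offset] -- GVN can fold the
// load to a constant without ever touching dst.
uint8_t loadAfterMemcpy(unsigned offset) {
  uint8_t dst[8];
  std::memcpy(dst, kConstSrc, sizeof(kConstSrc)); // clobbering write
  return dst[offset];                             // folds to kConstSrc[offset]
}
```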
+                                            
 
 /// GetStoreValueForLoad - This function is called when we have a
 /// memdep query of a load that ends up being a clobbering store.  This means
@@ -1089,50 +1147,134 @@ static Value *GetStoreValueForLoad(Value *SrcVal, unsigned Offset,
   uint64_t StoreSize = TD.getTypeSizeInBits(SrcVal->getType())/8;
   uint64_t LoadSize = TD.getTypeSizeInBits(LoadTy)/8;
   
+  IRBuilder<> Builder(InsertPt->getParent(), InsertPt);
   
   // Compute which bits of the stored value are being used by the load.  Convert
   // to an integer type to start with.
   if (isa<PointerType>(SrcVal->getType()))
-    SrcVal = new PtrToIntInst(SrcVal, TD.getIntPtrType(Ctx), "tmp", InsertPt);
+    SrcVal = Builder.CreatePtrToInt(SrcVal, TD.getIntPtrType(Ctx), "tmp");
   if (!isa<IntegerType>(SrcVal->getType()))
-    SrcVal = new BitCastInst(SrcVal, IntegerType::get(Ctx, StoreSize*8),
-                             "tmp", InsertPt);
+    SrcVal = Builder.CreateBitCast(SrcVal, IntegerType::get(Ctx, StoreSize*8),
+                                   "tmp");
   
   // Shift the bits to the least significant depending on endianness.
   unsigned ShiftAmt;
-  if (TD.isLittleEndian()) {
+  if (TD.isLittleEndian())
     ShiftAmt = Offset*8;
-  } else {
+  else
     ShiftAmt = (StoreSize-LoadSize-Offset)*8;
-  }
   
   if (ShiftAmt)
-    SrcVal = BinaryOperator::CreateLShr(SrcVal,
-                ConstantInt::get(SrcVal->getType(), ShiftAmt), "tmp", InsertPt);
+    SrcVal = Builder.CreateLShr(SrcVal, ShiftAmt, "tmp");
   
   if (LoadSize != StoreSize)
-    SrcVal = new TruncInst(SrcVal, IntegerType::get(Ctx, LoadSize*8),
-                           "tmp", InsertPt);
+    SrcVal = Builder.CreateTrunc(SrcVal, IntegerType::get(Ctx, LoadSize*8),
+                                 "tmp");
   
   return CoerceAvailableValueToLoadType(SrcVal, LoadTy, InsertPt, TD);
 }
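The shift-and-truncate sequence built by the IRBuilder calls above can be previewed in ordinary C++. A sketch for the little-endian case only (on big-endian targets the shift amount is `(StoreSize-LoadSize-Offset)*8` instead); the helper name is mine, not LLVM's:

```cpp
#include <cassert>
#include <cstdint>

// Little-endian sketch of GetStoreValueForLoad's bit surgery: shift the
// stored value right by Offset bytes (lshr), then truncate to the load's
// width, yielding exactly the bytes the load would observe.
uint32_t extractLoadedBitsLE(uint64_t StoredVal, unsigned OffsetBytes,
                             unsigned LoadSizeBytes) {
  uint64_t Shifted = StoredVal >> (OffsetBytes * 8);        // lshr
  uint64_t Mask = (LoadSizeBytes >= 8)
                      ? ~0ULL
                      : ((1ULL << (LoadSizeBytes * 8)) - 1); // trunc
  return (uint32_t)(Shifted & Mask);
}
```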
 
+/// GetMemInstValueForLoad - This function is called when we have a
+/// memdep query of a load that ends up being a clobbering mem intrinsic.
+static Value *GetMemInstValueForLoad(MemIntrinsic *SrcInst, unsigned Offset,
+                                     const Type *LoadTy, Instruction *InsertPt,
+                                     const TargetData &TD){
+  LLVMContext &Ctx = LoadTy->getContext();
+  uint64_t LoadSize = TD.getTypeSizeInBits(LoadTy)/8;
+
+  IRBuilder<> Builder(InsertPt->getParent(), InsertPt);
+  
+  // We know that this method is only called when the mem transfer fully
+  // provides the bits for the load.
+  if (MemSetInst *MSI = dyn_cast<MemSetInst>(SrcInst)) {
+    // memset(P, 'x', 1234) -> splat('x'), even if x is a variable, and
+    // independently of what the offset is.
+    Value *Val = MSI->getValue();
+    if (LoadSize != 1)
+      Val = Builder.CreateZExt(Val, IntegerType::get(Ctx, LoadSize*8));
+    
+    Value *OneElt = Val;
+    
+    // Splat the value out to the right number of bits.
+    for (unsigned NumBytesSet = 1; NumBytesSet != LoadSize; ) {
+      // If we can double the number of bytes set, do it.
+      if (NumBytesSet*2 <= LoadSize) {
+        Value *ShVal = Builder.CreateShl(Val, NumBytesSet*8);
+        Val = Builder.CreateOr(Val, ShVal);
+        NumBytesSet <<= 1;
+        continue;
+      }
+      
+      // Otherwise insert one byte at a time.
+      Value *ShVal = Builder.CreateShl(Val, 1*8);
+      Val = Builder.CreateOr(OneElt, ShVal);
+      ++NumBytesSet;
+    }
+    
+    return CoerceAvailableValueToLoadType(Val, LoadTy, InsertPt, TD);
+  }
+ 
+  // Otherwise, this is a memcpy/memmove from a constant global.
+  MemTransferInst *MTI = cast<MemTransferInst>(SrcInst);
+  Constant *Src = cast<Constant>(MTI->getSource());
+
+  // Otherwise, see if we can constant fold a load from the constant with the
+  // offset applied as appropriate.
+  Src = ConstantExpr::getBitCast(Src,
+                                 llvm::Type::getInt8PtrTy(Src->getContext()));
+  Constant *OffsetCst = 
+  ConstantInt::get(Type::getInt64Ty(Src->getContext()), (unsigned)Offset);
+  Src = ConstantExpr::getGetElementPtr(Src, &OffsetCst, 1);
+  Src = ConstantExpr::getBitCast(Src, PointerType::getUnqual(LoadTy));
+  return ConstantFoldLoadFromConstPtr(Src, &TD);
+}
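The memset splat loop above doubles the filled width each step (log-time for power-of-two sizes) and falls back to one byte at a time for the remainder. A hedged stand-alone translation of the same loop, with a constant byte instead of an IR `Value`:

```cpp
#include <cassert>
#include <cstdint>

// Replicate one byte across LoadSize bytes, mirroring the splat loop in
// GetMemInstValueForLoad: OR in a copy shifted by the bytes already set,
// doubling while that still fits, then append single bytes.
uint64_t splatByte(uint8_t Byte, unsigned LoadSize) {
  uint64_t Val = Byte;                         // zext to the load's width
  uint64_t OneElt = Val;
  for (unsigned NumBytesSet = 1; NumBytesSet != LoadSize; ) {
    if (NumBytesSet * 2 <= LoadSize) {         // double the set bytes
      Val |= Val << (NumBytesSet * 8);
      NumBytesSet <<= 1;
      continue;
    }
    Val = OneElt | (Val << 8);                 // otherwise add one byte
    ++NumBytesSet;
  }
  return Val;
}
```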
+
+
+
 struct AvailableValueInBlock {
   /// BB - The basic block in question.
   BasicBlock *BB;
+  enum ValType {
+    SimpleVal,  // A simple offsetted value that is accessed.
+    MemIntrin   // A memory intrinsic which is loaded from.
+  };
+  
   /// V - The value that is live out of the block.
-  Value *V;
-  /// Offset - The byte offset in V that is interesting for the load query.
+  PointerIntPair<Value *, 1, ValType> Val;
+  
+  /// Offset - The byte offset in Val that is interesting for the load query.
   unsigned Offset;
   
   static AvailableValueInBlock get(BasicBlock *BB, Value *V,
                                    unsigned Offset = 0) {
     AvailableValueInBlock Res;
     Res.BB = BB;
-    Res.V = V;
+    Res.Val.setPointer(V);
+    Res.Val.setInt(SimpleVal);
     Res.Offset = Offset;
     return Res;
   }
+
+  static AvailableValueInBlock getMI(BasicBlock *BB, MemIntrinsic *MI,
+                                     unsigned Offset = 0) {
+    AvailableValueInBlock Res;
+    Res.BB = BB;
+    Res.Val.setPointer(MI);
+    Res.Val.setInt(MemIntrin);
+    Res.Offset = Offset;
+    return Res;
+  }
+  
+  bool isSimpleValue() const { return Val.getInt() == SimpleVal; }
+  Value *getSimpleValue() const {
+    assert(isSimpleValue() && "Wrong accessor");
+    return Val.getPointer();
+  }
+  
+  MemIntrinsic *getMemIntrinValue() const {
+    assert(!isSimpleValue() && "Wrong accessor");
+    return cast<MemIntrinsic>(Val.getPointer());
+  }
 };
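The `PointerIntPair<Value*, 1, ValType>` member stores the SimpleVal/MemIntrin discriminator in the low alignment bit of the pointer, so the struct costs no extra word. A hedged stand-in illustrating the same packing trick (this is not LLVM's implementation, which also uses pointer-traits machinery):

```cpp
#include <cassert>
#include <cstdint>

// Pack a 1-bit tag into the low bit of an (at least 2-byte aligned)
// pointer, in the spirit of llvm::PointerIntPair.
struct TaggedPtr {
  uintptr_t Bits = 0;
  void set(void *P, unsigned Tag) {
    uintptr_t V = (uintptr_t)P;
    assert((V & 1) == 0 && "pointer must be at least 2-byte aligned");
    assert(Tag < 2 && "only one tag bit available");
    Bits = V | Tag;
  }
  void *pointer() const { return (void *)(Bits & ~(uintptr_t)1); }
  unsigned tag() const { return (unsigned)(Bits & 1); }
};

// Round-trip check: storing then reading back preserves both fields.
inline bool selfTest(unsigned Tag) {
  static int dummy;
  TaggedPtr T;
  T.set(&dummy, Tag);
  return T.pointer() == &dummy && T.tag() == Tag;
}
```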
 
 /// ConstructSSAForLoadSet - Given a set of loads specified by ValuesPerBlock,
@@ -1149,30 +1291,33 @@ static Value *ConstructSSAForLoadSet(LoadInst *LI,
   const Type *LoadTy = LI->getType();
   
   for (unsigned i = 0, e = ValuesPerBlock.size(); i != e; ++i) {
-    BasicBlock *BB = ValuesPerBlock[i].BB;
-    Value *AvailableVal = ValuesPerBlock[i].V;
-    unsigned Offset = ValuesPerBlock[i].Offset;
+    const AvailableValueInBlock &AV = ValuesPerBlock[i];
+    BasicBlock *BB = AV.BB;
     
     if (SSAUpdate.HasValueForBlock(BB))
       continue;
-    
-    if (AvailableVal->getType() != LoadTy) {
-      assert(TD && "Need target data to handle type mismatch case");
-      AvailableVal = GetStoreValueForLoad(AvailableVal, Offset, LoadTy,
-                                          BB->getTerminator(), *TD);
-      
-      if (Offset) {
-        DEBUG(errs() << "GVN COERCED NONLOCAL VAL:\n"
-              << *ValuesPerBlock[i].V << '\n'
+
+    unsigned Offset = AV.Offset;
+
+    Value *AvailableVal;
+    if (AV.isSimpleValue()) {
+      AvailableVal = AV.getSimpleValue();
+      if (AvailableVal->getType() != LoadTy) {
+        assert(TD && "Need target data to handle type mismatch case");
+        AvailableVal = GetStoreValueForLoad(AvailableVal, Offset, LoadTy,
+                                            BB->getTerminator(), *TD);
+        
+        DEBUG(errs() << "GVN COERCED NONLOCAL VAL:\nOffset: " << Offset << "  "
+              << *AV.getSimpleValue() << '\n'
               << *AvailableVal << '\n' << "\n\n\n");
       }
-      
-      
-      DEBUG(errs() << "GVN COERCED NONLOCAL VAL:\n"
-            << *ValuesPerBlock[i].V << '\n'
+    } else {
+      AvailableVal = GetMemInstValueForLoad(AV.getMemIntrinValue(), Offset,
+                                            LoadTy, BB->getTerminator(), *TD);
+      DEBUG(errs() << "GVN COERCED NONLOCAL MEM INTRIN:\nOffset: " << Offset
+            << "  " << *AV.getMemIntrinValue() << '\n'
             << *AvailableVal << '\n' << "\n\n\n");
     }
-    
     SSAUpdate.AddAvailableValue(BB, AvailableVal);
   }
   
@@ -1187,12 +1332,18 @@ static Value *ConstructSSAForLoadSet(LoadInst *LI,
   return V;
 }
 
+static bool isLifetimeStart(Instruction *Inst) {
+  if (IntrinsicInst* II = dyn_cast<IntrinsicInst>(Inst))
+    return II->getIntrinsicID() == Intrinsic::lifetime_start;
+  return false;
+}
+
 /// processNonLocalLoad - Attempt to eliminate a load whose dependencies are
 /// non-local by performing PHI construction.
 bool GVN::processNonLocalLoad(LoadInst *LI,
                               SmallVectorImpl<Instruction*> &toErase) {
   // Find the non-local dependencies of the load.
-  SmallVector<MemoryDependenceAnalysis::NonLocalDepEntry, 64> Deps;
+  SmallVector<NonLocalDepEntry, 64> Deps;
   MD->getNonLocalPointerDependency(LI->getOperand(0), true, LI->getParent(),
                                    Deps);
   //DEBUG(errs() << "INVESTIGATING NONLOCAL LOAD: "
@@ -1206,11 +1357,11 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
 
   // If we had a phi translation failure, we'll have a single entry which is a
   // clobber in the current block.  Reject this early.
-  if (Deps.size() == 1 && Deps[0].second.isClobber()) {
+  if (Deps.size() == 1 && Deps[0].getResult().isClobber()) {
     DEBUG(
       errs() << "GVN: non-local load ";
       WriteAsOperand(errs(), LI);
-      errs() << " is clobbered by " << *Deps[0].second.getInst() << '\n';
+      errs() << " is clobbered by " << *Deps[0].getResult().getInst() << '\n';
     );
     return false;
   }
@@ -1225,18 +1376,24 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
   const TargetData *TD = 0;
   
   for (unsigned i = 0, e = Deps.size(); i != e; ++i) {
-    BasicBlock *DepBB = Deps[i].first;
-    MemDepResult DepInfo = Deps[i].second;
+    BasicBlock *DepBB = Deps[i].getBB();
+    MemDepResult DepInfo = Deps[i].getResult();
 
     if (DepInfo.isClobber()) {
+      // The address being loaded in this non-local block may not be the same as
+      // the pointer operand of the load if PHI translation occurs.  Make sure
+      // to consider the right address.
+      Value *Address = Deps[i].getAddress();
+      
       // If the dependence is to a store that writes to a superset of the bits
       // read by the load, we can extract the bits we need for the load from the
       // stored value.
       if (StoreInst *DepSI = dyn_cast<StoreInst>(DepInfo.getInst())) {
         if (TD == 0)
           TD = getAnalysisIfAvailable<TargetData>();
-        if (TD) {
-          int Offset = AnalyzeLoadFromClobberingStore(LI, DepSI, *TD);
+        if (TD && Address) {
+          int Offset = AnalyzeLoadFromClobberingStore(LI->getType(), Address,
+                                                      DepSI, *TD);
           if (Offset != -1) {
             ValuesPerBlock.push_back(AvailableValueInBlock::get(DepBB,
                                                            DepSI->getOperand(0),
@@ -1245,8 +1402,23 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
           }
         }
       }
+
+      // If the clobbering value is a memset/memcpy/memmove, see if we can
+      // forward a value on from it.
+      if (MemIntrinsic *DepMI = dyn_cast<MemIntrinsic>(DepInfo.getInst())) {
+        if (TD == 0)
+          TD = getAnalysisIfAvailable<TargetData>();
+        if (TD && Address) {
+          int Offset = AnalyzeLoadFromClobberingMemInst(LI->getType(), Address,
+                                                        DepMI, *TD);
+          if (Offset != -1) {
+            ValuesPerBlock.push_back(AvailableValueInBlock::getMI(DepBB, DepMI,
+                                                                  Offset));
+            continue;
+          }            
+        }
+      }
       
-      // FIXME: Handle memset/memcpy.
       UnavailableBlocks.push_back(DepBB);
       continue;
     }
@@ -1254,21 +1426,14 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
     Instruction *DepInst = DepInfo.getInst();
 
     // Loading the allocation -> undef.
-    if (isa<AllocaInst>(DepInst) || isMalloc(DepInst)) {
+    if (isa<AllocaInst>(DepInst) || isMalloc(DepInst) ||
+        // Loading immediately after lifetime begin -> undef.
+        isLifetimeStart(DepInst)) {
       ValuesPerBlock.push_back(AvailableValueInBlock::get(DepBB,
                                              UndefValue::get(LI->getType())));
       continue;
     }
     
-    // Loading immediately after lifetime begin or end -> undef.
-    if (IntrinsicInst* II = dyn_cast<IntrinsicInst>(DepInst)) {
-      if (II->getIntrinsicID() == Intrinsic::lifetime_start ||
-          II->getIntrinsicID() == Intrinsic::lifetime_end) {
-        ValuesPerBlock.push_back(AvailableValueInBlock::get(DepBB,
-                                             UndefValue::get(LI->getType())));
-      }
-    }
-
     if (StoreInst *S = dyn_cast<StoreInst>(DepInst)) {
       // Reject loads and stores that are to the same address but are of
       // different types if we have to.
@@ -1378,19 +1543,25 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
   // to eliminate LI even if we insert uses in the other predecessors, we will
   // end up increasing code size.  Reject this by scanning for LI.
   for (unsigned i = 0, e = ValuesPerBlock.size(); i != e; ++i)
-    if (ValuesPerBlock[i].V == LI)
+    if (ValuesPerBlock[i].isSimpleValue() &&
+        ValuesPerBlock[i].getSimpleValue() == LI)
       return false;
 
+  // FIXME: It is extremely unclear what this loop is doing, other than
+  // artificially restricting loadpre.
   if (isSinglePred) {
     bool isHot = false;
-    for (unsigned i = 0, e = ValuesPerBlock.size(); i != e; ++i)
-      if (Instruction *I = dyn_cast<Instruction>(ValuesPerBlock[i].V))
+    for (unsigned i = 0, e = ValuesPerBlock.size(); i != e; ++i) {
+      const AvailableValueInBlock &AV = ValuesPerBlock[i];
+      if (AV.isSimpleValue())
         // "Hot" Instruction is in some loop (because it dominates its dep.
         // instruction).
-        if (DT->dominates(LI, I)) {
-          isHot = true;
-          break;
-        }
+        if (Instruction *I = dyn_cast<Instruction>(AV.getSimpleValue()))
+          if (DT->dominates(LI, I)) {
+            isHot = true;
+            break;
+          }
+    }
 
     // We are interested only in "hot" instructions. We don't want to do any
     // mis-optimizations here.
@@ -1432,31 +1603,45 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
     return false;
   }
   
-  // If the loaded pointer is PHI node defined in this block, do PHI translation
-  // to get its value in the predecessor.
-  Value *LoadPtr = MD->PHITranslatePointer(LI->getOperand(0),
-                                           LoadBB, UnavailablePred, TD);
-  // Make sure the value is live in the predecessor.  MemDep found a computation
-  // of LPInst with the right value, but that does not dominate UnavailablePred,
-  // then we can't use it.
-  if (Instruction *LPInst = dyn_cast_or_null<Instruction>(LoadPtr))
-    if (!DT->dominates(LPInst->getParent(), UnavailablePred))
-      LoadPtr = 0;
-
-  // If we don't have a computation of this phi translated value, try to insert
-  // one.
-  if (LoadPtr == 0) {
-    LoadPtr = MD->InsertPHITranslatedPointer(LI->getOperand(0),
-                                             LoadBB, UnavailablePred, TD);
-    if (LoadPtr == 0) {
-      DEBUG(errs() << "COULDN'T INSERT PHI TRANSLATED VALUE OF: "
-                   << *LI->getOperand(0) << "\n");
-      return false;
-    }
+  // Do PHI translation to get its value in the predecessor if necessary.  The
+  // returned pointer (if non-null) is guaranteed to dominate UnavailablePred.
+  //
+  SmallVector<Instruction*, 8> NewInsts;
+  
+  // If all preds have a single successor, then we know it is safe to insert the
+  // load on the pred (?!?), so we can insert code to materialize the pointer if
+  // it is not available.
+  PHITransAddr Address(LI->getOperand(0), TD);
+  Value *LoadPtr = 0;
+  if (allSingleSucc) {
+    LoadPtr = Address.PHITranslateWithInsertion(LoadBB, UnavailablePred,
+                                                *DT, NewInsts);
+  } else {
+    Address.PHITranslateValue(LoadBB, UnavailablePred);
+    LoadPtr = Address.getAddr();
     
-    // FIXME: This inserts a computation, but we don't tell scalar GVN
-    // optimization stuff about it.  How do we do this?
-    DEBUG(errs() << "INSERTED PHI TRANSLATED VALUE: " << *LoadPtr << "\n");
+    // Make sure the value is live in the predecessor.
+    if (Instruction *Inst = dyn_cast_or_null<Instruction>(LoadPtr))
+      if (!DT->dominates(Inst->getParent(), UnavailablePred))
+        LoadPtr = 0;
+  }
+
+  // If we couldn't find or insert a computation of this phi translated value,
+  // we fail PRE.
+  if (LoadPtr == 0) {
+    assert(NewInsts.empty() && "Shouldn't insert insts on failure");
+    DEBUG(errs() << "COULDN'T INSERT PHI TRANSLATED VALUE OF: "
+                 << *LI->getOperand(0) << "\n");
+    return false;
+  }
+
+  // Assign value numbers to these new instructions.
+  for (unsigned i = 0, e = NewInsts.size(); i != e; ++i) {
+    // FIXME: We really _ought_ to insert these value numbers into their 
+    // parent's availability map.  However, in doing so, we risk getting into
+    // ordering issues.  If a block hasn't been processed yet, we would be
+    // marking a value as AVAIL-IN, which isn't what we intend.
+    VN.lookup_or_add(NewInsts[i]);
   }
   
   // Make sure it is valid to move this load here.  We have to watch out for:
@@ -1469,14 +1654,20 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
   // we do not have this case.  Otherwise, check that the load is safe to
   // put anywhere; this can be improved, but should be conservatively safe.
   if (!allSingleSucc &&
-      !isSafeToLoadUnconditionally(LoadPtr, UnavailablePred->getTerminator()))
+      // FIXME: REEVALUATE THIS.
+      !isSafeToLoadUnconditionally(LoadPtr, UnavailablePred->getTerminator())) {
+    assert(NewInsts.empty() && "Should not have inserted instructions");
     return false;
+  }
 
   // Okay, we can eliminate this load by inserting a reload in the predecessor
   // and using PHI construction to get the value in the other predecessors, do
   // it.
   DEBUG(errs() << "GVN REMOVING PRE LOAD: " << *LI << '\n');
-
+  DEBUG(if (!NewInsts.empty())
+          errs() << "INSERTED " << NewInsts.size() << " INSTS: "
+                 << *NewInsts.back() << '\n');
+  
   Value *NewLoad = new LoadInst(LoadPtr, LI->getName()+".pre", false,
                                 LI->getAlignment(),
                                 UnavailablePred->getTerminator());
@@ -1511,11 +1702,6 @@ bool GVN::processLoad(LoadInst *L, SmallVectorImpl<Instruction*> &toErase) {
 
   // If the value isn't available, don't do anything!
   if (Dep.isClobber()) {
-    // FIXME: We should handle memset/memcpy/memmove as dependent instructions
-    // to forward the value if available.
-    //if (isa<MemIntrinsic>(Dep.getInst()))
-    //errs() << "LOAD DEPENDS ON MEM: " << *L << "\n" << *Dep.getInst()<<"\n\n";
-    
     // Check to see if we have something like this:
     //   store i32 123, i32* %P
     //   %A = bitcast i32* %P to i8*
@@ -1526,25 +1712,42 @@ bool GVN::processLoad(LoadInst *L, SmallVectorImpl<Instruction*> &toErase) {
     // a common base + constant offset, and if the previous store (or memset)
     // completely covers this load.  This sort of thing can happen in bitfield
     // access code.
+    Value *AvailVal = 0;
     if (StoreInst *DepSI = dyn_cast<StoreInst>(Dep.getInst()))
       if (const TargetData *TD = getAnalysisIfAvailable<TargetData>()) {
-        int Offset = AnalyzeLoadFromClobberingStore(L, DepSI, *TD);
-        if (Offset != -1) {
-          Value *AvailVal = GetStoreValueForLoad(DepSI->getOperand(0), Offset,
-                                                 L->getType(), L, *TD);
-          DEBUG(errs() << "GVN COERCED STORE BITS:\n" << *DepSI << '\n'
-                       << *AvailVal << '\n' << *L << "\n\n\n");
-    
-          // Replace the load!
-          L->replaceAllUsesWith(AvailVal);
-          if (isa<PointerType>(AvailVal->getType()))
-            MD->invalidateCachedPointerInfo(AvailVal);
-          toErase.push_back(L);
-          NumGVNLoad++;
-          return true;
-        }
+        int Offset = AnalyzeLoadFromClobberingStore(L->getType(),
+                                                    L->getPointerOperand(),
+                                                    DepSI, *TD);
+        if (Offset != -1)
+          AvailVal = GetStoreValueForLoad(DepSI->getOperand(0), Offset,
+                                          L->getType(), L, *TD);
       }
     
+    // If the clobbering value is a memset/memcpy/memmove, see if we can forward
+    // a value on from it.
+    if (MemIntrinsic *DepMI = dyn_cast<MemIntrinsic>(Dep.getInst())) {
+      if (const TargetData *TD = getAnalysisIfAvailable<TargetData>()) {
+        int Offset = AnalyzeLoadFromClobberingMemInst(L->getType(),
+                                                      L->getPointerOperand(),
+                                                      DepMI, *TD);
+        if (Offset != -1)
+          AvailVal = GetMemInstValueForLoad(DepMI, Offset, L->getType(), L,*TD);
+      }
+    }
+        
+    if (AvailVal) {
+      DEBUG(errs() << "GVN COERCED INST:\n" << *Dep.getInst() << '\n'
+            << *AvailVal << '\n' << *L << "\n\n\n");
+      
+      // Replace the load!
+      L->replaceAllUsesWith(AvailVal);
+      if (isa<PointerType>(AvailVal->getType()))
+        MD->invalidateCachedPointerInfo(AvailVal);
+      toErase.push_back(L);
+      NumGVNLoad++;
+      return true;
+    }
+        
     DEBUG(
       // fast print dep, using operator<< on instruction would be too slow
       errs() << "GVN: load ";
@@ -1629,11 +1832,10 @@ bool GVN::processLoad(LoadInst *L, SmallVectorImpl<Instruction*> &toErase) {
     return true;
   }
   
-  // If this load occurs either right after a lifetime begin or a lifetime end,
+  // If this load occurs right after a lifetime begin,
   // then the loaded value is undefined.
   if (IntrinsicInst* II = dyn_cast<IntrinsicInst>(DepInst)) {
-    if (II->getIntrinsicID() == Intrinsic::lifetime_start ||
-        II->getIntrinsicID() == Intrinsic::lifetime_end) {
+    if (II->getIntrinsicID() == Intrinsic::lifetime_start) {
       L->replaceAllUsesWith(UndefValue::get(L->getType()));
       toErase.push_back(L);
       NumGVNLoad++;
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
index 95563b0..2b4b66b 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
@@ -2163,8 +2163,8 @@ bool InstCombiner::WillNotOverflowSignedAdd(Value *LHS, Value *RHS) {
   
   // Add has the property that adding any two 2's complement numbers can only 
   // have one carry bit which can change a sign.  As such, if LHS and RHS each
-  // have at least two sign bits, we know that the addition of the two values will
-  // sign extend fine.
+  // have at least two sign bits, we know that the addition of the two values
+  // will sign extend fine.
   if (ComputeNumSignBits(LHS) > 1 && ComputeNumSignBits(RHS) > 1)
     return true;
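The two-sign-bits argument: a 32-bit value with at least two sign bits lies in [-2^30, 2^30-1], and the sum of two such values stays within int32_t range, so signed addition cannot overflow. A hedged sketch of counting sign bits (LLVM's `ComputeNumSignBits` reasons symbolically and handles far more cases; this helper only counts them for a concrete value):

```cpp
#include <cassert>
#include <cstdint>

// Count the leading bits equal to the sign bit, including the sign bit
// itself -- the "number of sign bits" of a concrete 32-bit value.
unsigned numSignBits(int32_t V) {
  uint32_t U = (uint32_t)V;
  unsigned N = 0;
  for (int i = 31; i >= 0 && ((U >> i) & 1) == ((U >> 31) & 1); --i)
    ++N;
  return N;
}
```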
   
@@ -2184,15 +2184,12 @@ Instruction *InstCombiner::visitAdd(BinaryOperator &I) {
   bool Changed = SimplifyCommutative(I);
   Value *LHS = I.getOperand(0), *RHS = I.getOperand(1);
 
-  if (Constant *RHSC = dyn_cast<Constant>(RHS)) {
-    // X + undef -> undef
-    if (isa<UndefValue>(RHS))
-      return ReplaceInstUsesWith(I, RHS);
-
-    // X + 0 --> X
-    if (RHSC->isNullValue())
-      return ReplaceInstUsesWith(I, LHS);
+  if (Value *V = SimplifyAddInst(LHS, RHS, I.hasNoSignedWrap(),
+                                 I.hasNoUnsignedWrap(), TD))
+    return ReplaceInstUsesWith(I, V);
 
+  
+  if (Constant *RHSC = dyn_cast<Constant>(RHS)) {
     if (ConstantInt *CI = dyn_cast<ConstantInt>(RHSC)) {
       // X + (signbit) --> X ^ signbit
       const APInt& Val = CI->getValue();
@@ -4070,6 +4067,21 @@ Value *InstCombiner::FoldLogicalPlusAnd(Value *LHS, Value *RHS,
 /// FoldAndOfICmps - Fold (icmp)&(icmp) if possible.
 Instruction *InstCombiner::FoldAndOfICmps(Instruction &I,
                                           ICmpInst *LHS, ICmpInst *RHS) {
+  // (icmp eq A, null) & (icmp eq B, null) -->
+  //     (icmp eq (ptrtoint(A)|ptrtoint(B)), 0)
+  if (TD &&
+      LHS->getPredicate() == ICmpInst::ICMP_EQ &&
+      RHS->getPredicate() == ICmpInst::ICMP_EQ &&
+      isa<ConstantPointerNull>(LHS->getOperand(1)) &&
+      isa<ConstantPointerNull>(RHS->getOperand(1))) {
+    const Type *IntPtrTy = TD->getIntPtrType(I.getContext());
+    Value *A = Builder->CreatePtrToInt(LHS->getOperand(0), IntPtrTy);
+    Value *B = Builder->CreatePtrToInt(RHS->getOperand(0), IntPtrTy);
+    Value *NewOr = Builder->CreateOr(A, B);
+    return new ICmpInst(ICmpInst::ICMP_EQ, NewOr,
+                        Constant::getNullValue(IntPtrTy));
+  }
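The null-compare fold added above is sound because OR-ing two addresses is zero iff both are zero: `A == null && B == null` is exactly `(ptrtoint(A) | ptrtoint(B)) == 0`, trading two compares and a branch for an or and one compare. A hedged C++ restatement (helper names are illustrative):

```cpp
#include <cassert>
#include <cstdint>

bool bothNullBranchy(const int *A, const int *B) {
  return A == nullptr && B == nullptr;         // two compares + short-circuit
}
bool bothNullOr(const int *A, const int *B) {
  return ((uintptr_t)A | (uintptr_t)B) == 0;   // one or + one compare
}

// Exhaustively check the two forms agree on null/non-null combinations.
inline bool agreeOnAllCases() {
  static int v = 0;
  const int *cases[] = {nullptr, &v};
  for (const int *A : cases)
    for (const int *B : cases)
      if (bothNullBranchy(A, B) != bothNullOr(A, B))
        return false;
  return true;
}
```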
+  
   Value *Val, *Val2;
   ConstantInt *LHSCst, *RHSCst;
   ICmpInst::Predicate LHSCC, RHSCC;
@@ -4081,12 +4093,20 @@ Instruction *InstCombiner::FoldAndOfICmps(Instruction &I,
                          m_ConstantInt(RHSCst))))
     return 0;
   
-  // (icmp ult A, C) & (icmp ult B, C) --> (icmp ult (A|B), C)
-  // where C is a power of 2
-  if (LHSCst == RHSCst && LHSCC == RHSCC && LHSCC == ICmpInst::ICMP_ULT &&
-      LHSCst->getValue().isPowerOf2()) {
-    Value *NewOr = Builder->CreateOr(Val, Val2);
-    return new ICmpInst(LHSCC, NewOr, LHSCst);
+  if (LHSCst == RHSCst && LHSCC == RHSCC) {
+    // (icmp ult A, C) & (icmp ult B, C) --> (icmp ult (A|B), C)
+    // where C is a power of 2
+    if (LHSCC == ICmpInst::ICMP_ULT &&
+        LHSCst->getValue().isPowerOf2()) {
+      Value *NewOr = Builder->CreateOr(Val, Val2);
+      return new ICmpInst(LHSCC, NewOr, LHSCst);
+    }
+    
+    // (icmp eq A, 0) & (icmp eq B, 0) --> (icmp eq (A|B), 0)
+    if (LHSCC == ICmpInst::ICMP_EQ && LHSCst->isZero()) {
+      Value *NewOr = Builder->CreateOr(Val, Val2);
+      return new ICmpInst(LHSCC, NewOr, LHSCst);
+    }
   }
   
   // From here on, we only handle:
@@ -4322,7 +4342,6 @@ Instruction *InstCombiner::visitAnd(BinaryOperator &I) {
 
   if (Value *V = SimplifyAndInst(Op0, Op1, TD))
     return ReplaceInstUsesWith(I, V);
-    
 
   // See if we can simplify any instructions used by the instruction whose sole 
   // purpose is to compute bits we don't care about.
@@ -4743,16 +4762,37 @@ static Instruction *MatchSelectFromAndOr(Value *A, Value *B,
 /// FoldOrOfICmps - Fold (icmp)|(icmp) if possible.
 Instruction *InstCombiner::FoldOrOfICmps(Instruction &I,
                                          ICmpInst *LHS, ICmpInst *RHS) {
+  // (icmp ne A, null) | (icmp ne B, null) -->
+  //     (icmp ne (ptrtoint(A)|ptrtoint(B)), 0)
+  if (TD &&
+      LHS->getPredicate() == ICmpInst::ICMP_NE &&
+      RHS->getPredicate() == ICmpInst::ICMP_NE &&
+      isa<ConstantPointerNull>(LHS->getOperand(1)) &&
+      isa<ConstantPointerNull>(RHS->getOperand(1))) {
+    const Type *IntPtrTy = TD->getIntPtrType(I.getContext());
+    Value *A = Builder->CreatePtrToInt(LHS->getOperand(0), IntPtrTy);
+    Value *B = Builder->CreatePtrToInt(RHS->getOperand(0), IntPtrTy);
+    Value *NewOr = Builder->CreateOr(A, B);
+    return new ICmpInst(ICmpInst::ICMP_NE, NewOr,
+                        Constant::getNullValue(IntPtrTy));
+  }
+  
   Value *Val, *Val2;
   ConstantInt *LHSCst, *RHSCst;
   ICmpInst::Predicate LHSCC, RHSCC;
   
   // This only handles icmp of constants: (icmp1 A, C1) | (icmp2 B, C2).
-  if (!match(LHS, m_ICmp(LHSCC, m_Value(Val),
-             m_ConstantInt(LHSCst))) ||
-      !match(RHS, m_ICmp(RHSCC, m_Value(Val2),
-             m_ConstantInt(RHSCst))))
+  if (!match(LHS, m_ICmp(LHSCC, m_Value(Val), m_ConstantInt(LHSCst))) ||
+      !match(RHS, m_ICmp(RHSCC, m_Value(Val2), m_ConstantInt(RHSCst))))
     return 0;
+
+  
+  // (icmp ne A, 0) | (icmp ne B, 0) --> (icmp ne (A|B), 0)
+  if (LHSCst == RHSCst && LHSCC == RHSCC &&
+      LHSCC == ICmpInst::ICMP_NE && LHSCst->isZero()) {
+    Value *NewOr = Builder->CreateOr(Val, Val2);
+    return new ICmpInst(LHSCC, NewOr, LHSCst);
+  }
   
   // From here on, we only handle:
   //    (icmp1 A, C1) | (icmp2 A, C2) --> something simpler.
@@ -8545,25 +8585,36 @@ Instruction *InstCombiner::transformZExtICmp(ICmpInst *ICI, Instruction &CI,
   if (ICI->isEquality() && CI.getType() == ICI->getOperand(0)->getType()) {
     if (const IntegerType *ITy = dyn_cast<IntegerType>(CI.getType())) {
       uint32_t BitWidth = ITy->getBitWidth();
-      if (BitWidth > 1) {
-        Value *LHS = ICI->getOperand(0);
-        Value *RHS = ICI->getOperand(1);
-
-        APInt KnownZeroLHS(BitWidth, 0), KnownOneLHS(BitWidth, 0);
-        APInt KnownZeroRHS(BitWidth, 0), KnownOneRHS(BitWidth, 0);
-        APInt TypeMask(APInt::getHighBitsSet(BitWidth, BitWidth-1));
-        ComputeMaskedBits(LHS, TypeMask, KnownZeroLHS, KnownOneLHS);
-        ComputeMaskedBits(RHS, TypeMask, KnownZeroRHS, KnownOneRHS);
-
-        if (KnownZeroLHS.countLeadingOnes() == BitWidth-1 &&
-            KnownZeroRHS.countLeadingOnes() == BitWidth-1) {
+      Value *LHS = ICI->getOperand(0);
+      Value *RHS = ICI->getOperand(1);
+
+      APInt KnownZeroLHS(BitWidth, 0), KnownOneLHS(BitWidth, 0);
+      APInt KnownZeroRHS(BitWidth, 0), KnownOneRHS(BitWidth, 0);
+      APInt TypeMask(APInt::getAllOnesValue(BitWidth));
+      ComputeMaskedBits(LHS, TypeMask, KnownZeroLHS, KnownOneLHS);
+      ComputeMaskedBits(RHS, TypeMask, KnownZeroRHS, KnownOneRHS);
+
+      if (KnownZeroLHS == KnownZeroRHS && KnownOneLHS == KnownOneRHS) {
+        APInt KnownBits = KnownZeroLHS | KnownOneLHS;
+        APInt UnknownBit = ~KnownBits;
+        if (UnknownBit.countPopulation() == 1) {
           if (!DoXform) return ICI;
 
-          Value *Xor = Builder->CreateXor(LHS, RHS);
+          Value *Result = Builder->CreateXor(LHS, RHS);
+
+          // Mask off any bits that are set and won't be shifted away.
+          if (KnownOneLHS.uge(UnknownBit))
+            Result = Builder->CreateAnd(Result,
+                                        ConstantInt::get(ITy, UnknownBit));
+
+          // Shift the bit we're testing down to the lsb.
+          Result = Builder->CreateLShr(
+               Result, ConstantInt::get(ITy, UnknownBit.countTrailingZeros()));
+
           if (ICI->getPredicate() == ICmpInst::ICMP_EQ)
-            Xor = Builder->CreateXor(Xor, ConstantInt::get(ITy, 1));
-          Xor->takeName(ICI);
-          return ReplaceInstUsesWith(CI, Xor);
+            Result = Builder->CreateXor(Result, ConstantInt::get(ITy, 1));
+          Result->takeName(ICI);
+          return ReplaceInstUsesWith(CI, Result);
         }
       }
     }
@@ -9894,9 +9945,9 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
         // Create a simple add instruction, and insert it into the struct.
         Instruction *Add = BinaryOperator::CreateAdd(LHS, RHS, "", &CI);
         Worklist.Add(Add);
-        Constant *V[2];
-        V[0] = UndefValue::get(LHS->getType());
-        V[1] = ConstantInt::getTrue(*Context);
+        Constant *V[] = {
+          UndefValue::get(LHS->getType()), ConstantInt::getTrue(*Context)
+        };
         Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
         return InsertValueInst::Create(Struct, Add, 0);
       }
@@ -9906,9 +9957,9 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
         // Create a simple add instruction, and insert it into the struct.
         Instruction *Add = BinaryOperator::CreateNUWAdd(LHS, RHS, "", &CI);
         Worklist.Add(Add);
-        Constant *V[2];
-        V[0] = UndefValue::get(LHS->getType());
-        V[1] = ConstantInt::getFalse(*Context);
+        Constant *V[] = {
+          UndefValue::get(LHS->getType()), ConstantInt::getFalse(*Context)
+        };
         Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
         return InsertValueInst::Create(Struct, Add, 0);
       }
@@ -9933,7 +9984,8 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
       // X + 0 -> {X, false}
       if (RHS->isZero()) {
         Constant *V[] = {
-          UndefValue::get(II->getType()), ConstantInt::getFalse(*Context)
+          UndefValue::get(II->getOperand(0)->getType()),
+          ConstantInt::getFalse(*Context)
         };
         Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
         return InsertValueInst::Create(Struct, II->getOperand(1), 0);
@@ -9952,7 +10004,8 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
       // X - 0 -> {X, false}
       if (RHS->isZero()) {
         Constant *V[] = {
-          UndefValue::get(II->getType()), ConstantInt::getFalse(*Context)
+          UndefValue::get(II->getOperand(1)->getType()),
+          ConstantInt::getFalse(*Context)
         };
         Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
         return InsertValueInst::Create(Struct, II->getOperand(1), 0);
@@ -9981,11 +10034,12 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
       
       // X * 1 -> {X, false}
       if (RHSI->equalsInt(1)) {
-        Constant *V[2];
-        V[0] = UndefValue::get(II->getType());
-        V[1] = ConstantInt::getFalse(*Context);
+        Constant *V[] = {
+          UndefValue::get(II->getOperand(1)->getType()),
+          ConstantInt::getFalse(*Context)
+        };
         Constant *Struct = ConstantStruct::get(*Context, V, 2, false);
-        return InsertValueInst::Create(Struct, II->getOperand(1), 1);
+        return InsertValueInst::Create(Struct, II->getOperand(1), 0);
       }
     }
     break;
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
index 5864113..d58b9c9 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
@@ -158,12 +158,18 @@ bool JumpThreading::runOnFunction(Function &F) {
           if (BBI->isTerminator()) {
             // Since TryToSimplifyUncondBranchFromEmptyBlock may delete the
             // block, we have to make sure it isn't in the LoopHeaders set.  We
-            // reinsert afterward in the rare case when the block isn't deleted.
+            // reinsert afterward if needed.
             bool ErasedFromLoopHeaders = LoopHeaders.erase(BB);
+            BasicBlock *Succ = BI->getSuccessor(0);
             
-            if (TryToSimplifyUncondBranchFromEmptyBlock(BB))
+            if (TryToSimplifyUncondBranchFromEmptyBlock(BB)) {
               Changed = true;
-            else if (ErasedFromLoopHeaders)
+              // If we deleted BB and BB was the header of a loop, then the
+              // successor is now the header of the loop.
+              BB = Succ;
+            }
+            
+            if (ErasedFromLoopHeaders)
               LoopHeaders.insert(BB);
           }
         }
@@ -712,6 +718,11 @@ bool JumpThreading::ProcessSwitchOnDuplicateCond(BasicBlock *PredBB,
       if (PredSI->getSuccessor(PredCase) != DestBB &&
           DestSI->getSuccessor(i) != DestBB)
         continue;
+      
+      // Do not forward this if it already goes to this destination; that
+      // would create an infinite loop.
+      if (PredSI->getSuccessor(PredCase) == DestSucc)
+        continue;
 
       // Otherwise, we're safe to make the change.  Make sure that the edge from
       // DestSI to DestSucc is not critical and has no PHI nodes.
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp
index 5511387..42a8fdc 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LICM.cpp
@@ -160,16 +160,17 @@ namespace {
 
       // Because the exit block is not in the loop, we know we have to get _at
       // least_ its immediate dominator.
-      do {
-        // Get next Immediate Dominator.
-        IDom = IDom->getIDom();
-
+      IDom = IDom->getIDom();
+      
+      while (IDom && IDom != BlockInLoopNode) {
         // If we have got to the header of the loop, then the instructions block
         // did not dominate the exit node, so we can't hoist it.
         if (IDom->getBlock() == LoopHeader)
           return false;
 
-      } while (IDom != BlockInLoopNode);
+        // Get next Immediate Dominator.
+        IDom = IDom->getIDom();
+  }
 
       return true;
     }
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
index 38d267a..b7adfdc 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
@@ -404,12 +404,13 @@ bool LoopUnswitch::IsTrivialUnswitchCondition(Value *Cond, Constant **Val,
 bool LoopUnswitch::UnswitchIfProfitable(Value *LoopCond, Constant *Val){
 
   initLoopData();
-  Function *F = loopHeader->getParent();
 
   // If LoopSimplify was unable to form a preheader, don't do any unswitching.
   if (!loopPreheader)
     return false;
 
+  Function *F = loopHeader->getParent();
+
   // If the condition is trivial, always unswitch.  There is no code growth for
   // this case.
   if (!IsTrivialUnswitchCondition(LoopCond)) {
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
index 001267a..db87874 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
@@ -19,7 +19,6 @@
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/Function.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Operator.h"
 #include "llvm/Value.h"
 #include "llvm/ADT/DenseMap.h"
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
index 047d279..4b686cc 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
@@ -102,25 +102,27 @@ namespace {
 
     int isSafeAllocaToScalarRepl(AllocaInst *AI);
 
-    void isSafeUseOfAllocation(Instruction *User, AllocaInst *AI,
-                               AllocaInfo &Info);
-    void isSafeElementUse(Value *Ptr, bool isFirstElt, AllocaInst *AI,
+    void isSafeForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
+                             uint64_t ArrayOffset, AllocaInfo &Info);
+    void isSafeGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t &Offset,
+                   uint64_t &ArrayOffset, AllocaInfo &Info);
+    void isSafeMemAccess(AllocaInst *AI, uint64_t Offset, uint64_t ArrayOffset,
+                         uint64_t MemSize, const Type *MemOpType, bool isStore,
                          AllocaInfo &Info);
-    void isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocaInst *AI,
-                                        unsigned OpNo, AllocaInfo &Info);
-    void isSafeUseOfBitCastedAllocation(BitCastInst *User, AllocaInst *AI,
-                                        AllocaInfo &Info);
+    bool TypeHasComponent(const Type *T, uint64_t Offset, uint64_t Size);
+    unsigned FindElementAndOffset(const Type *&T, uint64_t &Offset);
     
     void DoScalarReplacement(AllocaInst *AI, 
                              std::vector<AllocaInst*> &WorkList);
     void CleanupGEP(GetElementPtrInst *GEP);
-    void CleanupAllocaUsers(AllocaInst *AI);
+    void CleanupAllocaUsers(Value *V);
     AllocaInst *AddNewAlloca(Function &F, const Type *Ty, AllocaInst *Base);
     
-    void RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocaInst *AI,
-                                    SmallVector<AllocaInst*, 32> &NewElts);
-    
-    void RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
+    void RewriteForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
+                              SmallVector<AllocaInst*, 32> &NewElts);
+    void RewriteGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t Offset,
+                    SmallVector<AllocaInst*, 32> &NewElts);
+    void RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
                                       AllocaInst *AI,
                                       SmallVector<AllocaInst*, 32> &NewElts);
     void RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
@@ -360,154 +362,12 @@ void SROA::DoScalarReplacement(AllocaInst *AI,
     }
   }
 
-  // Now that we have created the alloca instructions that we want to use,
-  // expand the getelementptr instructions to use them.
-  //
-  while (!AI->use_empty()) {
-    Instruction *User = cast<Instruction>(AI->use_back());
-    if (BitCastInst *BCInst = dyn_cast<BitCastInst>(User)) {
-      RewriteBitCastUserOfAlloca(BCInst, AI, ElementAllocas);
-      BCInst->eraseFromParent();
-      continue;
-    }
-    
-    // Replace:
-    //   %res = load { i32, i32 }* %alloc
-    // with:
-    //   %load.0 = load i32* %alloc.0
-    //   %insert.0 insertvalue { i32, i32 } zeroinitializer, i32 %load.0, 0 
-    //   %load.1 = load i32* %alloc.1
-    //   %insert = insertvalue { i32, i32 } %insert.0, i32 %load.1, 1 
-    // (Also works for arrays instead of structs)
-    if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
-      Value *Insert = UndefValue::get(LI->getType());
-      for (unsigned i = 0, e = ElementAllocas.size(); i != e; ++i) {
-        Value *Load = new LoadInst(ElementAllocas[i], "load", LI);
-        Insert = InsertValueInst::Create(Insert, Load, i, "insert", LI);
-      }
-      LI->replaceAllUsesWith(Insert);
-      LI->eraseFromParent();
-      continue;
-    }
-
-    // Replace:
-    //   store { i32, i32 } %val, { i32, i32 }* %alloc
-    // with:
-    //   %val.0 = extractvalue { i32, i32 } %val, 0 
-    //   store i32 %val.0, i32* %alloc.0
-    //   %val.1 = extractvalue { i32, i32 } %val, 1 
-    //   store i32 %val.1, i32* %alloc.1
-    // (Also works for arrays instead of structs)
-    if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
-      Value *Val = SI->getOperand(0);
-      for (unsigned i = 0, e = ElementAllocas.size(); i != e; ++i) {
-        Value *Extract = ExtractValueInst::Create(Val, i, Val->getName(), SI);
-        new StoreInst(Extract, ElementAllocas[i], SI);
-      }
-      SI->eraseFromParent();
-      continue;
-    }
-    
-    GetElementPtrInst *GEPI = cast<GetElementPtrInst>(User);
-    // We now know that the GEP is of the form: GEP <ptr>, 0, <cst>
-    unsigned Idx =
-       (unsigned)cast<ConstantInt>(GEPI->getOperand(2))->getZExtValue();
-
-    assert(Idx < ElementAllocas.size() && "Index out of range?");
-    AllocaInst *AllocaToUse = ElementAllocas[Idx];
-
-    Value *RepValue;
-    if (GEPI->getNumOperands() == 3) {
-      // Do not insert a new getelementptr instruction with zero indices, only
-      // to have it optimized out later.
-      RepValue = AllocaToUse;
-    } else {
-      // We are indexing deeply into the structure, so we still need a
-      // getelement ptr instruction to finish the indexing.  This may be
-      // expanded itself once the worklist is rerun.
-      //
-      SmallVector<Value*, 8> NewArgs;
-      NewArgs.push_back(Constant::getNullValue(
-                                           Type::getInt32Ty(AI->getContext())));
-      NewArgs.append(GEPI->op_begin()+3, GEPI->op_end());
-      RepValue = GetElementPtrInst::Create(AllocaToUse, NewArgs.begin(),
-                                           NewArgs.end(), "", GEPI);
-      RepValue->takeName(GEPI);
-    }
-    
-    // If this GEP is to the start of the aggregate, check for memcpys.
-    if (Idx == 0 && GEPI->hasAllZeroIndices())
-      RewriteBitCastUserOfAlloca(GEPI, AI, ElementAllocas);
-
-    // Move all of the users over to the new GEP.
-    GEPI->replaceAllUsesWith(RepValue);
-    // Delete the old GEP
-    GEPI->eraseFromParent();
-  }
-
-  // Finally, delete the Alloca instruction
-  AI->eraseFromParent();
+  // Now that we have created the new alloca instructions, rewrite all the
+  // uses of the old alloca.
+  RewriteForScalarRepl(AI, AI, 0, ElementAllocas);
   NumReplaced++;
 }
-
-
-/// isSafeElementUse - Check to see if this use is an allowed use for a
-/// getelementptr instruction of an array aggregate allocation.  isFirstElt
-/// indicates whether Ptr is known to the start of the aggregate.
-///
-void SROA::isSafeElementUse(Value *Ptr, bool isFirstElt, AllocaInst *AI,
-                            AllocaInfo &Info) {
-  for (Value::use_iterator I = Ptr->use_begin(), E = Ptr->use_end();
-       I != E; ++I) {
-    Instruction *User = cast<Instruction>(*I);
-    switch (User->getOpcode()) {
-    case Instruction::Load:  break;
-    case Instruction::Store:
-      // Store is ok if storing INTO the pointer, not storing the pointer
-      if (User->getOperand(0) == Ptr) return MarkUnsafe(Info);
-      break;
-    case Instruction::GetElementPtr: {
-      GetElementPtrInst *GEP = cast<GetElementPtrInst>(User);
-      bool AreAllZeroIndices = isFirstElt;
-      if (GEP->getNumOperands() > 1) {
-        if (!isa<ConstantInt>(GEP->getOperand(1)) ||
-            !cast<ConstantInt>(GEP->getOperand(1))->isZero())
-          // Using pointer arithmetic to navigate the array.
-          return MarkUnsafe(Info);
-       
-        if (AreAllZeroIndices)
-          AreAllZeroIndices = GEP->hasAllZeroIndices();
-      }
-      isSafeElementUse(GEP, AreAllZeroIndices, AI, Info);
-      if (Info.isUnsafe) return;
-      break;
-    }
-    case Instruction::BitCast:
-      if (isFirstElt) {
-        isSafeUseOfBitCastedAllocation(cast<BitCastInst>(User), AI, Info);
-        if (Info.isUnsafe) return;
-        break;
-      }
-      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
-      return MarkUnsafe(Info);
-    case Instruction::Call:
-      if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(User)) {
-        if (isFirstElt) {
-          isSafeMemIntrinsicOnAllocation(MI, AI, I.getOperandNo(), Info);
-          if (Info.isUnsafe) return;
-          break;
-        }
-      }
-      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
-      return MarkUnsafe(Info);
-    default:
-      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
-      return MarkUnsafe(Info);
-    }
-  }
-  return;  // All users look ok :)
-}
-
+    
 /// AllUsersAreLoads - Return true if all users of this value are loads.
 static bool AllUsersAreLoads(Value *Ptr) {
   for (Value::use_iterator I = Ptr->use_begin(), E = Ptr->use_end();
@@ -517,218 +377,371 @@ static bool AllUsersAreLoads(Value *Ptr) {
   return true;
 }
 
-/// isSafeUseOfAllocation - Check to see if this user is an allowed use for an
-/// aggregate allocation.
-///
-void SROA::isSafeUseOfAllocation(Instruction *User, AllocaInst *AI,
-                                 AllocaInfo &Info) {
-  if (BitCastInst *C = dyn_cast<BitCastInst>(User))
-    return isSafeUseOfBitCastedAllocation(C, AI, Info);
-
-  if (LoadInst *LI = dyn_cast<LoadInst>(User))
-    if (!LI->isVolatile())
-      return;// Loads (returning a first class aggregrate) are always rewritable
-
-  if (StoreInst *SI = dyn_cast<StoreInst>(User))
-    if (!SI->isVolatile() && SI->getOperand(0) != AI)
-      return;// Store is ok if storing INTO the pointer, not storing the pointer
- 
-  GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(User);
-  if (GEPI == 0)
-    return MarkUnsafe(Info);
-
-  gep_type_iterator I = gep_type_begin(GEPI), E = gep_type_end(GEPI);
+/// isSafeForScalarRepl - Check if instruction I is a safe use with regard to
+/// performing scalar replacement of alloca AI.  The results are flagged in
+/// the Info parameter.  Offset and ArrayOffset indicate the position within
+/// AI that is referenced by this instruction.
+void SROA::isSafeForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
+                               uint64_t ArrayOffset, AllocaInfo &Info) {
+  for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI!=E; ++UI) {
+    Instruction *User = cast<Instruction>(*UI);
 
-  // The GEP is not safe to transform if not of the form "GEP <ptr>, 0, <cst>".
-  if (I == E ||
-      I.getOperand() != Constant::getNullValue(I.getOperand()->getType())) {
-    return MarkUnsafe(Info);
+    if (BitCastInst *BC = dyn_cast<BitCastInst>(User)) {
+      isSafeForScalarRepl(BC, AI, Offset, ArrayOffset, Info);
+    } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(User)) {
+      uint64_t GEPArrayOffset = ArrayOffset;
+      uint64_t GEPOffset = Offset;
+      isSafeGEP(GEPI, AI, GEPOffset, GEPArrayOffset, Info);
+      if (!Info.isUnsafe)
+        isSafeForScalarRepl(GEPI, AI, GEPOffset, GEPArrayOffset, Info);
+    } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(UI)) {
+      ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
+      if (Length)
+        isSafeMemAccess(AI, Offset, ArrayOffset, Length->getZExtValue(), 0,
+                        UI.getOperandNo() == 1, Info);
+      else
+        MarkUnsafe(Info);
+    } else if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
+      if (!LI->isVolatile()) {
+        const Type *LIType = LI->getType();
+        isSafeMemAccess(AI, Offset, ArrayOffset, TD->getTypeAllocSize(LIType),
+                        LIType, false, Info);
+      } else
+        MarkUnsafe(Info);
+    } else if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
+      // Store is ok if storing INTO the pointer, not storing the pointer
+      if (!SI->isVolatile() && SI->getOperand(0) != I) {
+        const Type *SIType = SI->getOperand(0)->getType();
+        isSafeMemAccess(AI, Offset, ArrayOffset, TD->getTypeAllocSize(SIType),
+                        SIType, true, Info);
+      } else
+        MarkUnsafe(Info);
+    } else if (isa<DbgInfoIntrinsic>(UI)) {
+      // If one user is DbgInfoIntrinsic then check if all users are
+      // DbgInfoIntrinsics.
+      if (OnlyUsedByDbgInfoIntrinsics(I)) {
+        Info.needsCleanup = true;
+        return;
+      }
+      MarkUnsafe(Info);
+    } else {
+      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
+      MarkUnsafe(Info);
+    }
+    if (Info.isUnsafe) return;
   }
+}
 
-  ++I;
-  if (I == E) return MarkUnsafe(Info);  // ran out of GEP indices??
+/// isSafeGEP - Check if a GEP instruction can be handled for scalar
+/// replacement.  It is safe when all the indices are constant, in-bounds
+/// references, and when the resulting offset corresponds to an element within
+/// the alloca type.  The results are flagged in the Info parameter.  Upon
+/// return, Offset is adjusted as specified by the GEP indices.  For the
+/// special case of a variable index to a 2-element array, ArrayOffset is set
+/// to the array element size.
+void SROA::isSafeGEP(GetElementPtrInst *GEPI, AllocaInst *AI,
+                     uint64_t &Offset, uint64_t &ArrayOffset,
+                     AllocaInfo &Info) {
+  gep_type_iterator GEPIt = gep_type_begin(GEPI), E = gep_type_end(GEPI);
+  if (GEPIt == E)
+    return;
+
+  // The first GEP index must be zero.
+  if (!isa<ConstantInt>(GEPIt.getOperand()) ||
+      !cast<ConstantInt>(GEPIt.getOperand())->isZero())
+    return MarkUnsafe(Info);
+  if (++GEPIt == E)
+    return;
 
-  bool IsAllZeroIndices = true;
-  
   // If the first index is a non-constant index into an array, see if we can
   // handle it as a special case.
-  if (const ArrayType *AT = dyn_cast<ArrayType>(*I)) {
-    if (!isa<ConstantInt>(I.getOperand())) {
-      IsAllZeroIndices = 0;
-      uint64_t NumElements = AT->getNumElements();
-      
-      // If this is an array index and the index is not constant, we cannot
-      // promote... that is unless the array has exactly one or two elements in
-      // it, in which case we CAN promote it, but we have to canonicalize this
-      // out if this is the only problem.
-      if ((NumElements == 1 || NumElements == 2) &&
-          AllUsersAreLoads(GEPI)) {
+  const Type *ArrayEltTy = 0;
+  if (ArrayOffset == 0 && Offset == 0) {
+    if (const ArrayType *AT = dyn_cast<ArrayType>(*GEPIt)) {
+      if (!isa<ConstantInt>(GEPIt.getOperand())) {
+        uint64_t NumElements = AT->getNumElements();
+
+        // If this is an array index and the index is not constant, we cannot
+        // promote... that is unless the array has exactly one or two elements
+        // in it, in which case we CAN promote it, but we have to canonicalize
+        // this out if this is the only problem.
+        if ((NumElements != 1 && NumElements != 2) || !AllUsersAreLoads(GEPI))
+          return MarkUnsafe(Info);
         Info.needsCleanup = true;
-        return;  // Canonicalization required!
+        ArrayOffset = TD->getTypeAllocSizeInBits(AT->getElementType());
+        ArrayEltTy = AT->getElementType();
+        ++GEPIt;
       }
-      return MarkUnsafe(Info);
     }
   }
- 
+
   // Walk through the GEP type indices, checking the types that this indexes
   // into.
-  for (; I != E; ++I) {
+  for (; GEPIt != E; ++GEPIt) {
     // Ignore struct elements, no extra checking needed for these.
-    if (isa<StructType>(*I))
+    if (isa<StructType>(*GEPIt))
       continue;
-    
-    ConstantInt *IdxVal = dyn_cast<ConstantInt>(I.getOperand());
-    if (!IdxVal) return MarkUnsafe(Info);
 
-    // Are all indices still zero?
-    IsAllZeroIndices &= IdxVal->isZero();
-    
-    if (const ArrayType *AT = dyn_cast<ArrayType>(*I)) {
+    ConstantInt *IdxVal = dyn_cast<ConstantInt>(GEPIt.getOperand());
+    if (!IdxVal)
+      return MarkUnsafe(Info);
+
+    if (const ArrayType *AT = dyn_cast<ArrayType>(*GEPIt)) {
       // This GEP indexes an array.  Verify that this is an in-range constant
       // integer. Specifically, consider A[0][i]. We cannot know that the user
       // isn't doing invalid things like allowing i to index an out-of-range
       // subscript that accesses A[1].  Because of this, we have to reject SROA
-      // of any accesses into structs where any of the components are variables. 
+      // of any accesses into structs where any of the components are variables.
       if (IdxVal->getZExtValue() >= AT->getNumElements())
         return MarkUnsafe(Info);
-    } else if (const VectorType *VT = dyn_cast<VectorType>(*I)) {
+    } else {
+      const VectorType *VT = dyn_cast<VectorType>(*GEPIt);
+      assert(VT && "unexpected type in GEP type iterator");
       if (IdxVal->getZExtValue() >= VT->getNumElements())
         return MarkUnsafe(Info);
     }
   }
-  
-  // If there are any non-simple uses of this getelementptr, make sure to reject
-  // them.
-  return isSafeElementUse(GEPI, IsAllZeroIndices, AI, Info);
-}
 
-/// isSafeMemIntrinsicOnAllocation - Return true if the specified memory
-/// intrinsic can be promoted by SROA.  At this point, we know that the operand
-/// of the memintrinsic is a pointer to the beginning of the allocation.
-void SROA::isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocaInst *AI,
-                                          unsigned OpNo, AllocaInfo &Info) {
-  // If not constant length, give up.
-  ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
-  if (!Length) return MarkUnsafe(Info);
-  
-  // If not the whole aggregate, give up.
-  if (Length->getZExtValue() !=
-      TD->getTypeAllocSize(AI->getType()->getElementType()))
-    return MarkUnsafe(Info);
-  
-  // We only know about memcpy/memset/memmove.
-  if (!isa<MemIntrinsic>(MI))
-    return MarkUnsafe(Info);
-  
-  // Otherwise, we can transform it.  Determine whether this is a memcpy/set
-  // into or out of the aggregate.
-  if (OpNo == 1)
-    Info.isMemCpyDst = true;
-  else {
-    assert(OpNo == 2);
-    Info.isMemCpySrc = true;
+  // All the indices are safe.  Now compute the offset due to this GEP and
+  // check if the alloca has a component element at that offset.
+  if (ArrayOffset == 0) {
+    SmallVector<Value*, 8> Indices(GEPI->op_begin() + 1, GEPI->op_end());
+    Offset += TD->getIndexedOffset(GEPI->getPointerOperandType(),
+                                   &Indices[0], Indices.size());
+  } else {
+    // Both array elements have the same type, so it suffices to check one of
+    // them.  Copy the GEP indices starting from the array index, but replace
+    // that variable index with a constant zero.
+    SmallVector<Value*, 8> Indices(GEPI->op_begin() + 2, GEPI->op_end());
+    Indices[0] = Constant::getNullValue(Type::getInt32Ty(GEPI->getContext()));
+    const Type *ArrayEltPtr = PointerType::getUnqual(ArrayEltTy);
+    Offset += TD->getIndexedOffset(ArrayEltPtr, &Indices[0], Indices.size());
   }
+  if (!TypeHasComponent(AI->getAllocatedType(), Offset, 0))
+    MarkUnsafe(Info);
 }
 
-/// isSafeUseOfBitCastedAllocation - Return true if all users of this bitcast
-/// are 
-void SROA::isSafeUseOfBitCastedAllocation(BitCastInst *BC, AllocaInst *AI,
-                                          AllocaInfo &Info) {
-  for (Value::use_iterator UI = BC->use_begin(), E = BC->use_end();
-       UI != E; ++UI) {
-    if (BitCastInst *BCU = dyn_cast<BitCastInst>(UI)) {
-      isSafeUseOfBitCastedAllocation(BCU, AI, Info);
-    } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(UI)) {
-      isSafeMemIntrinsicOnAllocation(MI, AI, UI.getOperandNo(), Info);
-    } else if (StoreInst *SI = dyn_cast<StoreInst>(UI)) {
-      if (SI->isVolatile())
-        return MarkUnsafe(Info);
-      
-      // If storing the entire alloca in one chunk through a bitcasted pointer
-      // to integer, we can transform it.  This happens (for example) when you
-      // cast a {i32,i32}* to i64* and store through it.  This is similar to the
-      // memcpy case and occurs in various "byval" cases and emulated memcpys.
-      if (isa<IntegerType>(SI->getOperand(0)->getType()) &&
-          TD->getTypeAllocSize(SI->getOperand(0)->getType()) ==
-          TD->getTypeAllocSize(AI->getType()->getElementType())) {
-        Info.isMemCpyDst = true;
-        continue;
+/// isSafeMemAccess - Check if a load/store/memcpy operates on the entire AI
+/// alloca or has an offset and size that corresponds to a component element
+/// within it.  The offset checked here may have been formed from a GEP with a
+/// pointer bitcasted to a different type.
+void SROA::isSafeMemAccess(AllocaInst *AI, uint64_t Offset,
+                           uint64_t ArrayOffset, uint64_t MemSize,
+                           const Type *MemOpType, bool isStore,
+                           AllocaInfo &Info) {
+  // Check if this is a load/store of the entire alloca.
+  if (Offset == 0 && ArrayOffset == 0 &&
+      MemSize == TD->getTypeAllocSize(AI->getAllocatedType())) {
+    bool UsesAggregateType = (MemOpType == AI->getAllocatedType());
+    // This is safe for MemIntrinsics (where MemOpType is 0), integer types
+    // (which are essentially the same as the MemIntrinsics, especially with
+    // regard to copying padding between elements), or references using the
+    // aggregate type of the alloca.
+    if (!MemOpType || isa<IntegerType>(MemOpType) || UsesAggregateType) {
+      if (!UsesAggregateType) {
+        if (isStore)
+          Info.isMemCpyDst = true;
+        else
+          Info.isMemCpySrc = true;
       }
-      return MarkUnsafe(Info);
-    } else if (LoadInst *LI = dyn_cast<LoadInst>(UI)) {
-      if (LI->isVolatile())
-        return MarkUnsafe(Info);
+      return;
+    }
+  }
+  // Check if the offset/size correspond to a component within the alloca type.
+  const Type *T = AI->getAllocatedType();
+  if (TypeHasComponent(T, Offset, MemSize) &&
+      (ArrayOffset == 0 || TypeHasComponent(T, Offset + ArrayOffset, MemSize)))
+    return;
 
-      // If loading the entire alloca in one chunk through a bitcasted pointer
-      // to integer, we can transform it.  This happens (for example) when you
-      // cast a {i32,i32}* to i64* and load through it.  This is similar to the
-      // memcpy case and occurs in various "byval" cases and emulated memcpys.
-      if (isa<IntegerType>(LI->getType()) &&
-          TD->getTypeAllocSize(LI->getType()) ==
-          TD->getTypeAllocSize(AI->getType()->getElementType())) {
-        Info.isMemCpySrc = true;
-        continue;
+  return MarkUnsafe(Info);
+}
+
+/// TypeHasComponent - Return true if T has a component type with the
+/// specified offset and size.  If Size is zero, do not check the size.
+bool SROA::TypeHasComponent(const Type *T, uint64_t Offset, uint64_t Size) {
+  const Type *EltTy;
+  uint64_t EltSize;
+  if (const StructType *ST = dyn_cast<StructType>(T)) {
+    const StructLayout *Layout = TD->getStructLayout(ST);
+    unsigned EltIdx = Layout->getElementContainingOffset(Offset);
+    EltTy = ST->getContainedType(EltIdx);
+    EltSize = TD->getTypeAllocSize(EltTy);
+    Offset -= Layout->getElementOffset(EltIdx);
+  } else if (const ArrayType *AT = dyn_cast<ArrayType>(T)) {
+    EltTy = AT->getElementType();
+    EltSize = TD->getTypeAllocSize(EltTy);
+    Offset %= EltSize;
+  } else {
+    return false;
+  }
+  if (Offset == 0 && (Size == 0 || EltSize == Size))
+    return true;
+  // Check if the component spans multiple elements.
+  if (Offset + Size > EltSize)
+    return false;
+  return TypeHasComponent(EltTy, Offset, Size);
+}
+
+/// RewriteForScalarRepl - Alloca AI is being split into NewElts, so rewrite
+/// the instruction I, which references it, to use the separate elements.
+/// Offset indicates the position within AI that is referenced by this
+/// instruction.
+void SROA::RewriteForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
+                                SmallVector<AllocaInst*, 32> &NewElts) {
+  for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI != E; ) {
+    Instruction *User = cast<Instruction>(*UI++);
+
+    if (BitCastInst *BC = dyn_cast<BitCastInst>(User)) {
+      if (BC->getOperand(0) == AI)
+        BC->setOperand(0, NewElts[0]);
+      // If the bitcast type now matches the operand type, it will be removed
+      // after processing its uses.
+      RewriteForScalarRepl(BC, AI, Offset, NewElts);
+    } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(User)) {
+      RewriteGEP(GEPI, AI, Offset, NewElts);
+    } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(User)) {
+      ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
+      uint64_t MemSize = Length->getZExtValue();
+      if (Offset == 0 &&
+          MemSize == TD->getTypeAllocSize(AI->getAllocatedType()))
+        RewriteMemIntrinUserOfAlloca(MI, I, AI, NewElts);
+    } else if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
+      const Type *LIType = LI->getType();
+      if (LIType == AI->getAllocatedType()) {
+        // Replace:
+        //   %res = load { i32, i32 }* %alloc
+        // with:
+        //   %load.0 = load i32* %alloc.0
+        //   %insert.0 = insertvalue { i32, i32 } undef, i32 %load.0, 0
+        //   %load.1 = load i32* %alloc.1
+        //   %insert = insertvalue { i32, i32 } %insert.0, i32 %load.1, 1
+        // (Also works for arrays instead of structs)
+        Value *Insert = UndefValue::get(LIType);
+        for (unsigned i = 0, e = NewElts.size(); i != e; ++i) {
+          Value *Load = new LoadInst(NewElts[i], "load", LI);
+          Insert = InsertValueInst::Create(Insert, Load, i, "insert", LI);
+        }
+        LI->replaceAllUsesWith(Insert);
+        LI->eraseFromParent();
+      } else if (isa<IntegerType>(LIType) &&
+                 TD->getTypeAllocSize(LIType) ==
+                 TD->getTypeAllocSize(AI->getAllocatedType())) {
+        // If this is a load of the entire alloca to an integer, rewrite it.
+        RewriteLoadUserOfWholeAlloca(LI, AI, NewElts);
       }
-      return MarkUnsafe(Info);
-    } else if (isa<DbgInfoIntrinsic>(UI)) {
-      // If one user is DbgInfoIntrinsic then check if all users are
-      // DbgInfoIntrinsics.
-      if (OnlyUsedByDbgInfoIntrinsics(BC)) {
-        Info.needsCleanup = true;
-        return;
+    } else if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
+      Value *Val = SI->getOperand(0);
+      const Type *SIType = Val->getType();
+      if (SIType == AI->getAllocatedType()) {
+        // Replace:
+        //   store { i32, i32 } %val, { i32, i32 }* %alloc
+        // with:
+        //   %val.0 = extractvalue { i32, i32 } %val, 0
+        //   store i32 %val.0, i32* %alloc.0
+        //   %val.1 = extractvalue { i32, i32 } %val, 1
+        //   store i32 %val.1, i32* %alloc.1
+        // (Also works for arrays instead of structs)
+        for (unsigned i = 0, e = NewElts.size(); i != e; ++i) {
+          Value *Extract = ExtractValueInst::Create(Val, i, Val->getName(), SI);
+          new StoreInst(Extract, NewElts[i], SI);
+        }
+        SI->eraseFromParent();
+      } else if (isa<IntegerType>(SIType) &&
+                 TD->getTypeAllocSize(SIType) ==
+                 TD->getTypeAllocSize(AI->getAllocatedType())) {
+        // If this is a store of the entire alloca from an integer, rewrite it.
+        RewriteStoreUserOfWholeAlloca(SI, AI, NewElts);
       }
-      else
-        MarkUnsafe(Info);
     }
-    else {
-      return MarkUnsafe(Info);
+  }
+  // Delete unused instructions and identity bitcasts.
+  if (I->use_empty())
+    I->eraseFromParent();
+  else if (BitCastInst *BC = dyn_cast<BitCastInst>(I)) {
+    if (BC->getDestTy() == BC->getSrcTy()) {
+      BC->replaceAllUsesWith(BC->getOperand(0));
+      BC->eraseFromParent();
     }
-    if (Info.isUnsafe) return;
   }
 }
 
-/// RewriteBitCastUserOfAlloca - BCInst (transitively) bitcasts AI, or indexes
-/// to its first element.  Transform users of the cast to use the new values
-/// instead.
-void SROA::RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocaInst *AI,
-                                      SmallVector<AllocaInst*, 32> &NewElts) {
-  Value::use_iterator UI = BCInst->use_begin(), UE = BCInst->use_end();
-  while (UI != UE) {
-    Instruction *User = cast<Instruction>(*UI++);
-    if (BitCastInst *BCU = dyn_cast<BitCastInst>(User)) {
-      RewriteBitCastUserOfAlloca(BCU, AI, NewElts);
-      if (BCU->use_empty()) BCU->eraseFromParent();
-      continue;
-    }
+/// FindElementAndOffset - Return the index of the element containing Offset
+/// within the specified type, which must be either a struct or an array.
+/// Sets T to the type of the element and Offset to the offset within that
+/// element.
+unsigned SROA::FindElementAndOffset(const Type *&T, uint64_t &Offset) {
+  unsigned Idx = 0;
+  if (const StructType *ST = dyn_cast<StructType>(T)) {
+    const StructLayout *Layout = TD->getStructLayout(ST);
+    Idx = Layout->getElementContainingOffset(Offset);
+    T = ST->getContainedType(Idx);
+    Offset -= Layout->getElementOffset(Idx);
+  } else {
+    const ArrayType *AT = dyn_cast<ArrayType>(T);
+    assert(AT && "unexpected type for scalar replacement");
+    T = AT->getElementType();
+    uint64_t EltSize = TD->getTypeAllocSize(T);
+    Idx = (unsigned)(Offset / EltSize);
+    Offset -= Idx * EltSize;
+  }
+  return Idx;
+}
 
-    if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(User)) {
-      // This must be memcpy/memmove/memset of the entire aggregate.
-      // Split into one per element.
-      RewriteMemIntrinUserOfAlloca(MI, BCInst, AI, NewElts);
-      continue;
+/// RewriteGEP - Check if this GEP instruction moves the pointer across
+/// elements of the alloca that are being split apart, and if so, rewrite
+/// the GEP to be relative to the new element.
+void SROA::RewriteGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t Offset,
+                      SmallVector<AllocaInst*, 32> &NewElts) {
+  Instruction *Val = GEPI;
+
+  uint64_t OldOffset = Offset;
+  SmallVector<Value*, 8> Indices(GEPI->op_begin() + 1, GEPI->op_end());
+  Offset += TD->getIndexedOffset(GEPI->getPointerOperandType(),
+                                 &Indices[0], Indices.size());
+
+  const Type *T = AI->getAllocatedType();
+  unsigned OldIdx = FindElementAndOffset(T, OldOffset);
+  if (GEPI->getOperand(0) == AI)
+    OldIdx = ~0U; // Force the GEP to be rewritten.
+
+  T = AI->getAllocatedType();
+  uint64_t EltOffset = Offset;
+  unsigned Idx = FindElementAndOffset(T, EltOffset);
+
+  // If this GEP moves the pointer across elements of the alloca that are
+  // being split, then it needs to be rewritten.
+  if (Idx != OldIdx) {
+    const Type *i32Ty = Type::getInt32Ty(AI->getContext());
+    SmallVector<Value*, 8> NewArgs;
+    NewArgs.push_back(Constant::getNullValue(i32Ty));
+    while (EltOffset != 0) {
+      unsigned EltIdx = FindElementAndOffset(T, EltOffset);
+      NewArgs.push_back(ConstantInt::get(i32Ty, EltIdx));
     }
-      
-    if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
-      // If this is a store of the entire alloca from an integer, rewrite it.
-      RewriteStoreUserOfWholeAlloca(SI, AI, NewElts);
-      continue;
+    if (NewArgs.size() > 1) {
+      Val = GetElementPtrInst::CreateInBounds(NewElts[Idx], NewArgs.begin(),
+                                              NewArgs.end(), "", GEPI);
+      Val->takeName(GEPI);
+      if (Val->getType() != GEPI->getType())
+        Val = new BitCastInst(Val, GEPI->getType(), Val->getNameStr(), GEPI);
+    } else {
+      Val = NewElts[Idx];
+      // Insert a new bitcast.  If the types match, it will be removed after
+      // handling all of its uses.
+      Val = new BitCastInst(Val, GEPI->getType(), Val->getNameStr(), GEPI);
+      Val->takeName(GEPI);
     }
 
-    if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
-      // If this is a load of the entire alloca to an integer, rewrite it.
-      RewriteLoadUserOfWholeAlloca(LI, AI, NewElts);
-      continue;
-    }
-    
-    // Otherwise it must be some other user of a gep of the first pointer.  Just
-    // leave these alone.
-    continue;
+    GEPI->replaceAllUsesWith(Val);
+    GEPI->eraseFromParent();
   }
+
+  RewriteForScalarRepl(Val, AI, Offset, NewElts);
 }
 
 /// RewriteMemIntrinUserOfAlloca - MI is a memcpy/memset/memmove from or to AI.
 /// Rewrite it to copy or set the elements of the scalarized memory.
-void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
+void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
                                         AllocaInst *AI,
                                         SmallVector<AllocaInst*, 32> &NewElts) {
   
@@ -740,13 +753,17 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
   LLVMContext &Context = MI->getContext();
   unsigned MemAlignment = MI->getAlignment();
   if (MemTransferInst *MTI = dyn_cast<MemTransferInst>(MI)) { // memmove/memcopy
-    if (BCInst == MTI->getRawDest())
+    if (Inst == MTI->getRawDest())
       OtherPtr = MTI->getRawSource();
     else {
-      assert(BCInst == MTI->getRawSource());
+      assert(Inst == MTI->getRawSource());
       OtherPtr = MTI->getRawDest();
     }
   }
+
+  // Keep track of the other intrinsic argument, so it can be removed if it
+  // is dead when the intrinsic is replaced.
+  Value *PossiblyDead = OtherPtr;
   
   // If there is an other pointer, we want to convert it to the same pointer
   // type as AI has, so we can GEP through it safely.
@@ -773,7 +790,7 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
   // Process each element of the aggregate.
   Value *TheFn = MI->getOperand(0);
   const Type *BytePtrTy = MI->getRawDest()->getType();
-  bool SROADest = MI->getRawDest() == BCInst;
+  bool SROADest = MI->getRawDest() == Inst;
   
   Constant *Zero = Constant::getNullValue(Type::getInt32Ty(MI->getContext()));
 
@@ -785,9 +802,9 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
     if (OtherPtr) {
       Value *Idx[2] = { Zero,
                       ConstantInt::get(Type::getInt32Ty(MI->getContext()), i) };
-      OtherElt = GetElementPtrInst::Create(OtherPtr, Idx, Idx + 2,
+      OtherElt = GetElementPtrInst::CreateInBounds(OtherPtr, Idx, Idx + 2,
                                            OtherPtr->getNameStr()+"."+Twine(i),
-                                           MI);
+                                                   MI);
       uint64_t EltOffset;
       const PointerType *OtherPtrTy = cast<PointerType>(OtherPtr->getType());
       if (const StructType *ST =
@@ -900,9 +917,11 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
     }
   }
   MI->eraseFromParent();
+  if (PossiblyDead)
+    RecursivelyDeleteTriviallyDeadInstructions(PossiblyDead);
 }
 
-/// RewriteStoreUserOfWholeAlloca - We found an store of an integer that
+/// RewriteStoreUserOfWholeAlloca - We found a store of an integer that
 /// overwrites the entire allocation.  Extract out the pieces of the stored
 /// integer and store them individually.
 void SROA::RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
@@ -910,15 +929,9 @@ void SROA::RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
   // Extract each element out of the integer according to its structure offset
   // and store the element value to the individual alloca.
   Value *SrcVal = SI->getOperand(0);
-  const Type *AllocaEltTy = AI->getType()->getElementType();
+  const Type *AllocaEltTy = AI->getAllocatedType();
   uint64_t AllocaSizeBits = TD->getTypeAllocSizeInBits(AllocaEltTy);
   
-  // If this isn't a store of an integer to the whole alloca, it may be a store
-  // to the first element.  Just ignore the store in this case and normal SROA
-  // will handle it.
-  if (!isa<IntegerType>(SrcVal->getType()) ||
-      TD->getTypeAllocSizeInBits(SrcVal->getType()) != AllocaSizeBits)
-    return;
   // Handle tail padding by extending the operand
   if (TD->getTypeSizeInBits(SrcVal->getType()) != AllocaSizeBits)
     SrcVal = new ZExtInst(SrcVal,
@@ -1026,22 +1039,15 @@ void SROA::RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
   SI->eraseFromParent();
 }
 
-/// RewriteLoadUserOfWholeAlloca - We found an load of the entire allocation to
+/// RewriteLoadUserOfWholeAlloca - We found a load of the entire allocation to
 /// an integer.  Load the individual pieces to form the aggregate value.
 void SROA::RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocaInst *AI,
                                         SmallVector<AllocaInst*, 32> &NewElts) {
   // Extract each element out of the NewElts according to its structure offset
   // and form the result value.
-  const Type *AllocaEltTy = AI->getType()->getElementType();
+  const Type *AllocaEltTy = AI->getAllocatedType();
   uint64_t AllocaSizeBits = TD->getTypeAllocSizeInBits(AllocaEltTy);
   
-  // If this isn't a load of the whole alloca to an integer, it may be a load
-  // of the first element.  Just ignore the load in this case and normal SROA
-  // will handle it.
-  if (!isa<IntegerType>(LI->getType()) ||
-      TD->getTypeAllocSizeInBits(LI->getType()) != AllocaSizeBits)
-    return;
-  
   DEBUG(errs() << "PROMOTING LOAD OF WHOLE ALLOCA: " << *AI << '\n' << *LI
                << '\n');
   
@@ -1115,7 +1121,6 @@ void SROA::RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocaInst *AI,
   LI->eraseFromParent();
 }
 
-
 /// HasPadding - Return true if the specified type has any structure or
 /// alignment padding, false otherwise.
 static bool HasPadding(const Type *Ty, const TargetData &TD) {
@@ -1160,20 +1165,15 @@ static bool HasPadding(const Type *Ty, const TargetData &TD) {
 /// isSafeStructAllocaToScalarRepl - Check to see if the specified allocation of
 /// an aggregate can be broken down into elements.  Return 0 if not, 3 if safe,
 /// or 1 if safe after canonicalization has been performed.
-///
 int SROA::isSafeAllocaToScalarRepl(AllocaInst *AI) {
   // Loop over the use list of the alloca.  We can only transform it if all of
   // the users are safe to transform.
   AllocaInfo Info;
   
-  for (Value::use_iterator I = AI->use_begin(), E = AI->use_end();
-       I != E; ++I) {
-    isSafeUseOfAllocation(cast<Instruction>(*I), AI, Info);
-    if (Info.isUnsafe) {
-      DEBUG(errs() << "Cannot transform: " << *AI << "\n  due to user: "
-                   << **I << '\n');
-      return 0;
-    }
+  isSafeForScalarRepl(AI, AI, 0, 0, Info);
+  if (Info.isUnsafe) {
+    DEBUG(errs() << "Cannot transform: " << *AI << '\n');
+    return 0;
   }
   
   // Okay, we know all the users are promotable.  If the aggregate is a memcpy
@@ -1182,14 +1182,14 @@ int SROA::isSafeAllocaToScalarRepl(AllocaInst *AI) {
   // types, but may actually be used.  In these cases, we refuse to promote the
   // struct.
   if (Info.isMemCpySrc && Info.isMemCpyDst &&
-      HasPadding(AI->getType()->getElementType(), *TD))
+      HasPadding(AI->getAllocatedType(), *TD))
     return 0;
 
   // If we require cleanup, return 1, otherwise return 3.
   return Info.needsCleanup ? 1 : 3;
 }
 
-/// CleanupGEP - GEP is used by an Alloca, which can be prompted after the GEP
+/// CleanupGEP - GEP is used by an Alloca, which can be promoted after the GEP
 /// is canonicalized here.
 void SROA::CleanupGEP(GetElementPtrInst *GEPI) {
   gep_type_iterator I = gep_type_begin(GEPI);
@@ -1219,15 +1219,15 @@ void SROA::CleanupGEP(GetElementPtrInst *GEPI) {
   // Insert the new GEP instructions, which are properly indexed.
   SmallVector<Value*, 8> Indices(GEPI->op_begin()+1, GEPI->op_end());
   Indices[1] = Constant::getNullValue(Type::getInt32Ty(GEPI->getContext()));
-  Value *ZeroIdx = GetElementPtrInst::Create(GEPI->getOperand(0),
-                                             Indices.begin(),
-                                             Indices.end(),
-                                             GEPI->getName()+".0", GEPI);
+  Value *ZeroIdx = GetElementPtrInst::CreateInBounds(GEPI->getOperand(0),
+                                                     Indices.begin(),
+                                                     Indices.end(),
+                                                     GEPI->getName()+".0",GEPI);
   Indices[1] = ConstantInt::get(Type::getInt32Ty(GEPI->getContext()), 1);
-  Value *OneIdx = GetElementPtrInst::Create(GEPI->getOperand(0),
-                                            Indices.begin(),
-                                            Indices.end(),
-                                            GEPI->getName()+".1", GEPI);
+  Value *OneIdx = GetElementPtrInst::CreateInBounds(GEPI->getOperand(0),
+                                                    Indices.begin(),
+                                                    Indices.end(),
+                                                    GEPI->getName()+".1", GEPI);
   // Replace all loads of the variable index GEP with loads from both
   // indexes and a select.
   while (!GEPI->use_empty()) {
@@ -1238,22 +1238,24 @@ void SROA::CleanupGEP(GetElementPtrInst *GEPI) {
     LI->replaceAllUsesWith(R);
     LI->eraseFromParent();
   }
-  GEPI->eraseFromParent();
 }
 
-
 /// CleanupAllocaUsers - If SROA reported that it can promote the specified
 /// allocation, but only if cleaned up, perform the cleanups required.
-void SROA::CleanupAllocaUsers(AllocaInst *AI) {
+void SROA::CleanupAllocaUsers(Value *V) {
   // At this point, we know that the end result will be SROA'd and promoted, so
   // we can insert ugly code if required so long as sroa+mem2reg will clean it
   // up.
-  for (Value::use_iterator UI = AI->use_begin(), E = AI->use_end();
+  for (Value::use_iterator UI = V->use_begin(), E = V->use_end();
        UI != E; ) {
     User *U = *UI++;
-    if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(U))
+    if (isa<BitCastInst>(U)) {
+      CleanupAllocaUsers(U);
+    } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(U)) {
       CleanupGEP(GEPI);
-    else {
+      CleanupAllocaUsers(GEPI);
+      if (GEPI->use_empty()) GEPI->eraseFromParent();
+    } else {
       Instruction *I = cast<Instruction>(U);
       SmallVector<DbgInfoIntrinsic *, 2> DbgInUses;
       if (!isa<StoreInst>(I) && OnlyUsedByDbgInfoIntrinsics(I, &DbgInUses)) {
@@ -1321,7 +1323,7 @@ static void MergeInType(const Type *In, uint64_t Offset, const Type *&VecTy,
 }
 
 /// CanConvertToScalar - V is a pointer.  If we can convert the pointee and all
-/// its accesses to use a to single vector type, return true, and set VecTy to
+/// its accesses to a single vector type, return true and set VecTy to
 /// the new type.  If we could convert the alloca into a single promotable
 /// integer, return true but set VecTy to VoidTy.  Further, if the use is not a
 /// completely trivial use that mem2reg could promote, set IsNotTrivial.  Offset
@@ -1329,7 +1331,6 @@ static void MergeInType(const Type *In, uint64_t Offset, const Type *&VecTy,
 ///
 /// If we see at least one access to the value that is as a vector type, set the
 /// SawVec flag.
-///
 bool SROA::CanConvertToScalar(Value *V, bool &IsNotTrivial, const Type *&VecTy,
                               bool &SawVec, uint64_t Offset,
                               unsigned AllocaSize) {
@@ -1370,7 +1371,7 @@ bool SROA::CanConvertToScalar(Value *V, bool &IsNotTrivial, const Type *&VecTy,
       
       // Compute the offset that this GEP adds to the pointer.
       SmallVector<Value*, 8> Indices(GEP->op_begin()+1, GEP->op_end());
-      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getOperand(0)->getType(),
+      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getPointerOperandType(),
                                                 &Indices[0], Indices.size());
       // See if all uses can be converted.
       if (!CanConvertToScalar(GEP, IsNotTrivial, VecTy, SawVec,Offset+GEPOffset,
@@ -1412,7 +1413,6 @@ bool SROA::CanConvertToScalar(Value *V, bool &IsNotTrivial, const Type *&VecTy,
   return true;
 }
 
-
 /// ConvertUsesToScalar - Convert all of the users of Ptr to use the new alloca
 /// directly.  This happens when we are converting an "integer union" to a
 /// single integer scalar, or when we are converting a "vector union" to a
@@ -1433,7 +1433,7 @@ void SROA::ConvertUsesToScalar(Value *Ptr, AllocaInst *NewAI, uint64_t Offset) {
     if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(User)) {
       // Compute the offset that this GEP adds to the pointer.
       SmallVector<Value*, 8> Indices(GEP->op_begin()+1, GEP->op_end());
-      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getOperand(0)->getType(),
+      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getPointerOperandType(),
                                                 &Indices[0], Indices.size());
       ConvertUsesToScalar(GEP, NewAI, Offset+GEPOffset*8);
       GEP->eraseFromParent();
@@ -1455,7 +1455,8 @@ void SROA::ConvertUsesToScalar(Value *Ptr, AllocaInst *NewAI, uint64_t Offset) {
     if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
       assert(SI->getOperand(0) != Ptr && "Consistency error!");
       // FIXME: Remove once builder has Twine API.
-      Value *Old = Builder.CreateLoad(NewAI, (NewAI->getName()+".in").str().c_str());
+      Value *Old = Builder.CreateLoad(NewAI,
+                                      (NewAI->getName()+".in").str().c_str());
       Value *New = ConvertScalar_InsertValue(SI->getOperand(0), Old, Offset,
                                              Builder);
       Builder.CreateStore(New, NewAI);
@@ -1480,7 +1481,8 @@ void SROA::ConvertUsesToScalar(Value *Ptr, AllocaInst *NewAI, uint64_t Offset) {
             APVal |= APVal << 8;
         
         // FIXME: Remove once builder has Twine API.
-        Value *Old = Builder.CreateLoad(NewAI, (NewAI->getName()+".in").str().c_str());
+        Value *Old = Builder.CreateLoad(NewAI,
+                                        (NewAI->getName()+".in").str().c_str());
         Value *New = ConvertScalar_InsertValue(
                                     ConstantInt::get(User->getContext(), APVal),
                                                Old, Offset, Builder);
@@ -1653,7 +1655,6 @@ Value *SROA::ConvertScalar_ExtractValue(Value *FromVal, const Type *ToType,
   return FromVal;
 }
 
-
 /// ConvertScalar_InsertValue - Insert the value "SV" into the existing integer
 /// or vector value "Old" at the offset specified by Offset.
 ///
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyHalfPowrLibCalls.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyHalfPowrLibCalls.cpp
index 13077fe..5acd6aa 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyHalfPowrLibCalls.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyHalfPowrLibCalls.cpp
@@ -88,7 +88,7 @@ InlineHalfPowrs(const std::vector<Instruction *> &HalfPowrs,
     if (!isa<ReturnInst>(Body->getTerminator()))
       break;
 
-    Instruction *NextInst = next(BasicBlock::iterator(Call));
+    Instruction *NextInst = llvm::next(BasicBlock::iterator(Call));
 
     // Inline the call, taking care of what code ends up where.
     NewBlock = SplitBlock(NextInst->getParent(), NextInst, this);
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
index f9b929c..0d03e55 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
@@ -128,8 +128,7 @@ public:
 
 /// CastToCStr - Return V if it is an i8*, otherwise cast it to i8*.
 Value *LibCallOptimization::CastToCStr(Value *V, IRBuilder<> &B) {
-  return
-        B.CreateBitCast(V, Type::getInt8PtrTy(*Context), "cstr");
+  return B.CreateBitCast(V, Type::getInt8PtrTy(*Context), "cstr");
 }
 
 /// EmitStrLen - Emit a call to the strlen function to the builder, for the
@@ -157,27 +156,25 @@ Value *LibCallOptimization::EmitStrLen(Value *Ptr, IRBuilder<> &B) {
 Value *LibCallOptimization::EmitMemCpy(Value *Dst, Value *Src, Value *Len,
                                        unsigned Align, IRBuilder<> &B) {
   Module *M = Caller->getParent();
-  Intrinsic::ID IID = Intrinsic::memcpy;
-  const Type *Tys[1];
-  Tys[0] = Len->getType();
-  Value *MemCpy = Intrinsic::getDeclaration(M, IID, Tys, 1);
-  return B.CreateCall4(MemCpy, CastToCStr(Dst, B), CastToCStr(Src, B), Len,
+  const Type *Ty = Len->getType();
+  Value *MemCpy = Intrinsic::getDeclaration(M, Intrinsic::memcpy, &Ty, 1);
+  Dst = CastToCStr(Dst, B);
+  Src = CastToCStr(Src, B);
+  return B.CreateCall4(MemCpy, Dst, Src, Len,
                        ConstantInt::get(Type::getInt32Ty(*Context), Align));
 }
 
-/// EmitMemMOve - Emit a call to the memmove function to the builder.  This
+/// EmitMemMove - Emit a call to the memmove function to the builder.  This
 /// always expects that the size has type 'intptr_t' and Dst/Src are pointers.
 Value *LibCallOptimization::EmitMemMove(Value *Dst, Value *Src, Value *Len,
 					unsigned Align, IRBuilder<> &B) {
   Module *M = Caller->getParent();
-  Intrinsic::ID IID = Intrinsic::memmove;
-  const Type *Tys[1];
-  Tys[0] = TD->getIntPtrType(*Context);
-  Value *MemMove = Intrinsic::getDeclaration(M, IID, Tys, 1);
-  Value *D = CastToCStr(Dst, B);
-  Value *S = CastToCStr(Src, B);
+  const Type *Ty = TD->getIntPtrType(*Context);
+  Value *MemMove = Intrinsic::getDeclaration(M, Intrinsic::memmove, &Ty, 1);
+  Dst = CastToCStr(Dst, B);
+  Src = CastToCStr(Src, B);
   Value *A = ConstantInt::get(Type::getInt32Ty(*Context), Align);
-  return B.CreateCall4(MemMove, D, S, Len, A);
+  return B.CreateCall4(MemMove, Dst, Src, Len, A);
 }
 
 /// EmitMemChr - Emit a call to the memchr function.  This assumes that Ptr is
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/BasicBlockUtils.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/BasicBlockUtils.cpp
index 2974592..2962e84 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/BasicBlockUtils.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/BasicBlockUtils.cpp
@@ -16,7 +16,6 @@
 #include "llvm/Function.h"
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/Constant.h"
 #include "llvm/Type.h"
 #include "llvm/Analysis/AliasAnalysis.h"
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp
index aef0f5f..7a37aa3 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp
@@ -603,3 +603,65 @@ bool llvm::OnlyUsedByDbgInfoIntrinsics(Instruction *I,
   return true;
 }
 
+/// EliminateDuplicatePHINodes - Check for and eliminate duplicate PHI
+/// nodes in this block. This doesn't try to be clever about PHI nodes
+/// which differ only in the order of the incoming values, but instcombine
+/// orders them so it usually won't matter.
+///
+bool llvm::EliminateDuplicatePHINodes(BasicBlock *BB) {
+  bool Changed = false;
+
+  // This implementation doesn't currently consider undef operands
+  // specially. Theoretically, two phis which are identical except for
+  // one having an undef where the other doesn't could be collapsed.
+
+  // Map from PHI hash values to PHI nodes. If multiple PHIs have
+  // the same hash value, the element is the first PHI in the
+  // linked list in CollisionMap.
+  DenseMap<uintptr_t, PHINode *> HashMap;
+
+  // Maintain linked lists of PHI nodes with common hash values.
+  DenseMap<PHINode *, PHINode *> CollisionMap;
+
+  // Examine each PHI.
+  for (BasicBlock::iterator I = BB->begin();
+       PHINode *PN = dyn_cast<PHINode>(I++); ) {
+    // Compute a hash value on the operands. Instcombine will likely have sorted
+    // them, which helps expose duplicates, but we have to check all the
+    // operands to be safe in case instcombine hasn't run.
+    uintptr_t Hash = 0;
+    for (User::op_iterator I = PN->op_begin(), E = PN->op_end(); I != E; ++I) {
+      // This hash algorithm is quite weak as hash functions go, but it seems
+      // to do a good enough job for this particular purpose, and is very quick.
+      Hash ^= reinterpret_cast<uintptr_t>(static_cast<Value *>(*I));
+      Hash = (Hash << 7) | (Hash >> (sizeof(uintptr_t) * CHAR_BIT - 7));
+    }
+    // If we've never seen this hash value before, it's a unique PHI.
+    std::pair<DenseMap<uintptr_t, PHINode *>::iterator, bool> Pair =
+      HashMap.insert(std::make_pair(Hash, PN));
+    if (Pair.second) continue;
+    // Otherwise it's either a duplicate or a hash collision.
+    for (PHINode *OtherPN = Pair.first->second; ; ) {
+      if (OtherPN->isIdenticalTo(PN)) {
+        // A duplicate. Replace this PHI with its duplicate.
+        PN->replaceAllUsesWith(OtherPN);
+        PN->eraseFromParent();
+        Changed = true;
+        break;
+      }
+      // A non-duplicate hash collision.
+      DenseMap<PHINode *, PHINode *>::iterator I = CollisionMap.find(OtherPN);
+      if (I == CollisionMap.end()) {
+        // Set this PHI to be the head of the linked list of colliding PHIs.
+        PHINode *Old = Pair.first->second;
+        Pair.first->second = PN;
+        CollisionMap[PN] = Old;
+        break;
+      }
+      // Proceed to the next PHI in the list.
+      OtherPN = I->second;
+    }
+  }
+
+  return Changed;
+}
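
The function moved into Local.cpp above pairs a deliberately weak XOR-rotate hash with an exact `isIdenticalTo` check, chaining same-hash PHIs through a second map. The scheme can be sketched standalone like this; `Phi`, `hashOps`, and `countDuplicates` are illustrative stand-ins, not LLVM API:

```cpp
#include <cassert>
#include <climits>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for a PHI node: just its operand pointers.
struct Phi {
    std::vector<void*> ops;
    bool identicalTo(const Phi& o) const { return ops == o.ops; }
};

// Same weak hash as the patch: XOR each operand pointer in, then
// rotate left by 7 bits so operand order still affects the result.
uintptr_t hashOps(const Phi& p) {
    uintptr_t h = 0;
    for (void* v : p.ops) {
        h ^= reinterpret_cast<uintptr_t>(v);
        h = (h << 7) | (h >> (sizeof(uintptr_t) * CHAR_BIT - 7));
    }
    return h;
}

// Count how many entries duplicate an earlier one, mirroring the
// hash-map + collision-chain walk in the patch: a hash hit is only a
// candidate; the exact identicalTo() check decides.
int countDuplicates(const std::vector<Phi*>& phis) {
    std::unordered_map<uintptr_t, Phi*> first;   // hash -> first PHI seen
    std::unordered_map<Phi*, Phi*> collisions;   // chains of same-hash PHIs
    int dups = 0;
    for (Phi* pn : phis) {
        auto ins = first.insert({hashOps(*pn), pn});
        if (ins.second) continue;                // never-seen hash: unique
        for (Phi* other = ins.first->second; ; ) {
            if (other->identicalTo(*pn)) { ++dups; break; }  // real duplicate
            auto it = collisions.find(other);
            if (it == collisions.end()) {        // genuine collision:
                collisions[pn] = ins.first->second;  // push pn at chain head
                ins.first->second = pn;
                break;
            }
            other = it->second;                  // walk the chain
        }
    }
    return dups;
}
```

Because equality is always re-verified, a hash collision can never merge two distinct PHIs; the hash only prunes comparisons.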
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LowerSwitch.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LowerSwitch.cpp
index 8c18b59..743bb6e 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LowerSwitch.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LowerSwitch.cpp
@@ -244,7 +244,7 @@ unsigned LowerSwitch::Clusterify(CaseVector& Cases, SwitchInst *SI) {
 
   // Merge case into clusters
   if (Cases.size()>=2)
-    for (CaseItr I=Cases.begin(), J=next(Cases.begin()); J!=Cases.end(); ) {
+    for (CaseItr I=Cases.begin(), J=llvm::next(Cases.begin()); J!=Cases.end(); ) {
       int64_t nextValue = cast<ConstantInt>(J->Low)->getSExtValue();
       int64_t currentValue = cast<ConstantInt>(I->High)->getSExtValue();
       BasicBlock* nextBB = J->BB;
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
index 8a07c35..ba41bf9 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
@@ -295,10 +295,14 @@ Value *SSAUpdater::GetValueAtEndOfBlockInternal(BasicBlock *BB) {
       InsertedVal = SingularValue;
     }
 
+    // Either path through the 'if' should have set InsertedVal -> SingularValue.
+    assert((InsertedVal == SingularValue || isa<UndefValue>(InsertedVal)) &&
+           "RAUW didn't change InsertedVal to be SingularVal");
+
     // Drop the entries we added in IncomingPredInfo to restore the stack.
     IncomingPredInfo.erase(IncomingPredInfo.begin()+FirstPredInfoEntry,
                            IncomingPredInfo.end());
-    return InsertedVal;
+    return SingularValue;
   }
 
   // Otherwise, we do need a PHI: insert one now if we don't already have one.
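
The SSAUpdater change above returns `SingularValue` instead of `InsertedVal`, because by this point the placeholder may have had all its uses replaced and been destroyed. A toy illustration of that convention (the names and the `operands` list are illustrative, not the SSAUpdater API):

```cpp
#include <cassert>
#include <vector>

struct Value { int id; };

// Operand slots that currently reference some Value.
static std::vector<Value*> operands;

// Replace every use of From with To, then delete From. After this,
// From is a dangling handle, so the caller must receive To -- the
// canonical value -- never the stale pointer it passed in.
Value* replaceAndErase(Value* From, Value* To) {
    for (Value*& op : operands)
        if (op == From) op = To;
    delete From;
    return To;
}
```

Returning the old handle here would hand the caller freed memory, which is exactly the hazard the added assert guards against.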
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
index 89b0bd9..d7ca45e 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
@@ -1589,69 +1589,6 @@ static bool SimplifyCondBranchToCondBranch(BranchInst *PBI, BranchInst *BI) {
   return true;
 }
 
-/// EliminateDuplicatePHINodes - Check for and eliminate duplicate PHI
-/// nodes in this block. This doesn't try to be clever about PHI nodes
-/// which differ only in the order of the incoming values, but instcombine
-/// orders them so it usually won't matter.
-///
-bool llvm::EliminateDuplicatePHINodes(BasicBlock *BB) {
-  bool Changed = false;
-  
-  // This implementation doesn't currently consider undef operands
-  // specially. Theroetically, two phis which are identical except for
-  // one having an undef where the other doesn't could be collapsed.
-
-  // Map from PHI hash values to PHI nodes. If multiple PHIs have
-  // the same hash value, the element is the first PHI in the
-  // linked list in CollisionMap.
-  DenseMap<uintptr_t, PHINode *> HashMap;
-
-  // Maintain linked lists of PHI nodes with common hash values.
-  DenseMap<PHINode *, PHINode *> CollisionMap;
-
-  // Examine each PHI.
-  for (BasicBlock::iterator I = BB->begin();
-       PHINode *PN = dyn_cast<PHINode>(I++); ) {
-    // Compute a hash value on the operands. Instcombine will likely have sorted
-    // them, which helps expose duplicates, but we have to check all the
-    // operands to be safe in case instcombine hasn't run.
-    uintptr_t Hash = 0;
-    for (User::op_iterator I = PN->op_begin(), E = PN->op_end(); I != E; ++I) {
-      // This hash algorithm is quite weak as hash functions go, but it seems
-      // to do a good enough job for this particular purpose, and is very quick.
-      Hash ^= reinterpret_cast<uintptr_t>(static_cast<Value *>(*I));
-      Hash = (Hash << 7) | (Hash >> (sizeof(uintptr_t) * CHAR_BIT - 7));
-    }
-    // If we've never seen this hash value before, it's a unique PHI.
-    std::pair<DenseMap<uintptr_t, PHINode *>::iterator, bool> Pair =
-      HashMap.insert(std::make_pair(Hash, PN));
-    if (Pair.second) continue;
-    // Otherwise it's either a duplicate or a hash collision.
-    for (PHINode *OtherPN = Pair.first->second; ; ) {
-      if (OtherPN->isIdenticalTo(PN)) {
-        // A duplicate. Replace this PHI with its duplicate.
-        PN->replaceAllUsesWith(OtherPN);
-        PN->eraseFromParent();
-        Changed = true;
-        break;
-      }
-      // A non-duplicate hash collision.
-      DenseMap<PHINode *, PHINode *>::iterator I = CollisionMap.find(OtherPN);
-      if (I == CollisionMap.end()) {
-        // Set this PHI to be the head of the linked list of colliding PHIs.
-        PHINode *Old = Pair.first->second;
-        Pair.first->second = PN;
-        CollisionMap[PN] = Old;
-        break;
-      }
-      // Procede to the next PHI in the list.
-      OtherPN = I->second;
-    }
-  }
-
-  return Changed;
-}
-
 /// SimplifyCFG - This function is used to do simplification of a CFG.  For
 /// example, it adjusts branches to branches to eliminate the extra hop, it
 /// eliminates unreachable basic blocks, and does other "peephole" optimization
diff --git a/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp b/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
index 82d7914..c765d96 100644
--- a/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
@@ -813,6 +813,11 @@ void SlotTracker::CreateFunctionSlot(const Value *V) {
 void SlotTracker::CreateMetadataSlot(const MDNode *N) {
   assert(N && "Can't insert a null Value into SlotTracker!");
 
+  // Don't insert if N contains an instruction.
+  for (unsigned i = 0, e = N->getNumElements(); i != e; ++i)
+    if (N->getElement(i) && isa<Instruction>(N->getElement(i)))
+      return;
+
   ValueMap::iterator I = mdnMap.find(N);
   if (I != mdnMap.end())
     return;
@@ -1227,6 +1232,25 @@ static void WriteAsOperandInternal(raw_ostream &Out, const Value *V,
   }
 
   if (const MDNode *N = dyn_cast<MDNode>(V)) {
+    if (Machine->getMetadataSlot(N) == -1) {
+      // Print metadata inline, not via slot reference number.
+      Out << "!{";
+      for (unsigned mi = 0, me = N->getNumElements(); mi != me; ++mi) {
+        const Value *Val = N->getElement(mi);
+        if (!Val)
+          Out << "null";
+        else {
+          TypePrinter->print(N->getElement(0)->getType(), Out);
+          Out << ' ';
+          WriteAsOperandInternal(Out, N->getElement(0), TypePrinter, Machine);
+        }
+        if (mi + 1 != me)
+          Out << ", ";
+      }
+      Out << '}';
+      return;
+    }
+  
     Out << '!' << Machine->getMetadataSlot(N);
     return;
   }
@@ -1636,6 +1660,7 @@ void AssemblyWriter::printFunction(const Function *F) {
   case CallingConv::ARM_APCS:     Out << "arm_apcscc "; break;
   case CallingConv::ARM_AAPCS:    Out << "arm_aapcscc "; break;
   case CallingConv::ARM_AAPCS_VFP:Out << "arm_aapcs_vfpcc "; break;
+  case CallingConv::MSP430_INTR:  Out << "msp430_intrcc "; break;
   default: Out << "cc" << F->getCallingConv() << " "; break;
   }
 
@@ -1903,6 +1928,7 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
     case CallingConv::ARM_APCS:     Out << " arm_apcscc "; break;
     case CallingConv::ARM_AAPCS:    Out << " arm_aapcscc "; break;
     case CallingConv::ARM_AAPCS_VFP:Out << " arm_aapcs_vfpcc "; break;
+    case CallingConv::MSP430_INTR:  Out << " msp430_intrcc "; break;
     default: Out << " cc" << CI->getCallingConv(); break;
     }
 
@@ -1953,6 +1979,7 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
     case CallingConv::ARM_APCS:     Out << " arm_apcscc "; break;
     case CallingConv::ARM_AAPCS:    Out << " arm_aapcscc "; break;
     case CallingConv::ARM_AAPCS_VFP:Out << " arm_aapcs_vfpcc "; break;
+    case CallingConv::MSP430_INTR:  Out << " msp430_intrcc "; break;
     default: Out << " cc" << II->getCallingConv(); break;
     }
 
diff --git a/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp b/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp
index 23d0557..c7f7f53 100644
--- a/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/BasicBlock.cpp
@@ -262,7 +262,7 @@ BasicBlock *BasicBlock::splitBasicBlock(iterator I, const Twine &BBName) {
   assert(I != InstList.end() &&
          "Trying to get me to create degenerate basic block!");
 
-  BasicBlock *InsertBefore = next(Function::iterator(this))
+  BasicBlock *InsertBefore = llvm::next(Function::iterator(this))
                                .getNodePtrUnchecked();
   BasicBlock *New = BasicBlock::Create(getContext(), BBName,
                                        getParent(), InsertBefore);
diff --git a/libclamav/c++/llvm/lib/VMCore/Constants.cpp b/libclamav/c++/llvm/lib/VMCore/Constants.cpp
index c622558..a62f75b 100644
--- a/libclamav/c++/llvm/lib/VMCore/Constants.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Constants.cpp
@@ -1560,7 +1560,7 @@ Constant *ConstantExpr::getGetElementPtrTy(const Type *ReqTy, Constant *C,
 
 Constant *ConstantExpr::getInBoundsGetElementPtrTy(const Type *ReqTy,
                                                    Constant *C,
-                                                   Value* const *Idxs,
+                                                   Value *const *Idxs,
                                                    unsigned NumIdx) {
   assert(GetElementPtrInst::getIndexedType(C->getType(), Idxs,
                                            Idxs+NumIdx) ==
diff --git a/libclamav/c++/llvm/lib/VMCore/Function.cpp b/libclamav/c++/llvm/lib/VMCore/Function.cpp
index 6cf2c81..88e1fe8 100644
--- a/libclamav/c++/llvm/lib/VMCore/Function.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Function.cpp
@@ -77,6 +77,13 @@ bool Argument::hasByValAttr() const {
   return getParent()->paramHasAttr(getArgNo()+1, Attribute::ByVal);
 }
 
+/// hasNestAttr - Return true if this argument has the nest attribute on
+/// it in its containing function.
+bool Argument::hasNestAttr() const {
+  if (!isa<PointerType>(getType())) return false;
+  return getParent()->paramHasAttr(getArgNo()+1, Attribute::Nest);
+}
+
 /// hasNoAliasAttr - Return true if this argument has the noalias attribute on
 /// it in its containing function.
 bool Argument::hasNoAliasAttr() const {
diff --git a/libclamav/c++/llvm/lib/VMCore/Metadata.cpp b/libclamav/c++/llvm/lib/VMCore/Metadata.cpp
index 854f86c..b80b6bf 100644
--- a/libclamav/c++/llvm/lib/VMCore/Metadata.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Metadata.cpp
@@ -42,7 +42,7 @@ MDString *MDString::get(LLVMContext &Context, const char *Str) {
   StringMapEntry<MDString *> &Entry = 
     pImpl->MDStringCache.GetOrCreateValue(Str ? StringRef(Str) : StringRef());
   MDString *&S = Entry.getValue();
-  if (!S) new MDString(Context, Entry.getKey());
+  if (!S) S = new MDString(Context, Entry.getKey());
   return S;
 }
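
The one-character Metadata.cpp fix above is the classic get-or-create bug: `GetOrCreateValue` hands back a reference to the cached slot, but the old code allocated the `MDString` without storing it, so every first lookup returned null and leaked. A minimal sketch of both versions using `std::unordered_map` in place of LLVM's `StringMap` (names are illustrative):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

struct MDStr { std::string text; };

static std::unordered_map<std::string, MDStr*> cache;

// Buggy version (as before the patch): the new object is never stored
// into the slot, so the caller always receives null and memory leaks.
MDStr* getBuggy(const std::string& key) {
    MDStr*& slot = cache[key];        // default-inserts nullptr
    if (!slot) new MDStr{key};        // result of `new` is dropped
    return slot;                      // still null on first lookup
}

// Fixed version (as in the patch): assign through the slot reference
// so the cache actually memoizes the allocation.
MDStr* getFixed(const std::string& key) {
    MDStr*& slot = cache[key];
    if (!slot) slot = new MDStr{key};
    return slot;
}
```

The fix works because `slot` is a reference into the map; writing through it both caches the object and makes the function return it.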
 
diff --git a/libclamav/c++/llvm/lib/VMCore/PassManager.cpp b/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
index ae418a0..52e8a82 100644
--- a/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
@@ -738,9 +738,15 @@ void PMDataManager::removeNotPreservedAnalysis(Pass *P) {
       std::map<AnalysisID, Pass *>::iterator Info = I++;
       if (!dynamic_cast<ImmutablePass*>(Info->second) &&
           std::find(PreservedSet.begin(), PreservedSet.end(), Info->first) == 
-             PreservedSet.end())
+             PreservedSet.end()) {
         // Remove this analysis
+        if (PassDebugging >= Details) {
+          Pass *S = Info->second;
+          errs() << " -- '" <<  P->getPassName() << "' is not preserving '";
+          errs() << S->getPassName() << "'\n";
+        }
         InheritedAnalysis[Index]->erase(Info);
+      }
     }
   }
 }
@@ -1391,8 +1397,7 @@ MPPassManager::runOnModule(Module &M) {
   for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
     ModulePass *MP = getContainedPass(Index);
 
-    dumpPassInfo(MP, EXECUTION_MSG, ON_MODULE_MSG,
-                 M.getModuleIdentifier().c_str());
+    dumpPassInfo(MP, EXECUTION_MSG, ON_MODULE_MSG, M.getModuleIdentifier());
     dumpRequiredSet(MP);
 
     initializeAnalysisImpl(MP);
@@ -1406,13 +1411,13 @@ MPPassManager::runOnModule(Module &M) {
 
     if (Changed) 
       dumpPassInfo(MP, MODIFICATION_MSG, ON_MODULE_MSG,
-                   M.getModuleIdentifier().c_str());
+                   M.getModuleIdentifier());
     dumpPreservedSet(MP);
     
     verifyPreservedAnalysis(MP);
     removeNotPreservedAnalysis(MP);
     recordAvailableAnalysis(MP);
-    removeDeadPasses(MP, M.getModuleIdentifier().c_str(), ON_MODULE_MSG);
+    removeDeadPasses(MP, M.getModuleIdentifier(), ON_MODULE_MSG);
   }
 
   // Finalize on-the-fly passes
diff --git a/libclamav/c++/llvm/test/Analysis/BasicAA/modref.ll b/libclamav/c++/llvm/test/Analysis/BasicAA/modref.ll
index 3f642cf..4a61636 100644
--- a/libclamav/c++/llvm/test/Analysis/BasicAA/modref.ll
+++ b/libclamav/c++/llvm/test/Analysis/BasicAA/modref.ll
@@ -60,8 +60,8 @@ define i8 @test2a(i8* %P) {
   call void @llvm.memset.i8(i8* %P, i8 2, i8 127, i32 0)
   %A = load i8* %P2
   ret i8 %A
-; CHECK: %A = load i8* %P2
-; CHECK: ret i8 %A
+; CHECK-NOT: load
+; CHECK: ret i8 2
 }
 
 define void @test3(i8* %P, i8 %X) {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/scev-aa.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/scev-aa.ll
index 371d07c..e07aca2 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/scev-aa.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/scev-aa.ll
@@ -2,7 +2,7 @@
 ; RUN:   |& FileCheck %s
 
 ; At the time of this writing, -basicaa only misses the example of the form
-; A[i+(j+1)] != A[i+j].  However, it does get A[(i+j)+1] != A[i+j].
+; A[i+(j+1)] != A[i+j], which can arise from multi-dimensional array references.
 
 target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64"
 
diff --git a/libclamav/c++/llvm/test/CMakeLists.txt b/libclamav/c++/llvm/test/CMakeLists.txt
index d7037ab..5ad48ef 100644
--- a/libclamav/c++/llvm/test/CMakeLists.txt
+++ b/libclamav/c++/llvm/test/CMakeLists.txt
@@ -31,6 +31,8 @@ if(PYTHONINTERP_FOUND)
                 ${CMAKE_CURRENT_BINARY_DIR}/Unit/lit.site.cfg
     COMMAND ${PYTHON_EXECUTABLE}
                 ${LLVM_SOURCE_DIR}/utils/lit/lit.py
+                --param llvm_site_config=${CMAKE_CURRENT_BINARY_DIR}/lit.site.cfg
+                --param llvm_unit_site_config=${CMAKE_CURRENT_BINARY_DIR}/Unit/lit.site.cfg
                 -sv
                 ${CMAKE_CURRENT_BINARY_DIR}
                 DEPENDS
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-30-LiveVariablesBug.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-30-LiveVariablesBug.ll
new file mode 100644
index 0000000..efe74cf
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-11-30-LiveVariablesBug.ll
@@ -0,0 +1,41 @@
+; RUN: llc -mtriple=armv7-eabi -mcpu=cortex-a8 < %s
+; PR5614
+
+%"als" = type { i32 (...)** }
+%"av" = type { %"als" }
+%"c" = type { %"lsm", %"Vec3", %"av"*, float, i8, float, %"lsm", i8, %"Vec3", %"Vec3", %"Vec3", float, float, float, %"Vec3", %"Vec3" }
+%"lsm" = type { %"als", %"Vec3", %"Vec3", %"Vec3", %"Vec3" }
+%"Vec3" = type { float, float, float }
+
+define arm_aapcs_vfpcc void @foo(%"c"* %this, %"Vec3"* nocapture %adjustment) {
+entry:
+  switch i32 undef, label %return [
+    i32 1, label %bb
+    i32 2, label %bb72
+    i32 3, label %bb31
+    i32 4, label %bb79
+    i32 5, label %bb104
+  ]
+
+bb:                                               ; preds = %entry
+  ret void
+
+bb31:                                             ; preds = %entry
+  %0 = call arm_aapcs_vfpcc  %"Vec3" undef(%"lsm"* undef) ; <%"Vec3"> [#uses=1]
+  %mrv_gr69 = extractvalue %"Vec3" %0, 1 ; <float> [#uses=1]
+  %1 = fsub float %mrv_gr69, undef                ; <float> [#uses=1]
+  store float %1, float* undef, align 4
+  ret void
+
+bb72:                                             ; preds = %entry
+  ret void
+
+bb79:                                             ; preds = %entry
+  ret void
+
+bb104:                                            ; preds = %entry
+  ret void
+
+return:                                           ; preds = %entry
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-12-02-vtrn-undef.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-12-02-vtrn-undef.ll
new file mode 100644
index 0000000..a737591
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-12-02-vtrn-undef.ll
@@ -0,0 +1,19 @@
+; RUN: llc -mcpu=cortex-a8 < %s | FileCheck %s
+
+target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32-n32"
+target triple = "armv7-apple-darwin10"
+
+%struct.int16x8_t = type { <8 x i16> }
+%struct.int16x8x2_t = type { [2 x %struct.int16x8_t] }
+
+define arm_apcscc void @t(%struct.int16x8x2_t* noalias nocapture sret %agg.result, <8 x i16> %tmp.0, %struct.int16x8x2_t* nocapture %dst) nounwind {
+entry:
+;CHECK: vtrn.16
+  %0 = shufflevector <8 x i16> %tmp.0, <8 x i16> undef, <8 x i32> <i32 0, i32 0, i32 2, i32 2, i32 4, i32 4, i32 6, i32 6>
+  %1 = shufflevector <8 x i16> %tmp.0, <8 x i16> undef, <8 x i32> <i32 1, i32 1, i32 3, i32 3, i32 5, i32 5, i32 7, i32 7>
+  %agg.result1218.0 = getelementptr %struct.int16x8x2_t* %agg.result, i32 0, i32 0, i32 0, i32 0 ; <<8 x i16>*>
+  store <8 x i16> %0, <8 x i16>* %agg.result1218.0, align 16
+  %agg.result12.1.0 = getelementptr %struct.int16x8x2_t* %agg.result, i32 0, i32 0, i32 1, i32 0 ; <<8 x i16>*>
+  store <8 x i16> %1, <8 x i16>* %agg.result12.1.0, align 16
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/inlineasm3.ll b/libclamav/c++/llvm/test/CodeGen/ARM/inlineasm3.ll
new file mode 100644
index 0000000..5ebf2fb
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/inlineasm3.ll
@@ -0,0 +1,13 @@
+; RUN: llc < %s -march=arm -mattr=+neon | FileCheck %s
+
+%struct.int32x4_t = type { <4 x i32> }
+
+define arm_apcscc void @t() nounwind {
+entry:
+; CHECK: vmov.I64 q15, #0
+; CHECK: vmov.32 d30[0], r0
+; CHECK: vmov q0, q15
+  %tmp = alloca %struct.int32x4_t, align 16
+  call void asm sideeffect "vmov.I64 q15, #0\0Avmov.32 d30[0], $1\0Avmov ${0:q}, q15\0A", "=*w,r,~{d31},~{d30}"(%struct.int32x4_t* %tmp, i32 8192) nounwind
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2009-11-20-NewNode.ll b/libclamav/c++/llvm/test/CodeGen/Generic/2009-11-20-NewNode.ll
deleted file mode 100644
index cc499f0..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Generic/2009-11-20-NewNode.ll
+++ /dev/null
@@ -1,38 +0,0 @@
-; RUN: llc -march=msp430 < %s
-; RUN: llc -march=pic16 < %s
-; PR5558
-; XFAIL: *
-
-define i64 @_strtoll_r(i16 %base) nounwind {
-entry:
-  br i1 undef, label %if.then, label %if.end27
-
-if.then:                                          ; preds = %do.end
-  br label %if.end27
-
-if.end27:                                         ; preds = %if.then, %do.end
-  %cond66 = select i1 undef, i64 -9223372036854775808, i64 9223372036854775807 ; <i64> [#uses=3]
-  %conv69 = sext i16 %base to i64                 ; <i64> [#uses=1]
-  %div = udiv i64 %cond66, %conv69                ; <i64> [#uses=1]
-  br label %for.cond
-
-for.cond:                                         ; preds = %if.end116, %if.end27
-  br i1 undef, label %if.then152, label %if.then93
-
-if.then93:                                        ; preds = %for.cond
-  br i1 undef, label %if.end116, label %if.then152
-
-if.end116:                                        ; preds = %if.then93
-  %cmp123 = icmp ugt i64 undef, %div              ; <i1> [#uses=1]
-  %or.cond = or i1 undef, %cmp123                 ; <i1> [#uses=0]
-  br label %for.cond
-
-if.then152:                                       ; preds = %if.then93, %for.cond
-  br i1 undef, label %if.end182, label %if.then172
-
-if.then172:                                       ; preds = %if.then152
-  ret i64 %cond66
-
-if.end182:                                        ; preds = %if.then152
-  ret i64 %cond66
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-12-01-LoopIVUsers.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-12-01-LoopIVUsers.ll
new file mode 100644
index 0000000..79ad0a9
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2009-12-01-LoopIVUsers.ll
@@ -0,0 +1,128 @@
+; RUN: opt < %s -std-compile-opts | \
+; RUN:   llc -mtriple=thumbv7-apple-darwin10 -mattr=+neon | FileCheck %s
+
+define arm_apcscc void @fred(i32 %three_by_three, i8* %in, double %dt1, i32 %x_size, i32 %y_size, i8* %bp) nounwind {
+entry:
+; -- The loop following the load should only use a single add-immediate
+;    instruction.
+; CHECK: ldr.64
+; CHECK: adds r{{[0-9]+}}, #1
+; CHECK-NOT: adds r{{[0-9]+}}, #1
+; CHECK: subsections_via_symbols
+
+
+  %three_by_three_addr = alloca i32               ; <i32*> [#uses=2]
+  %in_addr = alloca i8*                           ; <i8**> [#uses=2]
+  %dt_addr = alloca float                         ; <float*> [#uses=4]
+  %x_size_addr = alloca i32                       ; <i32*> [#uses=2]
+  %y_size_addr = alloca i32                       ; <i32*> [#uses=1]
+  %bp_addr = alloca i8*                           ; <i8**> [#uses=1]
+  %tmp_image = alloca i8*                         ; <i8**> [#uses=0]
+  %out = alloca i8*                               ; <i8**> [#uses=1]
+  %cp = alloca i8*                                ; <i8**> [#uses=0]
+  %dpt = alloca i8*                               ; <i8**> [#uses=4]
+  %dp = alloca i8*                                ; <i8**> [#uses=2]
+  %ip = alloca i8*                                ; <i8**> [#uses=0]
+  %centre = alloca i32                            ; <i32*> [#uses=0]
+  %tmp = alloca i32                               ; <i32*> [#uses=0]
+  %brightness = alloca i32                        ; <i32*> [#uses=0]
+  %area = alloca i32                              ; <i32*> [#uses=0]
+  %y = alloca i32                                 ; <i32*> [#uses=0]
+  %x = alloca i32                                 ; <i32*> [#uses=2]
+  %j = alloca i32                                 ; <i32*> [#uses=6]
+  %i = alloca i32                                 ; <i32*> [#uses=1]
+  %mask_size = alloca i32                         ; <i32*> [#uses=5]
+  %increment = alloca i32                         ; <i32*> [#uses=1]
+  %n_max = alloca i32                             ; <i32*> [#uses=4]
+  %temp = alloca float                            ; <float*> [#uses=1]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  store i32 %three_by_three, i32* %three_by_three_addr
+  store i8* %in, i8** %in_addr
+  %dt = fptrunc double %dt1 to float              ; <float> [#uses=1]
+  store float %dt, float* %dt_addr
+  store i32 %x_size, i32* %x_size_addr
+  store i32 %y_size, i32* %y_size_addr
+  store i8* %bp, i8** %bp_addr
+  %0 = load i8** %in_addr, align 4                ; <i8*> [#uses=1]
+  store i8* %0, i8** %out, align 4
+  %1 = call arm_apcscc  i32 (...)* @foo() nounwind ; <i32> [#uses=1]
+  store i32 %1, i32* %i, align 4
+  %2 = load i32* %three_by_three_addr, align 4    ; <i32> [#uses=1]
+  %3 = icmp eq i32 %2, 0                          ; <i1> [#uses=1]
+  br i1 %3, label %bb, label %bb2
+
+bb:                                               ; preds = %entry
+  %4 = load float* %dt_addr, align 4              ; <float> [#uses=1]
+  %5 = fpext float %4 to double                   ; <double> [#uses=1]
+  %6 = fmul double %5, 1.500000e+00               ; <double> [#uses=1]
+  %7 = fptosi double %6 to i32                    ; <i32> [#uses=1]
+  %8 = add nsw i32 %7, 1                          ; <i32> [#uses=1]
+  store i32 %8, i32* %mask_size, align 4
+  br label %bb3
+
+bb2:                                              ; preds = %entry
+  store i32 1, i32* %mask_size, align 4
+  br label %bb3
+
+bb3:                                              ; preds = %bb2, %bb
+  %9 = load i32* %mask_size, align 4              ; <i32> [#uses=1]
+  %10 = mul i32 %9, 2                             ; <i32> [#uses=1]
+  %11 = add nsw i32 %10, 1                        ; <i32> [#uses=1]
+  store i32 %11, i32* %n_max, align 4
+  %12 = load i32* %x_size_addr, align 4           ; <i32> [#uses=1]
+  %13 = load i32* %n_max, align 4                 ; <i32> [#uses=1]
+  %14 = sub i32 %12, %13                          ; <i32> [#uses=1]
+  store i32 %14, i32* %increment, align 4
+  %15 = load i32* %n_max, align 4                 ; <i32> [#uses=1]
+  %16 = load i32* %n_max, align 4                 ; <i32> [#uses=1]
+  %17 = mul i32 %15, %16                          ; <i32> [#uses=1]
+  %18 = call arm_apcscc  noalias i8* @malloc(i32 %17) nounwind ; <i8*> [#uses=1]
+  store i8* %18, i8** %dp, align 4
+  %19 = load i8** %dp, align 4                    ; <i8*> [#uses=1]
+  store i8* %19, i8** %dpt, align 4
+  %20 = load float* %dt_addr, align 4             ; <float> [#uses=1]
+  %21 = load float* %dt_addr, align 4             ; <float> [#uses=1]
+  %22 = fmul float %20, %21                       ; <float> [#uses=1]
+  %23 = fsub float -0.000000e+00, %22             ; <float> [#uses=1]
+  store float %23, float* %temp, align 4
+  %24 = load i32* %mask_size, align 4             ; <i32> [#uses=1]
+  %25 = sub i32 0, %24                            ; <i32> [#uses=1]
+  store i32 %25, i32* %j, align 4
+  br label %bb5
+
+bb4:                                              ; preds = %bb5
+  %26 = load i32* %j, align 4                     ; <i32> [#uses=1]
+  %27 = load i32* %j, align 4                     ; <i32> [#uses=1]
+  %28 = mul i32 %26, %27                          ; <i32> [#uses=1]
+  %29 = sitofp i32 %28 to double                  ; <double> [#uses=1]
+  %30 = fmul double %29, 1.234000e+00             ; <double> [#uses=1]
+  %31 = fptosi double %30 to i32                  ; <i32> [#uses=1]
+  store i32 %31, i32* %x, align 4
+  %32 = load i32* %x, align 4                     ; <i32> [#uses=1]
+  %33 = trunc i32 %32 to i8                       ; <i8> [#uses=1]
+  %34 = load i8** %dpt, align 4                   ; <i8*> [#uses=1]
+  store i8 %33, i8* %34, align 1
+  %35 = load i8** %dpt, align 4                   ; <i8*> [#uses=1]
+  %36 = getelementptr inbounds i8* %35, i64 1     ; <i8*> [#uses=1]
+  store i8* %36, i8** %dpt, align 4
+  %37 = load i32* %j, align 4                     ; <i32> [#uses=1]
+  %38 = add nsw i32 %37, 1                        ; <i32> [#uses=1]
+  store i32 %38, i32* %j, align 4
+  br label %bb5
+
+bb5:                                              ; preds = %bb4, %bb3
+  %39 = load i32* %j, align 4                     ; <i32> [#uses=1]
+  %40 = load i32* %mask_size, align 4             ; <i32> [#uses=1]
+  %41 = icmp sle i32 %39, %40                     ; <i1> [#uses=1]
+  br i1 %41, label %bb4, label %bb6
+
+bb6:                                              ; preds = %bb5
+  br label %return
+
+return:                                           ; preds = %bb6
+  ret void
+}
+
+declare arm_apcscc i32 @foo(...)
+
+declare arm_apcscc noalias i8* @malloc(i32) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll
index 6f59961..da44cde 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/large-stack.ll
@@ -18,7 +18,7 @@ define void @test2() {
 define i32 @test3() {
 ; CHECK: test3:
 ; CHECK: sub.w sp, sp, #805306368
-; CHECK: sub sp, #24
+; CHECK: sub sp, #20
     %retval = alloca i32, align 4
     %tmp = alloca i32, align 4
     %a = alloca [805306369 x i8], align 16
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll
index aef167b..2b08789 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll
@@ -11,7 +11,7 @@ declare <4 x float> @llvm.arm.neon.vld1.v4f32(i8*) nounwind readonly
 
 define arm_apcscc void @aaa(%quuz* %this, i8* %block) {
 ; CHECK: aaa:
-; CHECK: bic sp, sp, #15
+; CHECK: bic r4, r4, #15
 ; CHECK: vst1.64 {{.*}}sp, :128
 ; CHECK: vld1.64 {{.*}}sp, :128
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2007-01-08-InstrSched.ll b/libclamav/c++/llvm/test/CodeGen/X86/2007-01-08-InstrSched.ll
index 81f0a1d..317ed0a 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2007-01-08-InstrSched.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2007-01-08-InstrSched.ll
@@ -1,5 +1,5 @@
 ; PR1075
-; RUN: llc < %s -mtriple=x86_64-apple-darwin | FileCheck %s
+; RUN: llc < %s -mtriple=x86_64-apple-darwin -O3 | FileCheck %s
 
 define float @foo(float %x) nounwind {
     %tmp1 = fmul float %x, 3.000000e+00
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2008-08-05-SpillerBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2008-08-05-SpillerBug.ll
index 1d166f4..67e14ff 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2008-08-05-SpillerBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2008-08-05-SpillerBug.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -mtriple=i386-apple-darwin -disable-fp-elim -stats |& grep asm-printer | grep 59
+; RUN: llc < %s -mtriple=i386-apple-darwin -disable-fp-elim -stats |& grep asm-printer | grep 58
 ; PR2568
 
 @g_3 = external global i16		; <i16*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-SpillComments.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-SpillComments.ll
index 8c62f4d..1dd9990 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-SpillComments.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-SpillComments.ll
@@ -1,6 +1,4 @@
-; RUN: llc < %s -mtriple=x86_64-unknown-linux | grep "Spill"
-; RUN: llc < %s -mtriple=x86_64-unknown-linux | grep "Folded Spill"
-; RUN: llc < %s -mtriple=x86_64-unknown-linux | grep "Reload"
+; RUN: llc < %s -mtriple=x86_64-unknown-linux | FileCheck %s
 
 	%struct..0anon = type { i32 }
 	%struct.rtvec_def = type { i32, [1 x %struct..0anon] }
@@ -12,6 +10,9 @@ declare %struct.rtx_def* @fixup_memory_subreg(%struct.rtx_def*, %struct.rtx_def*
 
 define %struct.rtx_def* @walk_fixup_memory_subreg(%struct.rtx_def* %x, %struct.rtx_def* %insn) {
 entry:
+; CHECK: Spill
+; CHECK: Folded Spill
+; CHECK: Reload
 	%tmp2 = icmp eq %struct.rtx_def* %x, null		; <i1> [#uses=1]
 	br i1 %tmp2, label %UnifiedReturnBlock, label %cond_next
 
@@ -32,7 +33,7 @@ cond_true13:		; preds = %cond_next
 	br i1 %tmp22, label %cond_true25, label %cond_next32
 
 cond_true25:		; preds = %cond_true13
-	%tmp29 = tail call %struct.rtx_def* @fixup_memory_subreg( %struct.rtx_def* %x, %struct.rtx_def* %insn, i32 1 )		; <%struct.rtx_def*> [#uses=1]
+	%tmp29 = tail call %struct.rtx_def* @fixup_memory_subreg( %struct.rtx_def* %x, %struct.rtx_def* %insn, i32 1 ) nounwind		; <%struct.rtx_def*> [#uses=1]
 	ret %struct.rtx_def* %tmp29
 
 cond_next32:		; preds = %cond_true13, %cond_next
@@ -58,7 +59,7 @@ cond_true47:		; preds = %bb
 	%tmp52 = getelementptr %struct.rtx_def* %x, i32 0, i32 3, i32 %i.01.0		; <%struct..0anon*> [#uses=1]
 	%tmp5354 = bitcast %struct..0anon* %tmp52 to %struct.rtx_def**		; <%struct.rtx_def**> [#uses=1]
 	%tmp55 = load %struct.rtx_def** %tmp5354		; <%struct.rtx_def*> [#uses=1]
-	%tmp58 = tail call  %struct.rtx_def* @walk_fixup_memory_subreg( %struct.rtx_def* %tmp55, %struct.rtx_def* %insn )		; <%struct.rtx_def*> [#uses=1]
+	%tmp58 = tail call  %struct.rtx_def* @walk_fixup_memory_subreg( %struct.rtx_def* %tmp55, %struct.rtx_def* %insn ) nounwind		; <%struct.rtx_def*> [#uses=1]
 	%tmp62 = getelementptr %struct.rtx_def* %x, i32 0, i32 3, i32 %i.01.0, i32 0		; <i32*> [#uses=1]
 	%tmp58.c = ptrtoint %struct.rtx_def* %tmp58 to i32		; <i32> [#uses=1]
 	store i32 %tmp58.c, i32* %tmp62
@@ -81,7 +82,7 @@ bb73:		; preds = %bb73, %bb105.preheader
 	%tmp92 = getelementptr %struct.rtvec_def* %tmp81, i32 0, i32 1, i32 %j.019		; <%struct..0anon*> [#uses=1]
 	%tmp9394 = bitcast %struct..0anon* %tmp92 to %struct.rtx_def**		; <%struct.rtx_def**> [#uses=1]
 	%tmp95 = load %struct.rtx_def** %tmp9394		; <%struct.rtx_def*> [#uses=1]
-	%tmp98 = tail call  %struct.rtx_def* @walk_fixup_memory_subreg( %struct.rtx_def* %tmp95, %struct.rtx_def* %insn )		; <%struct.rtx_def*> [#uses=1]
+	%tmp98 = tail call  %struct.rtx_def* @walk_fixup_memory_subreg( %struct.rtx_def* %tmp95, %struct.rtx_def* %insn ) nounwind		; <%struct.rtx_def*> [#uses=1]
 	%tmp101 = getelementptr %struct.rtvec_def* %tmp81, i32 0, i32 1, i32 %j.019, i32 0		; <i32*> [#uses=1]
 	%tmp98.c = ptrtoint %struct.rtx_def* %tmp98 to i32		; <i32> [#uses=1]
 	store i32 %tmp98.c, i32* %tmp101
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-19-SchedCustomLoweringBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-19-SchedCustomLoweringBug.ll
index f3cf1d5..8cb538b 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-19-SchedCustomLoweringBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-19-SchedCustomLoweringBug.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -mtriple=i386-apple-darwin10 | FileCheck %s
+; RUN: llc < %s -mtriple=i386-apple-darwin10 -post-RA-scheduler=true | FileCheck %s
 
 ; PR4958
 
@@ -10,6 +10,7 @@ entry:
 
 bb:                                               ; preds = %bb1, %entry
 ; CHECK:      addl $1
+; CHECK-NEXT: movl %e
 ; CHECK-NEXT: adcl $0
   %i.0 = phi i64 [ 0, %entry ], [ %0, %bb1 ]      ; <i64> [#uses=1]
   %0 = add nsw i64 %i.0, 1                        ; <i64> [#uses=2]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-12-01-EarlyClobberBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-12-01-EarlyClobberBug.ll
new file mode 100644
index 0000000..1e7a418
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-12-01-EarlyClobberBug.ll
@@ -0,0 +1,38 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin | FileCheck %s
+; pr5391
+
+define void @t() nounwind ssp {
+entry:
+; CHECK: t:
+; CHECK: movl %ecx, %eax
+; CHECK: %eax = foo (%eax, %ecx)
+  %b = alloca i32                                 ; <i32*> [#uses=2]
+  %a = alloca i32                                 ; <i32*> [#uses=1]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  %0 = load i32* %b, align 4                      ; <i32> [#uses=1]
+  %1 = load i32* %b, align 4                      ; <i32> [#uses=1]
+  %asmtmp = call i32 asm "$0 = foo ($1, $2)", "=&{ax},%0,r,~{dirflag},~{fpsr},~{flags}"(i32 %0, i32 %1) nounwind ; <i32> [#uses=1]
+  store i32 %asmtmp, i32* %a
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+}
+
+define void @t2() nounwind ssp {
+entry:
+; CHECK: t2:
+; CHECK: movl %eax, %ecx
+; CHECK: %ecx = foo (%ecx, %eax)
+  %b = alloca i32                                 ; <i32*> [#uses=2]
+  %a = alloca i32                                 ; <i32*> [#uses=1]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  %0 = load i32* %b, align 4                      ; <i32> [#uses=1]
+  %1 = load i32* %b, align 4                      ; <i32> [#uses=1]
+  %asmtmp = call i32 asm "$0 = foo ($1, $2)", "=&r,%0,r,~{dirflag},~{fpsr},~{flags}"(i32 %0, i32 %1) nounwind ; <i32> [#uses=1]
+  store i32 %asmtmp, i32* %a
+  br label %return
+
+return:                                           ; preds = %entry
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-12-11-TLSNoRedZone.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-12-11-TLSNoRedZone.ll
new file mode 100644
index 0000000..f7ba661
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-12-11-TLSNoRedZone.ll
@@ -0,0 +1,63 @@
+; RUN: llc -relocation-model=pic < %s | FileCheck %s
+; PR5723
+target datalayout = "e-p:64:64"
+target triple = "x86_64-unknown-linux-gnu"
+
+%0 = type { [1 x i64] }
+%link = type { %0* }
+%test = type { i32, %link }
+
+ at data = global [2 x i64] zeroinitializer, align 64 ; <[2 x i64]*> [#uses=1]
+ at ptr = linkonce thread_local global [1 x i64] [i64 ptrtoint ([2 x i64]* @data to i64)], align 64 ; <[1 x i64]*> [#uses=1]
+ at link_ptr = linkonce thread_local global [1 x i64] zeroinitializer, align 64 ; <[1 x i64]*> [#uses=1]
+ at _dm_my_pe = external global [1 x i64], align 64  ; <[1 x i64]*> [#uses=0]
+ at _dm_pes_in_prog = external global [1 x i64], align 64 ; <[1 x i64]*> [#uses=0]
+ at _dm_npes_div_mult = external global [1 x i64], align 64 ; <[1 x i64]*> [#uses=0]
+ at _dm_npes_div_shift = external global [1 x i64], align 64 ; <[1 x i64]*> [#uses=0]
+ at _dm_pe_addr_loc = external global [1 x i64], align 64 ; <[1 x i64]*> [#uses=0]
+ at _dm_offset_addr_mask = external global [1 x i64], align 64 ; <[1 x i64]*> [#uses=0]
+
+define void @leaf() nounwind {
+; CHECK: leaf:
+; CHECK-NOT: -8(%rsp)
+; CHECK: leaq link_ptr at TLSGD
+; CHECK: call __tls_get_addr at PLT
+"file foo2.c, line 14, bb1":
+  %p = alloca %test*, align 8                     ; <%test**> [#uses=4]
+  br label %"file foo2.c, line 14, bb2"
+
+"file foo2.c, line 14, bb2":                      ; preds = %"file foo2.c, line 14, bb1"
+  br label %"@CFE_debug_label_0"
+
+"@CFE_debug_label_0":                             ; preds = %"file foo2.c, line 14, bb2"
+  %r = load %test** bitcast ([1 x i64]* @ptr to %test**), align 8 ; <%test*> [#uses=1]
+  store %test* %r, %test** %p, align 8
+  br label %"@CFE_debug_label_2"
+
+"@CFE_debug_label_2":                             ; preds = %"@CFE_debug_label_0"
+  %r1 = load %link** bitcast ([1 x i64]* @link_ptr to %link**), align 8 ; <%link*> [#uses=1]
+  %r2 = load %test** %p, align 8                  ; <%test*> [#uses=1]
+  %r3 = ptrtoint %test* %r2 to i64                ; <i64> [#uses=1]
+  %r4 = inttoptr i64 %r3 to %link**               ; <%link**> [#uses=1]
+  %r5 = getelementptr %link** %r4, i64 1          ; <%link**> [#uses=1]
+  store %link* %r1, %link** %r5, align 8
+  br label %"@CFE_debug_label_3"
+
+"@CFE_debug_label_3":                             ; preds = %"@CFE_debug_label_2"
+  %r6 = load %test** %p, align 8                  ; <%test*> [#uses=1]
+  %r7 = ptrtoint %test* %r6 to i64                ; <i64> [#uses=1]
+  %r8 = inttoptr i64 %r7 to %link*                ; <%link*> [#uses=1]
+  %r9 = getelementptr %link* %r8, i64 1           ; <%link*> [#uses=1]
+  store %link* %r9, %link** bitcast ([1 x i64]* @link_ptr to %link**), align 8
+  br label %"@CFE_debug_label_4"
+
+"@CFE_debug_label_4":                             ; preds = %"@CFE_debug_label_3"
+  %r10 = load %test** %p, align 8                 ; <%test*> [#uses=1]
+  %r11 = ptrtoint %test* %r10 to i64              ; <i64> [#uses=1]
+  %r12 = inttoptr i64 %r11 to i32*                ; <i32*> [#uses=1]
+  store i32 1, i32* %r12, align 4
+  br label %"@CFE_debug_label_5"
+
+"@CFE_debug_label_5":                             ; preds = %"@CFE_debug_label_4"
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-12-12-CoalescerBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-12-12-CoalescerBug.ll
new file mode 100644
index 0000000..4e8f5fd
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-12-12-CoalescerBug.ll
@@ -0,0 +1,40 @@
+; RUN: llc < %s -mtriple=i386-apple-darwin | FileCheck %s
+
+define i32 @do_loop(i32* nocapture %sdp, i32* nocapture %ddp, i8* %mdp, i8* nocapture %cdp, i32 %w) nounwind readonly optsize ssp {
+entry:
+  br label %bb
+
+bb:                                               ; preds = %bb5, %entry
+  %mask.1.in = load i8* undef, align 1            ; <i8> [#uses=3]
+  %0 = icmp eq i8 %mask.1.in, 0                   ; <i1> [#uses=1]
+  br i1 %0, label %bb5, label %bb1
+
+bb1:                                              ; preds = %bb
+  br i1 undef, label %bb2, label %bb3
+
+bb2:                                              ; preds = %bb1
+; CHECK: %bb2
+; CHECK: movb %ch, %al
+  %1 = zext i8 %mask.1.in to i32                  ; <i32> [#uses=1]
+  %2 = zext i8 undef to i32                       ; <i32> [#uses=1]
+  %3 = mul i32 %2, %1                             ; <i32> [#uses=1]
+  %4 = add i32 %3, 1                              ; <i32> [#uses=1]
+  %5 = add i32 %4, 0                              ; <i32> [#uses=1]
+  %6 = lshr i32 %5, 8                             ; <i32> [#uses=1]
+  %retval12.i = trunc i32 %6 to i8                ; <i8> [#uses=1]
+  br label %bb3
+
+bb3:                                              ; preds = %bb2, %bb1
+  %mask.0.in = phi i8 [ %retval12.i, %bb2 ], [ %mask.1.in, %bb1 ] ; <i8> [#uses=1]
+  %7 = icmp eq i8 %mask.0.in, 0                   ; <i1> [#uses=1]
+  br i1 %7, label %bb5, label %bb4
+
+bb4:                                              ; preds = %bb3
+  br label %bb5
+
+bb5:                                              ; preds = %bb4, %bb3, %bb
+  br i1 undef, label %bb6, label %bb
+
+bb6:                                              ; preds = %bb5
+  ret i32 undef
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/3addr-16bit.ll b/libclamav/c++/llvm/test/CodeGen/X86/3addr-16bit.ll
new file mode 100644
index 0000000..bf1e0ea
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/3addr-16bit.ll
@@ -0,0 +1,93 @@
+; RUN: llc < %s -mtriple=i386-apple-darwin -asm-verbose=false   | FileCheck %s -check-prefix=32BIT
+; RUN: llc < %s -mtriple=x86_64-apple-darwin -asm-verbose=false | FileCheck %s -check-prefix=64BIT
+
+define zeroext i16 @t1(i16 zeroext %c, i16 zeroext %k) nounwind ssp {
+entry:
+; 32BIT:     t1:
+; 32BIT:     movw 20(%esp), %ax
+; 32BIT-NOT: movw %ax, %cx
+; 32BIT:     leal 1(%eax), %ecx
+
+; 64BIT:     t1:
+; 64BIT-NOT: movw %si, %ax
+; 64BIT:     leal 1(%rsi), %eax
+  %0 = icmp eq i16 %k, %c                         ; <i1> [#uses=1]
+  %1 = add i16 %k, 1                              ; <i16> [#uses=3]
+  br i1 %0, label %bb, label %bb1
+
+bb:                                               ; preds = %entry
+  tail call void @foo(i16 zeroext %1) nounwind
+  ret i16 %1
+
+bb1:                                              ; preds = %entry
+  ret i16 %1
+}
+
+define zeroext i16 @t2(i16 zeroext %c, i16 zeroext %k) nounwind ssp {
+entry:
+; 32BIT:     t2:
+; 32BIT:     movw 20(%esp), %ax
+; 32BIT-NOT: movw %ax, %cx
+; 32BIT:     leal -1(%eax), %ecx
+
+; 64BIT:     t2:
+; 64BIT-NOT: movw %si, %ax
+; 64BIT:     leal -1(%rsi), %eax
+  %0 = icmp eq i16 %k, %c                         ; <i1> [#uses=1]
+  %1 = add i16 %k, -1                             ; <i16> [#uses=3]
+  br i1 %0, label %bb, label %bb1
+
+bb:                                               ; preds = %entry
+  tail call void @foo(i16 zeroext %1) nounwind
+  ret i16 %1
+
+bb1:                                              ; preds = %entry
+  ret i16 %1
+}
+
+declare void @foo(i16 zeroext)
+
+define zeroext i16 @t3(i16 zeroext %c, i16 zeroext %k) nounwind ssp {
+entry:
+; 32BIT:     t3:
+; 32BIT:     movw 20(%esp), %ax
+; 32BIT-NOT: movw %ax, %cx
+; 32BIT:     leal 2(%eax), %ecx
+
+; 64BIT:     t3:
+; 64BIT-NOT: movw %si, %ax
+; 64BIT:     leal 2(%rsi), %eax
+  %0 = add i16 %k, 2                              ; <i16> [#uses=3]
+  %1 = icmp eq i16 %k, %c                         ; <i1> [#uses=1]
+  br i1 %1, label %bb, label %bb1
+
+bb:                                               ; preds = %entry
+  tail call void @foo(i16 zeroext %0) nounwind
+  ret i16 %0
+
+bb1:                                              ; preds = %entry
+  ret i16 %0
+}
+
+define zeroext i16 @t4(i16 zeroext %c, i16 zeroext %k) nounwind ssp {
+entry:
+; 32BIT:     t4:
+; 32BIT:     movw 16(%esp), %ax
+; 32BIT:     movw 20(%esp), %cx
+; 32BIT-NOT: movw %cx, %dx
+; 32BIT:     leal (%ecx,%eax), %edx
+
+; 64BIT:     t4:
+; 64BIT-NOT: movw %si, %ax
+; 64BIT:     leal (%rsi,%rdi), %eax
+  %0 = add i16 %k, %c                             ; <i16> [#uses=3]
+  %1 = icmp eq i16 %k, %c                         ; <i1> [#uses=1]
+  br i1 %1, label %bb, label %bb1
+
+bb:                                               ; preds = %entry
+  tail call void @foo(i16 zeroext %0) nounwind
+  ret i16 %0
+
+bb1:                                              ; preds = %entry
+  ret i16 %0
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/scalar_widen_div.ll b/libclamav/c++/llvm/test/CodeGen/X86/scalar_widen_div.ll
new file mode 100644
index 0000000..fc67e44
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/scalar_widen_div.ll
@@ -0,0 +1,154 @@
+; RUN: llc < %s -disable-mmx -march=x86-64 -mattr=+sse42 |  FileCheck %s
+
+; Verify when widening a divide/remainder operation, we only generate a
+; divide/rem per element since divide/remainder can trap.
+
+define void @vectorDiv (<2 x i32> addrspace(1)* %nsource, <2 x i32> addrspace(1)* %dsource, <2 x i32> addrspace(1)* %qdest) nounwind {
+; CHECK: idivl
+; CHECK: idivl
+; CHECK-NOT: idivl
+; CHECK: ret
+entry:
+  %nsource.addr = alloca <2 x i32> addrspace(1)*, align 4
+  %dsource.addr = alloca <2 x i32> addrspace(1)*, align 4
+  %qdest.addr = alloca <2 x i32> addrspace(1)*, align 4
+  %index = alloca i32, align 4
+  store <2 x i32> addrspace(1)* %nsource, <2 x i32> addrspace(1)** %nsource.addr
+  store <2 x i32> addrspace(1)* %dsource, <2 x i32> addrspace(1)** %dsource.addr
+  store <2 x i32> addrspace(1)* %qdest, <2 x i32> addrspace(1)** %qdest.addr
+  %tmp = load <2 x i32> addrspace(1)** %qdest.addr
+  %tmp1 = load i32* %index
+  %arrayidx = getelementptr <2 x i32> addrspace(1)* %tmp, i32 %tmp1
+  %tmp2 = load <2 x i32> addrspace(1)** %nsource.addr
+  %tmp3 = load i32* %index
+  %arrayidx4 = getelementptr <2 x i32> addrspace(1)* %tmp2, i32 %tmp3
+  %tmp5 = load <2 x i32> addrspace(1)* %arrayidx4
+  %tmp6 = load <2 x i32> addrspace(1)** %dsource.addr
+  %tmp7 = load i32* %index
+  %arrayidx8 = getelementptr <2 x i32> addrspace(1)* %tmp6, i32 %tmp7
+  %tmp9 = load <2 x i32> addrspace(1)* %arrayidx8
+  %tmp10 = sdiv <2 x i32> %tmp5, %tmp9
+  store <2 x i32> %tmp10, <2 x i32> addrspace(1)* %arrayidx
+  ret void
+}
+
+define <3 x i8> @test_char_div(<3 x i8> %num, <3 x i8> %div) {
+; CHECK: idivb
+; CHECK: idivb
+; CHECK: idivb
+; CHECK-NOT: idivb
+; CHECK: ret
+  %div.r = sdiv <3 x i8> %num, %div
+  ret <3 x i8>  %div.r
+}
+
+define <3 x i8> @test_uchar_div(<3 x i8> %num, <3 x i8> %div) {
+; CHECK: divb
+; CHECK: divb
+; CHECK: divb
+; CHECK-NOT: divb
+; CHECK: ret
+  %div.r = udiv <3 x i8> %num, %div
+  ret <3 x i8>  %div.r
+}
+
+define <5 x i16> @test_short_div(<5 x i16> %num, <5 x i16> %div) {
+; CHECK: idivw
+; CHECK: idivw
+; CHECK: idivw
+; CHECK: idivw
+; CHECK: idivw
+; CHECK-NOT: idivw
+; CHECK: ret
+  %div.r = sdiv <5 x i16> %num, %div
+  ret <5 x i16>  %div.r
+}
+
+define <4 x i16> @test_ushort_div(<4 x i16> %num, <4 x i16> %div) {
+; CHECK: divw
+; CHECK: divw
+; CHECK: divw
+; CHECK: divw
+; CHECK-NOT: divw
+; CHECK: ret
+  %div.r = udiv <4 x i16> %num, %div
+  ret <4 x i16>  %div.r
+}
+
+define <3 x i32> @test_uint_div(<3 x i32> %num, <3 x i32> %div) {
+; CHECK: divl
+; CHECK: divl
+; CHECK: divl
+; CHECK-NOT: divl
+; CHECK: ret
+  %div.r = udiv <3 x i32> %num, %div
+  ret <3 x i32>  %div.r
+}
+
+define <3 x i64> @test_long_div(<3 x i64> %num, <3 x i64> %div) {
+; CHECK: idivq
+; CHECK: idivq
+; CHECK: idivq
+; CHECK-NOT: idivq
+; CHECK: ret
+  %div.r = sdiv <3 x i64> %num, %div
+  ret <3 x i64>  %div.r
+}
+
+define <3 x i64> @test_ulong_div(<3 x i64> %num, <3 x i64> %div) {
+; CHECK: divq
+; CHECK: divq
+; CHECK: divq
+; CHECK-NOT: divq
+; CHECK: ret
+  %div.r = udiv <3 x i64> %num, %div
+  ret <3 x i64>  %div.r
+}
+
+
+define <4 x i8> @test_char_rem(<4 x i8> %num, <4 x i8> %rem) {
+; CHECK: idivb
+; CHECK: idivb
+; CHECK: idivb
+; CHECK: idivb
+; CHECK-NOT: idivb
+; CHECK: ret
+  %rem.r = srem <4 x i8> %num, %rem
+  ret <4 x i8>  %rem.r
+}
+
+define <5 x i16> @test_short_rem(<5 x i16> %num, <5 x i16> %rem) {
+; CHECK: idivw
+; CHECK: idivw
+; CHECK: idivw
+; CHECK: idivw
+; CHECK: idivw
+; CHECK-NOT: idivw
+; CHECK: ret
+  %rem.r = srem <5 x i16> %num, %rem
+  ret <5 x i16>  %rem.r
+}
+
+define <4 x i32> @test_uint_rem(<4 x i32> %num, <4 x i32> %rem) {
+; CHECK: idivl
+; CHECK: idivl
+; CHECK: idivl
+; CHECK: idivl
+; CHECK-NOT: idivl
+; CHECK: ret
+  %rem.r = srem <4 x i32> %num, %rem
+  ret <4 x i32>  %rem.r
+}
+
+
+define <5 x i64> @test_ulong_rem(<5 x i64> %num, <5 x i64> %rem) {
+; CHECK: divq
+; CHECK: divq
+; CHECK: divq
+; CHECK: divq
+; CHECK: divq
+; CHECK-NOT: divq
+; CHECK: ret
+  %rem.r = urem <5 x i64> %num, %rem
+  ret <5 x i64>  %rem.r
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/select-aggregate.ll b/libclamav/c++/llvm/test/CodeGen/X86/select-aggregate.ll
new file mode 100644
index 0000000..822e594
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/select-aggregate.ll
@@ -0,0 +1,15 @@
+; RUN: llc < %s -march=x86-64 | FileCheck %s
+; PR5757
+
+; CHECK: cmovne %rdi, %rsi
+; CHECK: movl (%rsi), %eax
+
+%0 = type { i64, i32 }
+
+define i32 @foo(%0* %p, %0* %q, i1 %r) nounwind {
+  %t0 = load %0* %p
+  %t1 = load %0* %q
+  %t4 = select i1 %r, %0 %t0, %0 %t1
+  %t5 = extractvalue %0 %t4, 1
+  ret i32 %t5
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/sink-hoist.ll b/libclamav/c++/llvm/test/CodeGen/X86/sink-hoist.ll
index f8d542e..01d7373 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/sink-hoist.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/sink-hoist.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86-64 -asm-verbose=false -mtriple=x86_64-unknown-linux-gnu | FileCheck %s
+; RUN: llc < %s -march=x86-64 -asm-verbose=false -mtriple=x86_64-unknown-linux-gnu -post-RA-scheduler=true | FileCheck %s
 
 ; Currently, floating-point selects are lowered to CFG triangles.
 ; This means that one side of the select is always unconditionally
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/splat-scalar-load.ll b/libclamav/c++/llvm/test/CodeGen/X86/splat-scalar-load.ll
new file mode 100644
index 0000000..32d3ab6
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/splat-scalar-load.ll
@@ -0,0 +1,43 @@
+; RUN: llc < %s -mtriple=i386-apple-darwin -mattr=+sse2 | FileCheck %s
+; rdar://7434544
+
+define <2 x i64> @t1() nounwind ssp {
+entry:
+; CHECK: t1:
+; CHECK: pshufd	$0, (%esp), %xmm0
+  %array = alloca [8 x float], align 16
+  %arrayidx = getelementptr inbounds [8 x float]* %array, i32 0, i32 0
+  %tmp2 = load float* %arrayidx
+  %vecinit = insertelement <4 x float> undef, float %tmp2, i32 0
+  %vecinit5 = insertelement <4 x float> %vecinit, float %tmp2, i32 1
+  %vecinit7 = insertelement <4 x float> %vecinit5, float %tmp2, i32 2
+  %vecinit9 = insertelement <4 x float> %vecinit7, float %tmp2, i32 3
+  %0 = bitcast <4 x float> %vecinit9 to <2 x i64>
+  ret <2 x i64> %0
+}
+
+define <2 x i64> @t2() nounwind ssp {
+entry:
+; CHECK: t2:
+; CHECK: pshufd	$85, (%esp), %xmm0
+  %array = alloca [8 x float], align 4
+  %arrayidx = getelementptr inbounds [8 x float]* %array, i32 0, i32 1
+  %tmp2 = load float* %arrayidx
+  %vecinit = insertelement <4 x float> undef, float %tmp2, i32 0
+  %vecinit5 = insertelement <4 x float> %vecinit, float %tmp2, i32 1
+  %vecinit7 = insertelement <4 x float> %vecinit5, float %tmp2, i32 2
+  %vecinit9 = insertelement <4 x float> %vecinit7, float %tmp2, i32 3
+  %0 = bitcast <4 x float> %vecinit9 to <2 x i64>
+  ret <2 x i64> %0
+}
+
+define <4 x float> @t3(float %tmp1, float %tmp2, float %tmp3) nounwind readnone ssp {
+entry:
+; CHECK: t3:
+; CHECK: pshufd	$-86, (%esp), %xmm0
+  %0 = insertelement <4 x float> undef, float %tmp3, i32 0
+  %1 = insertelement <4 x float> %0, float %tmp3, i32 1
+  %2 = insertelement <4 x float> %1, float %tmp3, i32 2
+  %3 = insertelement <4 x float> %2, float %tmp3, i32 3
+  ret <4 x float> %3
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/sse2.ll b/libclamav/c++/llvm/test/CodeGen/X86/sse2.ll
index 58fe28b..f2b8010 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/sse2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/sse2.ll
@@ -1,5 +1,5 @@
 ; Tests for SSE2 and below, without SSE3+.
-; RUN: llc < %s -mtriple=i386-apple-darwin10 -mcpu=pentium4 | FileCheck %s
+; RUN: llc < %s -mtriple=i386-apple-darwin10 -mcpu=pentium4 -O3 | FileCheck %s
 
 define void @t1(<2 x double>* %r, <2 x double>* %A, double %B) nounwind  {
 	%tmp3 = load <2 x double>* %A, align 16
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll b/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll
index 21c1a3c..5550d26 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll
@@ -1,6 +1,6 @@
 ; These are tests for SSE3 codegen.  Yonah has SSE3 and earlier but not SSSE3+.
 
-; RUN: llc < %s -march=x86-64 -mcpu=yonah -mtriple=i686-apple-darwin9\
+; RUN: llc < %s -march=x86-64 -mcpu=yonah -mtriple=i686-apple-darwin9 -O3 \
 ; RUN:              | FileCheck %s --check-prefix=X64
 
 ; Test for v8xi16 lowering where we extract the first element of the vector and
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll b/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll
index 0d86e56..c70c9fa 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tail-opts.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86-64 -mtriple=x86_64-unknown-linux-gnu -asm-verbose=false | FileCheck %s
+; RUN: llc < %s -march=x86-64 -mtriple=x86_64-unknown-linux-gnu -asm-verbose=false -post-RA-scheduler=true | FileCheck %s
 
 declare void @bar(i32)
 declare void @car(i32)
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tailcallstack64.ll b/libclamav/c++/llvm/test/CodeGen/X86/tailcallstack64.ll
index 69018aa..d05dff8 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/tailcallstack64.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tailcallstack64.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -tailcallopt -march=x86-64 | FileCheck %s
+; RUN: llc < %s -tailcallopt -march=x86-64 -post-RA-scheduler=true | FileCheck %s
 
 ; Check that lowered arguments on the stack do not overwrite each other.
 ; Add %in1 %p1 to a different temporary register (%eax).
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/unaligned-load.ll b/libclamav/c++/llvm/test/CodeGen/X86/unaligned-load.ll
index 7dddcda..7778983 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/unaligned-load.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/unaligned-load.ll
@@ -1,4 +1,3 @@
-; RUN: llc < %s -mtriple=x86_64-apple-darwin10.0 -relocation-model=dynamic-no-pic | not grep {movaps\t_.str3}
 ; RUN: llc < %s -mtriple=x86_64-apple-darwin10.0 -relocation-model=dynamic-no-pic | FileCheck %s
 
 @.str1 = internal constant [31 x i8] c"DHRYSTONE PROGRAM, SOME STRING\00", align 8
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/vec_compare-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/vec_compare-2.ll
new file mode 100644
index 0000000..091641b
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/vec_compare-2.ll
@@ -0,0 +1,29 @@
+; RUN: llc < %s -march=x86 -mcpu=penryn -disable-mmx | FileCheck %s
+
+declare <4 x float> @llvm.x86.sse41.blendvps(<4 x float>, <4 x float>, <4 x float>) nounwind readnone
+
+declare <8 x i16> @llvm.x86.sse41.packusdw(<4 x i32>, <4 x i32>) nounwind readnone
+
+declare <4 x i32> @llvm.x86.sse41.pmaxsd(<4 x i32>, <4 x i32>) nounwind readnone
+
+define void @blackDespeckle_wrapper(i8** %args_list, i64* %gtid, i64 %xend) {
+entry:
+; CHECK-NOT: set
+; CHECK: pcmpgt
+; CHECK: blendvps
+  %shr.i = ashr <4 x i32> zeroinitializer, <i32 3, i32 3, i32 3, i32 3> ; <<4 x i32>> [#uses=1]
+  %cmp318.i = sext <4 x i1> zeroinitializer to <4 x i32> ; <<4 x i32>> [#uses=1]
+  %sub322.i = sub <4 x i32> %shr.i, zeroinitializer ; <<4 x i32>> [#uses=1]
+  %cmp323.x = icmp slt <4 x i32> zeroinitializer, %sub322.i ; <<4 x i1>> [#uses=1]
+  %cmp323.i = sext <4 x i1> %cmp323.x to <4 x i32> ; <<4 x i32>> [#uses=1]
+  %or.i = or <4 x i32> %cmp318.i, %cmp323.i       ; <<4 x i32>> [#uses=1]
+  %tmp10.i83.i = bitcast <4 x i32> %or.i to <4 x float> ; <<4 x float>> [#uses=1]
+  %0 = call <4 x float> @llvm.x86.sse41.blendvps(<4 x float> undef, <4 x float> undef, <4 x float> %tmp10.i83.i) nounwind ; <<4 x float>> [#uses=1]
+  %conv.i.i15.i = bitcast <4 x float> %0 to <4 x i32> ; <<4 x i32>> [#uses=1]
+  %swz.i.i28.i = shufflevector <4 x i32> %conv.i.i15.i, <4 x i32> undef, <2 x i32> <i32 0, i32 1> ; <<2 x i32>> [#uses=1]
+  %tmp6.i29.i = bitcast <2 x i32> %swz.i.i28.i to <4 x i16> ; <<4 x i16>> [#uses=1]
+  %swz.i30.i = shufflevector <4 x i16> %tmp6.i29.i, <4 x i16> undef, <2 x i32> <i32 0, i32 1> ; <<2 x i16>> [#uses=1]
+  store <2 x i16> %swz.i30.i, <2 x i16>* undef
+  unreachable
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/vec_ext_inreg.ll b/libclamav/c++/llvm/test/CodeGen/X86/vec_ext_inreg.ll
new file mode 100644
index 0000000..02b16a7
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/vec_ext_inreg.ll
@@ -0,0 +1,37 @@
+; RUN: llc < %s -march=x86-64 
+
+define <8 x i32> @a(<8 x i32> %a) nounwind {
+  %b = trunc <8 x i32> %a to <8 x i16>
+  %c = sext <8 x i16> %b to <8 x i32>
+  ret <8 x i32> %c
+}
+
+define <3 x i32> @b(<3 x i32> %a) nounwind {
+  %b = trunc <3 x i32> %a to <3 x i16>
+  %c = sext <3 x i16> %b to <3 x i32>
+  ret <3 x i32> %c
+}
+
+define <1 x i32> @c(<1 x i32> %a) nounwind {
+  %b = trunc <1 x i32> %a to <1 x i16>
+  %c = sext <1 x i16> %b to <1 x i32>
+  ret <1 x i32> %c
+}
+
+define <8 x i32> @d(<8 x i32> %a) nounwind {
+  %b = trunc <8 x i32> %a to <8 x i16>
+  %c = zext <8 x i16> %b to <8 x i32>
+  ret <8 x i32> %c
+}
+
+define <3 x i32> @e(<3 x i32> %a) nounwind {
+  %b = trunc <3 x i32> %a to <3 x i16>
+  %c = zext <3 x i16> %b to <3 x i32>
+  ret <3 x i32> %c
+}
+
+define <1 x i32> @f(<1 x i32> %a) nounwind {
+  %b = trunc <1 x i32> %a to <1 x i16>
+  %c = zext <1 x i16> %b to <1 x i32>
+  ret <1 x i32> %c
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-3.ll
index a2b8b82..1f2c250 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/widen_arith-3.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx | FileCheck %s
+; RUN: llc < %s -march=x86 -mattr=+sse42 -disable-mmx -post-RA-scheduler=true | FileCheck %s
 ; CHECK: paddw
 ; CHECK: pextrw
 ; CHECK: movd
diff --git a/libclamav/c++/llvm/test/Unit/lit.cfg b/libclamav/c++/llvm/test/Unit/lit.cfg
index 8321593..34372bb 100644
--- a/libclamav/c++/llvm/test/Unit/lit.cfg
+++ b/libclamav/c++/llvm/test/Unit/lit.cfg
@@ -32,6 +32,12 @@ if config.test_exec_root is None:
     # configuration hasn't been created by the build system, or we are in an
     # out-of-tree build situation).
 
+    # Check for 'llvm_unit_site_config' user parameter, and use that if available.
+    site_cfg = lit.params.get('llvm_unit_site_config', None)
+    if site_cfg and os.path.exists(site_cfg):
+        lit.load_config(config, site_cfg)
+        raise SystemExit
+
     # Try to detect the situation where we are using an out-of-tree build by
     # looking for 'llvm-config'.
     #
diff --git a/libclamav/c++/llvm/test/lit.cfg b/libclamav/c++/llvm/test/lit.cfg
index 1939792..246f270 100644
--- a/libclamav/c++/llvm/test/lit.cfg
+++ b/libclamav/c++/llvm/test/lit.cfg
@@ -58,6 +58,12 @@ if config.test_exec_root is None:
     # configuration hasn't been created by the build system, or we are in an
     # out-of-tree build situation).
 
+    # Check for 'llvm_site_config' user parameter, and use that if available.
+    site_cfg = lit.params.get('llvm_site_config', None)
+    if site_cfg and os.path.exists(site_cfg):
+        lit.load_config(config, site_cfg)
+        raise SystemExit
+
     # Try to detect the situation where we are using an out-of-tree build by
     # looking for 'llvm-config'.
     #
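The two `lit.cfg` hunks above (in `test/Unit/lit.cfg` and `test/lit.cfg`) add the same early-exit pattern: when the user supplies a site config via a `--param` and the file exists, load it and stop processing the generic config. A minimal standalone sketch of that control flow (the function and loader names here are hypothetical, standing in for `lit.load_config`):

```python
import os

def pick_site_config(params, load_site, load_default):
    """Mimic the lit.cfg hunk: prefer a user-supplied site config
    when it exists on disk, otherwise fall back to auto-detection."""
    site_cfg = params.get('llvm_site_config', None)
    if site_cfg and os.path.exists(site_cfg):
        load_site(site_cfg)   # corresponds to lit.load_config(config, site_cfg)
        return 'site'         # corresponds to raise SystemExit
    load_default()
    return 'default'
```

Invoked from the command line, this corresponds to something like `llvm-lit --param llvm_site_config=path/to/lit.site.cfg`, which lets an out-of-tree build point the in-tree test driver at its generated site configuration.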
diff --git a/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst b/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
index b92ab69..4cf2a5a 100644
--- a/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
+++ b/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
@@ -347,6 +347,12 @@ separate option groups syntactically.
    - ``really_hidden`` - the option will not be mentioned in any help
      output.
 
+   - ``comma_separated`` - Indicates that any commas specified for an option's
+     value should be used to split the value up into multiple values for the
+     option. This property is valid only for list options. In conjunction with
+     ``forward_value`` can be used to implement option forwarding in style of
+     gcc's ``-Wa,``.
+
    - ``multi_val n`` - this option takes *n* arguments (can be useful in some
      special cases). Usage example: ``(parameter_list_option "foo", (multi_val
      3))``; the command-line syntax is '-foo a b c'. Only list options can have
@@ -359,7 +365,11 @@ separate option groups syntactically.
      examples: ``(switch_option "foo", (init true))``; ``(prefix_option "bar",
      (init "baz"))``.
 
-   - ``extern`` - this option is defined in some other plugin, see below.
+   - ``extern`` - this option is defined in some other plugin, see `below`__.
+
+   __ extern_
+
+.. _extern:
 
 External options
 ----------------
@@ -448,7 +458,7 @@ use TableGen inheritance instead.
 
   - ``element_in_list`` - Returns true if a command-line parameter
     list contains a given value.
-    Example: ``(parameter_in_list "l", "pthread")``.
+    Example: ``(element_in_list "l", "pthread")``.
 
   - ``input_languages_contain`` - Returns true if a given language
     belongs to the current input language set.
@@ -547,7 +557,11 @@ The complete list of all currently implemented tool properties follows.
 
   - ``actions`` - A single big ``case`` expression that specifies how
     this tool reacts on command-line options (described in more detail
-    below).
+    `below`__).
+
+__ actions_
+
+.. _actions:
 
 Actions
 -------
@@ -585,35 +599,42 @@ The list of all possible actions follows.
 
 * Possible actions:
 
-   - ``append_cmd`` - append a string to the tool invocation
-     command.
-     Example: ``(case (switch_on "pthread"), (append_cmd
-     "-lpthread"))``
+   - ``append_cmd`` - Append a string to the tool invocation command.
+     Example: ``(case (switch_on "pthread"), (append_cmd "-lpthread"))``.
 
-   - ``error`` - exit with error.
+   - ``error`` - Exit with error.
      Example: ``(error "Mixing -c and -S is not allowed!")``.
 
-   - ``warning`` - print a warning.
+   - ``warning`` - Print a warning.
      Example: ``(warning "Specifying both -O1 and -O2 is meaningless!")``.
 
-   - ``forward`` - forward an option unchanged.  Example: ``(forward "Wall")``.
+   - ``forward`` - Forward the option unchanged.
+     Example: ``(forward "Wall")``.
 
-   - ``forward_as`` - Change the name of an option, but forward the
-     argument unchanged.
+   - ``forward_as`` - Change the option's name, but forward the argument
+     unchanged.
      Example: ``(forward_as "O0", "--disable-optimization")``.
 
-   - ``output_suffix`` - modify the output suffix of this
-     tool.
+   - ``forward_value`` - Forward only option's value. Cannot be used with switch
+     options (since they don't have values), but works fine with lists.
+     Example: ``(forward_value "Wa,")``.
+
+   - ``forward_transformed_value`` - As above, but applies a hook to the
+     option's value before forwarding (see `below`__). When
+     ``forward_transformed_value`` is applied to a list
+     option, the hook must have signature
+     ``std::string hooks::HookName (const std::vector<std::string>&)``.
+     Example: ``(forward_transformed_value "m", "ConvertToMAttr")``.
+
+     __ hooks_
+
+   - ``output_suffix`` - Modify the output suffix of this tool.
      Example: ``(output_suffix "i")``.
 
-   - ``stop_compilation`` - stop compilation after this tool processes
-     its input. Used without arguments.
+   - ``stop_compilation`` - Stop compilation after this tool processes its
+     input. Used without arguments.
+     Example: ``(stop_compilation)``.
 
-   - ``unpack_values`` - used for for splitting and forwarding
-     comma-separated lists of options, e.g. ``-Wa,-foo=bar,-baz`` is
-     converted to ``-foo=bar -baz`` and appended to the tool invocation
-     command.
-     Example: ``(unpack_values "Wa,")``.
 
 Language map
 ============
@@ -760,6 +781,8 @@ accessible only in the C++ code (i.e. hooks). Use the following code::
     extern const char* ProgramName;
     }
 
+    namespace hooks {
+
     std::string MyHook() {
     //...
     if (strcmp(ProgramName, "mydriver") == 0) {
@@ -767,6 +790,8 @@ accessible only in the C++ code (i.e. hooks). Use the following code::
 
     }
 
+    } // end namespace hooks
+
 In general, you're encouraged not to make the behaviour dependent on the
 executable file name, and use command-line switches instead. See for example how
 the ``Base`` plugin behaves when it needs to choose the correct linker options
diff --git a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
index df9b99e..5e6f6cb 100644
--- a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
+++ b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
@@ -41,9 +41,9 @@ def OptionList : OptionList<[
 //    (help "Optimization level 2. (Default)")),
 // (parameter_option "pre-RA-sched",
 //    (help "Example of an option that is passed to llc")),
- (prefix_list_option "Wa,",
+ (prefix_list_option "Wa,", (comma_separated),
     (help "Pass options to native assembler")),
- (prefix_list_option "Wl,",
+ (prefix_list_option "Wl,", (comma_separated),
     (help "Pass options to native linker"))
 // (prefix_list_option "Wllc,",
 //    (help "Pass options to llc")),
@@ -58,11 +58,11 @@ class clang_based<string language, string cmd, string ext_E> : Tool<
  (output_suffix "bc"),
  (cmd_line (case
            (switch_on "E"),
-           (case 
+           (case
               (not_empty "o"), !strconcat(cmd, " -E $INFILE -o $OUTFILE"),
               (default), !strconcat(cmd, " -E $INFILE")),
            (default), !strconcat(cmd, " $INFILE -o $OUTFILE"))),
- (actions (case 
+ (actions (case
                 (and (multiple_input_files), (or (switch_on "S"), (switch_on "c"))),
               (error "cannot specify -o with -c or -S with multiple files"),
                 (switch_on "E"), [(stop_compilation), (output_suffix ext_E)],
@@ -138,7 +138,7 @@ def gpasm : Tool<[
  (actions (case
           (switch_on "c"), (stop_compilation),
           (switch_on "g"), (append_cmd "-g"),
-          (not_empty "Wa,"), (unpack_values "Wa,")))
+          (not_empty "Wa,"), (forward_value "Wa,")))
 ]>;
 
 def mplink : Tool<[
@@ -147,13 +147,13 @@ def mplink : Tool<[
  (output_suffix "cof"),
  (cmd_line "$CALL(GetBinDir)mplink.exe -k $CALL(GetStdLinkerScriptsDir) -l $CALL(GetStdLibsDir) -p 16f1937  intrinsics.lib devices.lib $INFILE -o $OUTFILE"),
  (actions (case
-          (not_empty "Wl,"), (unpack_values "Wl,"),
+          (not_empty "Wl,"), (forward_value "Wl,"),
           (not_empty "L"), (forward_as "L", "-l"),
           (not_empty "K"), (forward_as "K", "-k"),
           (not_empty "m"), (forward "m"),
 //          (not_empty "l"), [(unpack_values "l"),(append_cmd ".lib")])),
-          (not_empty "k"), (unpack_values "k"),
-          (not_empty "l"), (unpack_values "l"))),
+          (not_empty "k"), (forward_value "k"),
+          (not_empty "l"), (forward_value "l"))),
  (join)
 ]>;
 
@@ -175,13 +175,13 @@ def LanguageMap : LanguageMap<[
 def CompilationGraph : CompilationGraph<[
     Edge<"root", "clang_cc">,
     Edge<"root", "llvm_ld">,
-    OptionalEdge<"root", "llvm_ld_optimizer", (case 
+    OptionalEdge<"root", "llvm_ld_optimizer", (case
                                          (switch_on "S"), (inc_weight),
                                          (switch_on "c"), (inc_weight))>,
     Edge<"root", "gpasm">,
     Edge<"root", "mplink">,
     Edge<"clang_cc", "llvm_ld">,
-    OptionalEdge<"clang_cc", "llvm_ld_optimizer", (case 
+    OptionalEdge<"clang_cc", "llvm_ld_optimizer", (case
                                          (switch_on "S"), (inc_weight),
                                          (switch_on "c"), (inc_weight))>,
     Edge<"llvm_ld", "pic16passes">,
diff --git a/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
index c26a567..8f928cc 100644
--- a/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
+++ b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
@@ -38,8 +38,22 @@ def OptList : OptionList<[
     (help "Compile and assemble, but do not link")),
  (switch_option "pthread",
     (help "Enable threads")),
+ (switch_option "m32",
+    (help "Generate code for a 32-bit environment"), (hidden)),
+ (switch_option "m64",
+    (help "Generate code for a 64-bit environment"), (hidden)),
+ (switch_option "fPIC",
+    (help "Relocation model: PIC"), (hidden)),
+ (switch_option "mdynamic-no-pic",
+    (help "Relocation model: dynamic-no-pic"), (hidden)),
  (parameter_option "linker",
     (help "Choose linker (possible values: gcc, g++)")),
+ (parameter_option "mtune",
+    (help "Target a specific CPU type"), (hidden)),
+ (parameter_option "march",
+    (help "A synonym for -mtune"), (hidden)),
+ (parameter_option "mcpu",
+    (help "A deprecated synonym for -mtune"), (hidden)),
  (parameter_option "MF",
     (help "Specify a file to write dependencies to"), (hidden)),
  (parameter_option "MT",
@@ -47,13 +61,19 @@ def OptList : OptionList<[
     (hidden)),
  (parameter_list_option "include",
     (help "Include the named file prior to preprocessing")),
+ (parameter_list_option "framework",
+    (help "Specifies a framework to link against")),
+ (parameter_list_option "weak_framework",
+    (help "Specifies a framework to weakly link against"), (hidden)),
+ (prefix_list_option "F",
+    (help "Add a directory to framework search path")),
  (prefix_list_option "I",
     (help "Add a directory to include path")),
  (prefix_list_option "D",
     (help "Define a macro")),
- (prefix_list_option "Wa,",
+ (prefix_list_option "Wa,", (comma_separated),
     (help "Pass options to assembler")),
- (prefix_list_option "Wllc,",
+ (prefix_list_option "Wllc,", (comma_separated),
     (help "Pass options to llc")),
  (prefix_list_option "L",
     (help "Add a directory to link path")),
@@ -61,8 +81,11 @@ def OptList : OptionList<[
     (help "Search a library when linking")),
  (prefix_list_option "Wl,",
     (help "Pass options to linker")),
- (prefix_list_option "Wo,",
-    (help "Pass options to opt"))
+ (prefix_list_option "Wo,", (comma_separated),
+    (help "Pass options to opt")),
+ (prefix_list_option "m",
+     (help "Enable or disable various extensions (-mmmx, -msse, etc.)"),
+     (hidden))
 ]>;
 
 // Option preprocessor.
@@ -105,11 +128,21 @@ class llvm_gcc_based <string cmd_prefix, string in_lang, string E_ext> : Tool<
          (and (switch_on "emit-llvm"), (switch_on "c")), (stop_compilation),
          (switch_on "fsyntax-only"), (stop_compilation),
          (not_empty "include"), (forward "include"),
+         (not_empty "save-temps"), (append_cmd "-save-temps"),
          (not_empty "I"), (forward "I"),
+         (not_empty "F"), (forward "F"),
          (not_empty "D"), (forward "D"),
+         (not_empty "march"), (forward "march"),
+         (not_empty "mtune"), (forward "mtune"),
+         (not_empty "mcpu"), (forward "mcpu"),
+         (not_empty "m"), (forward "m"),
+         (switch_on "m32"), (forward "m32"),
+         (switch_on "m64"), (forward "m64"),
          (switch_on "O1"), (forward "O1"),
          (switch_on "O2"), (forward "O2"),
          (switch_on "O3"), (forward "O3"),
+         (switch_on "fPIC"), (forward "fPIC"),
+         (switch_on "mdynamic-no-pic"), (forward "mdynamic-no-pic"),
          (not_empty "MF"), (forward "MF"),
          (not_empty "MT"), (forward "MT"))),
  (sink)
@@ -126,7 +159,7 @@ def opt : Tool<
 [(in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (actions (case (not_empty "Wo,"), (unpack_values "Wo,"),
+ (actions (case (not_empty "Wo,"), (forward_value "Wo,"),
                 (switch_on "O1"), (forward "O1"),
                 (switch_on "O2"), (forward "O2"),
                 (switch_on "O3"), (forward "O3"))),
@@ -148,7 +181,7 @@ def llvm_gcc_assembler : Tool<
  (cmd_line "@LLVMGCCCOMMAND@ -c -x assembler $INFILE -o $OUTFILE"),
  (actions (case
           (switch_on "c"), (stop_compilation),
-          (not_empty "Wa,"), (unpack_values "Wa,")))
+          (not_empty "Wa,"), (forward_value "Wa,")))
 ]>;
 
 def llc : Tool<
@@ -162,7 +195,14 @@ def llc : Tool<
           (switch_on "O1"), (forward "O1"),
           (switch_on "O2"), (forward "O2"),
           (switch_on "O3"), (forward "O3"),
-          (not_empty "Wllc,"), (unpack_values "Wllc,")))
+          (switch_on "fPIC"), (append_cmd "-relocation-model=pic"),
+          (switch_on "mdynamic-no-pic"),
+                     (append_cmd "-relocation-model=dynamic-no-pic"),
+          (not_empty "march"), (forward "mcpu"),
+          (not_empty "mtune"), (forward "mcpu"),
+          (not_empty "mcpu"), (forward "mcpu"),
+          (not_empty "m"), (forward_transformed_value "m", "ConvertToMAttr"),
+          (not_empty "Wllc,"), (forward_value "Wllc,")))
 ]>;
 
 // Base class for linkers
@@ -175,6 +215,11 @@ class llvm_gcc_based_linker <string cmd_prefix> : Tool<
  (actions (case
           (switch_on "pthread"), (append_cmd "-lpthread"),
           (not_empty "L"), (forward "L"),
+          (not_empty "F"), (forward "F"),
+          (not_empty "framework"), (forward "framework"),
+          (not_empty "weak_framework"), (forward "weak_framework"),
+          (switch_on "m32"), (forward "m32"),
+          (switch_on "m64"), (forward "m64"),
           (not_empty "l"), (forward "l"),
           (not_empty "Wl,"), (forward "Wl,")))
 ]>;
diff --git a/libclamav/c++/llvm/tools/llvmc/plugins/Base/Hooks.cpp b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Hooks.cpp
new file mode 100644
index 0000000..661a914
--- /dev/null
+++ b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Hooks.cpp
@@ -0,0 +1,33 @@
+#include <string>
+#include <vector>
+
+namespace hooks {
+typedef std::vector<std::string> StrVec;
+
+/// ConvertToMAttr - Convert -m* and -mno-* to -mattr=+*,-*
+std::string ConvertToMAttr(const StrVec& Opts) {
+  std::string out("-mattr=");
+
+  bool firstIter = true;
+  for (StrVec::const_iterator B = Opts.begin(), E = Opts.end(); B!=E; ++B) {
+    const std::string& Arg = *B;
+
+    if (firstIter)
+      firstIter = false;
+    else
+      out += ",";
+
+    if (Arg.find("no-") == 0 && Arg[3] != 0) {
+      out += '-';
+      out += Arg.c_str() + 3;
+    }
+    else {
+      out += '+';
+      out += Arg;
+    }
+  }
+
+  return out;
+}
+
+}
diff --git a/libclamav/c++/llvm/tools/llvmc/plugins/Clang/Clang.td b/libclamav/c++/llvm/tools/llvmc/plugins/Clang/Clang.td
index a179c53..ac8ac15 100644
--- a/libclamav/c++/llvm/tools/llvmc/plugins/Clang/Clang.td
+++ b/libclamav/c++/llvm/tools/llvmc/plugins/Clang/Clang.td
@@ -68,7 +68,7 @@ def as : Tool<
  (out_language "object-code"),
  (output_suffix "o"),
  (cmd_line "as $INFILE -o $OUTFILE"),
- (actions (case (not_empty "Wa,"), (unpack_values "Wa,"),
+ (actions (case (not_empty "Wa,"), (forward_value "Wa,"),
                 (switch_on "c"), (stop_compilation)))
 ]>;
 
@@ -82,7 +82,7 @@ def llvm_ld : Tool<
           (switch_on "pthread"), (append_cmd "-lpthread"),
           (not_empty "L"), (forward "L"),
           (not_empty "l"), (forward "l"),
-          (not_empty "Wl,"), (unpack_values "Wl,"))),
+          (not_empty "Wl,"), (forward_value "Wl,"))),
  (join)
 ]>;
 
diff --git a/libclamav/c++/llvm/unittests/ADT/DeltaAlgorithmTest.cpp b/libclamav/c++/llvm/unittests/ADT/DeltaAlgorithmTest.cpp
new file mode 100644
index 0000000..3628922
--- /dev/null
+++ b/libclamav/c++/llvm/unittests/ADT/DeltaAlgorithmTest.cpp
@@ -0,0 +1,96 @@
+//===- llvm/unittest/ADT/DeltaAlgorithmTest.cpp ---------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "gtest/gtest.h"
+#include "llvm/ADT/DeltaAlgorithm.h"
+#include <algorithm>
+#include <cstdarg>
+using namespace llvm;
+
+std::ostream &operator<<(std::ostream &OS,
+                         const std::set<unsigned> &S) {
+  OS << "{";
+  for (std::set<unsigned>::const_iterator it = S.begin(),
+         ie = S.end(); it != ie; ++it) {
+    if (it != S.begin())
+      OS << ",";
+    OS << *it;
+  }
+  OS << "}";
+  return OS;
+}
+
+namespace {
+
+class FixedDeltaAlgorithm : public DeltaAlgorithm {
+  changeset_ty FailingSet;
+  unsigned NumTests;
+
+protected:
+  virtual bool ExecuteOneTest(const changeset_ty &Changes) {
+    ++NumTests;
+    return std::includes(Changes.begin(), Changes.end(),
+                         FailingSet.begin(), FailingSet.end());
+  }
+
+public:
+  FixedDeltaAlgorithm(const changeset_ty &_FailingSet)
+    : FailingSet(_FailingSet),
+      NumTests(0) {}
+
+  unsigned getNumTests() const { return NumTests; }
+};
+
+std::set<unsigned> fixed_set(unsigned N, ...) {
+  std::set<unsigned> S;
+  va_list ap;
+  va_start(ap, N);
+  for (unsigned i = 0; i != N; ++i)
+    S.insert(va_arg(ap, unsigned));
+  va_end(ap);
+  return S;
+}
+
+std::set<unsigned> range(unsigned Start, unsigned End) {
+  std::set<unsigned> S;
+  while (Start != End)
+    S.insert(Start++);
+  return S;
+}
+
+std::set<unsigned> range(unsigned N) {
+  return range(0, N);
+}
+
+TEST(DeltaAlgorithmTest, Basic) {
+  // P = {3,5,7} \in S
+  //   [0, 20) should minimize to {3,5,7} in a reasonable number of tests.
+  std::set<unsigned> Fails = fixed_set(3, 3, 5, 7);
+  FixedDeltaAlgorithm FDA(Fails);
+  EXPECT_EQ(fixed_set(3, 3, 5, 7), FDA.Run(range(20)));
+  EXPECT_GE(33U, FDA.getNumTests());
+
+  // P = {3,5,7} \in S
+  //   [10, 20) should minimize to [10,20)
+  EXPECT_EQ(range(10,20), FDA.Run(range(10,20)));
+
+  // P = [0,4) \in S
+  //   [0, 4) should minimize to [0,4) in 11 tests.
+  //
+  // 11 = |{ {},
+  //         {0}, {1}, {2}, {3},
+  //         {1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2}, 
+  //         {0, 1}, {2, 3} }|
+  FDA = FixedDeltaAlgorithm(range(10));
+  EXPECT_EQ(range(4), FDA.Run(range(4)));
+  EXPECT_EQ(11U, FDA.getNumTests());  
+}
+
+}
+
diff --git a/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp b/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp
index 101ff24..078028a 100644
--- a/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp
+++ b/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp
@@ -398,7 +398,7 @@ void Pattern::PrintFailureInfo(const SourceMgr &SM, StringRef Buffer,
     }
   }
 
-  if (BestQuality < 50) {
+  if (Best != StringRef::npos && BestQuality < 50) {
     // Print the "possible intended match here" line if we found something
     // reasonable.
     SM.PrintMessage(SMLoc::getFromPointer(Buffer.data() + Best),
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp
index 7e6c769..e9f30be 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp
@@ -61,14 +61,11 @@ void CodeEmitterGen::reverseBits(std::vector<Record*> &Insts) {
 
 // If the VarBitInit at position 'bit' matches the specified variable then
 // return the variable bit position.  Otherwise return -1.
-int CodeEmitterGen::getVariableBit(const std::string &VarName,
+int CodeEmitterGen::getVariableBit(const Init *VarVal,
             BitsInit *BI, int bit) {
   if (VarBitInit *VBI = dynamic_cast<VarBitInit*>(BI->getBit(bit))) {
     TypedInit *TI = VBI->getVariable();
-    
-    if (VarInit *VI = dynamic_cast<VarInit*>(TI)) {
-      if (VI->getName() == VarName) return VBI->getBitNum();
-    }
+    if (TI == VarVal) return VBI->getBitNum();
   }
   
   return -1;
@@ -162,11 +159,11 @@ void CodeEmitterGen::run(raw_ostream &o) {
       if (!Vals[i].getPrefix() && !Vals[i].getValue()->isComplete()) {
         // Is the operand continuous? If so, we can just mask and OR it in
         // instead of doing it bit-by-bit, saving a lot in runtime cost.
-        const std::string &VarName = Vals[i].getName();
+        const Init *VarVal = Vals[i].getValue();
         bool gotOp = false;
         
         for (int bit = BI->getNumBits()-1; bit >= 0; ) {
-          int varBit = getVariableBit(VarName, BI, bit);
+          int varBit = getVariableBit(VarVal, BI, bit);
           
           if (varBit == -1) {
             --bit;
@@ -176,7 +173,7 @@ void CodeEmitterGen::run(raw_ostream &o) {
             int N = 1;
             
             for (--bit; bit >= 0;) {
-              varBit = getVariableBit(VarName, BI, bit);
+              varBit = getVariableBit(VarVal, BI, bit);
               if (varBit == -1 || varBit != (beginVarBit - N)) break;
               ++N;
               --bit;
@@ -188,7 +185,7 @@ void CodeEmitterGen::run(raw_ostream &o) {
               while (CGI.isFlatOperandNotEmitted(op))
                 ++op;
               
-              Case += "      // op: " + VarName + "\n"
+              Case += "      // op: " + Vals[i].getName() + "\n"
                    +  "      op = getMachineOpValue(MI, MI.getOperand("
                    +  utostr(op++) + "));\n";
               gotOp = true;
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.h b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.h
index f0b3229..2dc34ba 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.h
+++ b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.h
@@ -23,6 +23,7 @@ namespace llvm {
 
 class RecordVal;
 class BitsInit;
+struct Init;
 
 class CodeEmitterGen : public TableGenBackend {
   RecordKeeper &Records;
@@ -35,7 +36,7 @@ private:
   void emitMachineOpEmitter(raw_ostream &o, const std::string &Namespace);
   void emitGetValueBit(raw_ostream &o, const std::string &Namespace);
   void reverseBits(std::vector<Record*> &Insts);
-  int getVariableBit(const std::string &VarName, BitsInit *BI, int bit);
+  int getVariableBit(const Init *VarVal, BitsInit *BI, int bit);
 };
 
 } // End llvm namespace
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h
index 398764b..c51232a 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h
@@ -445,7 +445,7 @@ struct PatternToMatch {
                  const std::vector<Record*> &dstregs,
                  unsigned complexity):
     Predicates(preds), SrcPattern(src), DstPattern(dst), Dstregs(dstregs),
-    AddedComplexity(complexity) {};
+    AddedComplexity(complexity) {}
 
   ListInit        *Predicates;  // Top level predicate conditions to match.
   TreePatternNode *SrcPattern;  // Source pattern to match.
diff --git a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
index 546988a..613ae03 100644
--- a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
@@ -98,10 +98,13 @@ const std::string GetOperatorName(const DagInit& D) {
 
 // checkNumberOfArguments - Ensure that the number of args in d is
 // greater than or equal to min_arguments, otherwise throw an exception.
-void checkNumberOfArguments (const DagInit* d, unsigned min_arguments) {
-  if (!d || d->getNumArgs() < min_arguments)
+void checkNumberOfArguments (const DagInit* d, unsigned minArgs) {
+  if (!d || d->getNumArgs() < minArgs)
     throw GetOperatorName(d) + ": too few arguments!";
 }
+void checkNumberOfArguments (const DagInit& d, unsigned minArgs) {
+  checkNumberOfArguments(&d, minArgs);
+}
 
 // isDagEmpty - is this DAG marked with an empty marker?
 bool isDagEmpty (const DagInit* d) {
@@ -208,7 +211,8 @@ OptionType::OptionType stringToOptionType(const std::string& T) {
 namespace OptionDescriptionFlags {
   enum OptionDescriptionFlags { Required = 0x1, Hidden = 0x2,
                                 ReallyHidden = 0x4, Extern = 0x8,
-                                OneOrMore = 0x10, ZeroOrOne = 0x20 };
+                                OneOrMore = 0x10, ZeroOrOne = 0x20,
+                                CommaSeparated = 0x40 };
 }
 
 /// OptionDescription - Represents data contained in a single
@@ -244,6 +248,9 @@ struct OptionDescription {
 
   bool isMultiVal() const;
 
+  bool isCommaSeparated() const;
+  void setCommaSeparated();
+
   bool isExtern() const;
   void setExtern();
 
@@ -296,6 +303,13 @@ bool OptionDescription::isMultiVal() const {
   return MultiVal > 1;
 }
 
+bool OptionDescription::isCommaSeparated() const {
+  return Flags & OptionDescriptionFlags::CommaSeparated;
+}
+void OptionDescription::setCommaSeparated() {
+  Flags |= OptionDescriptionFlags::CommaSeparated;
+}
+
 bool OptionDescription::isExtern() const {
   return Flags & OptionDescriptionFlags::Extern;
 }
@@ -456,54 +470,64 @@ void OptionDescriptions::InsertDescription (const OptionDescription& o) {
 
 /// HandlerTable - A base class for function objects implemented as
 /// 'tables of handlers'.
-template <class T>
+template <typename Handler>
 class HandlerTable {
 protected:
   // Implementation details.
 
-  /// Handler -
-  typedef void (T::* Handler) (const DagInit*);
   /// HandlerMap - A map from property names to property handlers
   typedef StringMap<Handler> HandlerMap;
 
   static HandlerMap Handlers_;
   static bool staticMembersInitialized_;
 
-  T* childPtr;
 public:
 
-  HandlerTable(T* cp) : childPtr(cp)
-  {}
-
-  /// operator() - Just forwards to the corresponding property
-  /// handler.
-  void operator() (Init* i) {
-    const DagInit& property = InitPtrToDag(i);
-    const std::string& property_name = GetOperatorName(property);
-    typename HandlerMap::iterator method = Handlers_.find(property_name);
+  Handler GetHandler (const std::string& HandlerName) const {
+    typename HandlerMap::iterator method = Handlers_.find(HandlerName);
 
     if (method != Handlers_.end()) {
       Handler h = method->second;
-      (childPtr->*h)(&property);
+      return h;
     }
     else {
-      throw "No handler found for property " + property_name + "!";
+      throw "No handler found for property " + HandlerName + "!";
     }
   }
 
-  void AddHandler(const char* Property, Handler Handl) {
-    Handlers_[Property] = Handl;
+  void AddHandler(const char* Property, Handler H) {
+    Handlers_[Property] = H;
   }
+
 };
 
-template <class T> typename HandlerTable<T>::HandlerMap
-HandlerTable<T>::Handlers_;
-template <class T> bool HandlerTable<T>::staticMembersInitialized_ = false;
+template <class FunctionObject>
+void InvokeDagInitHandler(FunctionObject* Obj, Init* i) {
+  typedef void (FunctionObject::*Handler) (const DagInit*);
+
+  const DagInit& property = InitPtrToDag(i);
+  const std::string& property_name = GetOperatorName(property);
+  Handler h = Obj->GetHandler(property_name);
+
+  ((Obj)->*(h))(&property);
+}
+
+template <typename H>
+typename HandlerTable<H>::HandlerMap HandlerTable<H>::Handlers_;
+
+template <typename H>
+bool HandlerTable<H>::staticMembersInitialized_ = false;
 
 
 /// CollectOptionProperties - Function object for iterating over an
 /// option property list.
-class CollectOptionProperties : public HandlerTable<CollectOptionProperties> {
+class CollectOptionProperties;
+typedef void (CollectOptionProperties::* CollectOptionPropertiesHandler)
+(const DagInit*);
+
+class CollectOptionProperties
+: public HandlerTable<CollectOptionPropertiesHandler>
+{
 private:
 
   /// optDescs_ - OptionDescriptions table. This is where the
@@ -513,7 +537,7 @@ private:
 public:
 
   explicit CollectOptionProperties(OptionDescription& OD)
-    : HandlerTable<CollectOptionProperties>(this), optDesc_(OD)
+    : optDesc_(OD)
   {
     if (!staticMembersInitialized_) {
       AddHandler("extern", &CollectOptionProperties::onExtern);
@@ -525,11 +549,18 @@ public:
       AddHandler("really_hidden", &CollectOptionProperties::onReallyHidden);
       AddHandler("required", &CollectOptionProperties::onRequired);
       AddHandler("zero_or_one", &CollectOptionProperties::onZeroOrOne);
+      AddHandler("comma_separated", &CollectOptionProperties::onCommaSeparated);
 
       staticMembersInitialized_ = true;
     }
   }
 
+  /// operator() - Just forwards to the corresponding property
+  /// handler.
+  void operator() (Init* i) {
+    InvokeDagInitHandler(this, i);
+  }
+
 private:
 
   /// Option property handlers --
@@ -555,11 +586,18 @@ private:
     optDesc_.setReallyHidden();
   }
 
+  void onCommaSeparated (const DagInit* d) {
+    checkNumberOfArguments(d, 0);
+    if (!optDesc_.isList())
+      throw "'comma_separated' is valid only on list options!";
+    optDesc_.setCommaSeparated();
+  }
+
   void onRequired (const DagInit* d) {
     checkNumberOfArguments(d, 0);
-    if (optDesc_.isOneOrMore())
-      throw std::string("An option can't have both (required) "
-                        "and (one_or_more) properties!");
+    if (optDesc_.isOneOrMore() || optDesc_.isZeroOrOne())
+      throw "Only one of (required), (zero_or_one) or "
+        "(one_or_more) properties is allowed!";
     optDesc_.setRequired();
   }
 
@@ -572,7 +610,7 @@ private:
     correct |= (optDesc_.isSwitch() && (str == "true" || str == "false"));
 
     if (!correct)
-      throw std::string("Incorrect usage of the 'init' option property!");
+      throw "Incorrect usage of the 'init' option property!";
 
     optDesc_.InitVal = i;
   }
@@ -580,8 +618,8 @@ private:
   void onOneOrMore (const DagInit* d) {
     checkNumberOfArguments(d, 0);
     if (optDesc_.isRequired() || optDesc_.isZeroOrOne())
-      throw std::string("Only one of (required), (zero_or_one) or "
-                        "(one_or_more) properties is allowed!");
+      throw "Only one of (required), (zero_or_one) or "
+        "(one_or_more) properties is allowed!";
     if (!OptionType::IsList(optDesc_.Type))
       llvm::errs() << "Warning: specifying the 'one_or_more' property "
         "on a non-list option will have no effect.\n";
@@ -591,8 +629,8 @@ private:
   void onZeroOrOne (const DagInit* d) {
     checkNumberOfArguments(d, 0);
     if (optDesc_.isRequired() || optDesc_.isOneOrMore())
-      throw std::string("Only one of (required), (zero_or_one) or "
-                        "(one_or_more) properties is allowed!");
+      throw "Only one of (required), (zero_or_one) or "
+        "(one_or_more) properties is allowed!";
     if (!OptionType::IsList(optDesc_.Type))
       llvm::errs() << "Warning: specifying the 'zero_or_one' property"
         "on a non-list option will have no effect.\n";
@@ -603,11 +641,10 @@ private:
     checkNumberOfArguments(d, 1);
     int val = InitPtrToInt(d->getArg(0));
     if (val < 2)
-      throw std::string("Error in the 'multi_val' property: "
-                        "the value must be greater than 1!");
+      throw "Error in the 'multi_val' property: "
+        "the value must be greater than 1!";
     if (!OptionType::IsList(optDesc_.Type))
-      throw std::string("The multi_val property is valid only "
-                        "on list options!");
+      throw "The multi_val property is valid only on list options!";
     optDesc_.MultiVal = val;
   }
 
@@ -712,7 +749,13 @@ typedef std::vector<IntrusiveRefCntPtr<ToolDescription> > ToolDescriptions;
 
 /// CollectToolProperties - Function object for iterating over a list of
 /// tool property records.
-class CollectToolProperties : public HandlerTable<CollectToolProperties> {
+
+class CollectToolProperties;
+typedef void (CollectToolProperties::* CollectToolPropertiesHandler)
+(const DagInit*);
+
+class CollectToolProperties : public HandlerTable<CollectToolPropertiesHandler>
+{
 private:
 
   /// toolDesc_ - Properties of the current Tool. This is where the
@@ -722,7 +765,7 @@ private:
 public:
 
   explicit CollectToolProperties (ToolDescription& d)
-    : HandlerTable<CollectToolProperties>(this) , toolDesc_(d)
+    : toolDesc_(d)
   {
     if (!staticMembersInitialized_) {
 
@@ -738,6 +781,10 @@ public:
     }
   }
 
+  void operator() (Init* i) {
+    InvokeDagInitHandler(this, i);
+  }
+
 private:
 
   /// Property handlers --
@@ -749,8 +796,7 @@ private:
     Init* Case = d->getArg(0);
     if (typeid(*Case) != typeid(DagInit) ||
         GetOperatorName(static_cast<DagInit*>(Case)) != "case")
-      throw
-        std::string("The argument to (actions) should be a 'case' construct!");
+      throw "The argument to (actions) should be a 'case' construct!";
     toolDesc_.Actions = Case;
   }
 
@@ -851,8 +897,8 @@ int CalculatePriority(RecordVector::const_iterator B,
     priority  = static_cast<int>((*B)->getValueAsInt("priority"));
 
     if (++B != E)
-      throw std::string("More than one 'PluginPriority' instance found: "
-                        "most probably an error!");
+      throw "More than one 'PluginPriority' instance found: "
+        "most probably an error!";
   }
 
   return priority;
@@ -943,7 +989,7 @@ void TypecheckGraph (const RecordVector& EdgeVector,
     }
 
     if (NodeB == "root")
-      throw std::string("Edges back to the root are not allowed!");
+      throw "Edges back to the root are not allowed!";
   }
 }
 
@@ -963,7 +1009,7 @@ void WalkCase(const Init* Case, F1 TestCallback, F2 StatementCallback,
 
   // Error checks.
   if (GetOperatorName(d) != "case")
-    throw std::string("WalkCase should be invoked only on 'case' expressions!");
+    throw "WalkCase should be invoked only on 'case' expressions!";
 
   if (d.getNumArgs() < 2)
     throw "There should be at least one clause in the 'case' expression:\n"
@@ -983,8 +1029,8 @@ void WalkCase(const Init* Case, F1 TestCallback, F2 StatementCallback,
       const DagInit& Test = InitPtrToDag(arg);
 
       if (GetOperatorName(Test) == "default" && (i+1 != numArgs))
-        throw std::string("The 'default' clause should be the last in the"
-                          "'case' construct!");
+        throw "The 'default' clause should be the last in the "
+          "'case' construct!";
       if (i == numArgs)
         throw "Case construct handler: no corresponding action "
           "found for the test " + Test.getAsString() + '!';
@@ -1017,9 +1063,11 @@ class ExtractOptionNames {
     const DagInit& Stmt = InitPtrToDag(Statement);
     const std::string& ActionName = GetOperatorName(Stmt);
     if (ActionName == "forward" || ActionName == "forward_as" ||
-        ActionName == "unpack_values" || ActionName == "switch_on" ||
-        ActionName == "parameter_equals" || ActionName == "element_in_list" ||
-        ActionName == "not_empty" || ActionName == "empty") {
+        ActionName == "forward_value" ||
+        ActionName == "forward_transformed_value" ||
+        ActionName == "switch_on" || ActionName == "parameter_equals" ||
+        ActionName == "element_in_list" || ActionName == "not_empty" ||
+        ActionName == "empty") {
       checkNumberOfArguments(&Stmt, 1);
       const std::string& Name = InitPtrToString(Stmt.getArg(0));
       OptionNames_.insert(Name);
@@ -1155,6 +1203,9 @@ public:
     if (OptName == "o") {
       O << Neg << "OutputFilename.empty()";
     }
+    else if (OptName == "save-temps") {
+      O << Neg << "(SaveTemps == SaveTempsEnum::Unset)";
+    }
     else {
       const OptionDescription& OptDesc = OptDescs_.FindListOrParameter(OptName);
       O << Neg << OptDesc.GenVariableName() << ".empty()";
@@ -1499,7 +1550,7 @@ StrVector::const_iterator SubstituteSpecialCommands
     const std::string& CmdName = *Pos;
 
     if (CmdName == ")")
-      throw std::string("$CALL invocation: empty argument list!");
+      throw "$CALL invocation: empty argument list!";
 
     O << "hooks::";
     O << CmdName << "(";
@@ -1694,7 +1745,7 @@ void EmitForwardOptionPropertyHandlingCode (const OptionDescription& D,
     break;
   case OptionType::Alias:
   default:
-    throw std::string("Aliases are not allowed in tool option descriptions!");
+    throw "Aliases are not allowed in tool option descriptions!";
   }
 }
 
@@ -1724,90 +1775,134 @@ struct ActionHandlingCallbackBase {
 
 /// EmitActionHandlersCallback - Emit code that handles actions. Used by
 /// EmitGenerateActionMethod() as an argument to EmitCaseConstructHandler().
-class EmitActionHandlersCallback : ActionHandlingCallbackBase {
+class EmitActionHandlersCallback;
+typedef void (EmitActionHandlersCallback::* EmitActionHandlersCallbackHandler)
+(const DagInit&, unsigned, raw_ostream&) const;
+
+class EmitActionHandlersCallback
+: public ActionHandlingCallbackBase,
+  public HandlerTable<EmitActionHandlersCallbackHandler>
+{
   const OptionDescriptions& OptDescs;
+  typedef EmitActionHandlersCallbackHandler Handler;
 
-  void processActionDag(const Init* Statement, unsigned IndentLevel,
-                        raw_ostream& O) const
+  void onAppendCmd (const DagInit& Dag,
+                    unsigned IndentLevel, raw_ostream& O) const
   {
-    const DagInit& Dag = InitPtrToDag(Statement);
-    const std::string& ActionName = GetOperatorName(Dag);
+    checkNumberOfArguments(&Dag, 1);
+    const std::string& Cmd = InitPtrToString(Dag.getArg(0));
+    StrVector Out;
+    llvm::SplitString(Cmd, Out);
 
-    if (ActionName == "append_cmd") {
-      checkNumberOfArguments(&Dag, 1);
-      const std::string& Cmd = InitPtrToString(Dag.getArg(0));
-      StrVector Out;
-      llvm::SplitString(Cmd, Out);
+    for (StrVector::const_iterator B = Out.begin(), E = Out.end();
+         B != E; ++B)
+      O.indent(IndentLevel) << "vec.push_back(\"" << *B << "\");\n";
+  }
 
-      for (StrVector::const_iterator B = Out.begin(), E = Out.end();
-           B != E; ++B)
-        O.indent(IndentLevel) << "vec.push_back(\"" << *B << "\");\n";
-    }
-    else if (ActionName == "error") {
-      this->onErrorDag(Dag, IndentLevel, O);
-    }
-    else if (ActionName == "warning") {
-      this->onWarningDag(Dag, IndentLevel, O);
-    }
-    else if (ActionName == "forward") {
-      checkNumberOfArguments(&Dag, 1);
-      const std::string& Name = InitPtrToString(Dag.getArg(0));
-      EmitForwardOptionPropertyHandlingCode(OptDescs.FindOption(Name),
-                                            IndentLevel, "", O);
-    }
-    else if (ActionName == "forward_as") {
-      checkNumberOfArguments(&Dag, 2);
-      const std::string& Name = InitPtrToString(Dag.getArg(0));
-      const std::string& NewName = InitPtrToString(Dag.getArg(1));
-      EmitForwardOptionPropertyHandlingCode(OptDescs.FindOption(Name),
-                                            IndentLevel, NewName, O);
-    }
-    else if (ActionName == "output_suffix") {
-      checkNumberOfArguments(&Dag, 1);
-      const std::string& OutSuf = InitPtrToString(Dag.getArg(0));
-      O.indent(IndentLevel) << "output_suffix = \"" << OutSuf << "\";\n";
-    }
-    else if (ActionName == "stop_compilation") {
-      O.indent(IndentLevel) << "stop_compilation = true;\n";
-    }
-    else if (ActionName == "unpack_values") {
-      checkNumberOfArguments(&Dag, 1);
-      const std::string& Name = InitPtrToString(Dag.getArg(0));
-      const OptionDescription& D = OptDescs.FindOption(Name);
-
-      if (D.isMultiVal())
-        throw std::string("Can't use unpack_values with multi-valued options!");
-
-      if (D.isList()) {
-        O.indent(IndentLevel)
-          << "for (" << D.GenTypeDeclaration()
-          << "::iterator B = " << D.GenVariableName() << ".begin(),\n";
-        O.indent(IndentLevel)
-          << "E = " << D.GenVariableName() << ".end(); B != E; ++B)\n";
-        O.indent(IndentLevel + Indent1)
-          << "llvm::SplitString(*B, vec, \",\");\n";
-      }
-      else if (D.isParameter()){
-        O.indent(IndentLevel) << "llvm::SplitString("
-                              << D.GenVariableName() << ", vec, \",\");\n";
-      }
-      else {
-        throw "Option '" + D.Name +
-          "': switches can't have the 'unpack_values' property!";
-      }
+  void onForward (const DagInit& Dag,
+                  unsigned IndentLevel, raw_ostream& O) const
+  {
+    checkNumberOfArguments(&Dag, 1);
+    const std::string& Name = InitPtrToString(Dag.getArg(0));
+    EmitForwardOptionPropertyHandlingCode(OptDescs.FindOption(Name),
+                                          IndentLevel, "", O);
+  }
+
+  void onForwardAs (const DagInit& Dag,
+                    unsigned IndentLevel, raw_ostream& O) const
+  {
+    checkNumberOfArguments(&Dag, 2);
+    const std::string& Name = InitPtrToString(Dag.getArg(0));
+    const std::string& NewName = InitPtrToString(Dag.getArg(1));
+    EmitForwardOptionPropertyHandlingCode(OptDescs.FindOption(Name),
+                                          IndentLevel, NewName, O);
+  }
+
+  void onForwardValue (const DagInit& Dag,
+                       unsigned IndentLevel, raw_ostream& O) const
+  {
+    checkNumberOfArguments(&Dag, 1);
+    const std::string& Name = InitPtrToString(Dag.getArg(0));
+    const OptionDescription& D = OptDescs.FindListOrParameter(Name);
+
+    if (D.isParameter()) {
+      O.indent(IndentLevel) << "vec.push_back("
+                            << D.GenVariableName() << ");\n";
     }
     else {
-      throw "Unknown action name: " + ActionName + "!";
+      O.indent(IndentLevel) << "std::copy(" << D.GenVariableName()
+                            << ".begin(), " << D.GenVariableName()
+                            << ".end(), std::back_inserter(vec));\n";
     }
   }
+
+  void onForwardTransformedValue (const DagInit& Dag,
+                                  unsigned IndentLevel, raw_ostream& O) const
+  {
+    checkNumberOfArguments(&Dag, 2);
+    const std::string& Name = InitPtrToString(Dag.getArg(0));
+    const std::string& Hook = InitPtrToString(Dag.getArg(1));
+    const OptionDescription& D = OptDescs.FindListOrParameter(Name);
+
+    O.indent(IndentLevel) << "vec.push_back(" << "hooks::"
+                          << Hook << "(" << D.GenVariableName() << "));\n";
+  }
+
+
+  void onOutputSuffix (const DagInit& Dag,
+                       unsigned IndentLevel, raw_ostream& O) const
+  {
+    checkNumberOfArguments(&Dag, 1);
+    const std::string& OutSuf = InitPtrToString(Dag.getArg(0));
+    O.indent(IndentLevel) << "output_suffix = \"" << OutSuf << "\";\n";
+  }
+
+  void onStopCompilation (const DagInit& Dag,
+                          unsigned IndentLevel, raw_ostream& O) const
+  {
+    O.indent(IndentLevel) << "stop_compilation = true;\n";
+  }
+
+
+  void onUnpackValues (const DagInit& Dag,
+                       unsigned IndentLevel, raw_ostream& O) const
+  {
+    throw "'unpack_values' is deprecated. "
+      "Use 'comma_separated' + 'forward_value' instead!";
+  }
+
  public:
-  EmitActionHandlersCallback(const OptionDescriptions& OD)
-    : OptDescs(OD) {}
+
+  explicit EmitActionHandlersCallback(const OptionDescriptions& OD)
+    : OptDescs(OD)
+  {
+    if (!staticMembersInitialized_) {
+      AddHandler("error", &EmitActionHandlersCallback::onErrorDag);
+      AddHandler("warning", &EmitActionHandlersCallback::onWarningDag);
+      AddHandler("append_cmd", &EmitActionHandlersCallback::onAppendCmd);
+      AddHandler("forward", &EmitActionHandlersCallback::onForward);
+      AddHandler("forward_as", &EmitActionHandlersCallback::onForwardAs);
+      AddHandler("forward_value", &EmitActionHandlersCallback::onForwardValue);
+      AddHandler("forward_transformed_value",
+                 &EmitActionHandlersCallback::onForwardTransformedValue);
+      AddHandler("output_suffix", &EmitActionHandlersCallback::onOutputSuffix);
+      AddHandler("stop_compilation",
+                 &EmitActionHandlersCallback::onStopCompilation);
+      AddHandler("unpack_values",
+                 &EmitActionHandlersCallback::onUnpackValues);
+
+      staticMembersInitialized_ = true;
+    }
+  }
 
   void operator()(const Init* Statement,
                   unsigned IndentLevel, raw_ostream& O) const
   {
-    this->processActionDag(Statement, IndentLevel, O);
+    const DagInit& Dag = InitPtrToDag(Statement);
+    const std::string& ActionName = GetOperatorName(Dag);
+    Handler h = GetHandler(ActionName);
+
+    ((this)->*(h))(Dag, IndentLevel, O);
   }
 };
 
@@ -1863,11 +1958,9 @@ bool IsOutFileIndexCheckRequired (Init* CmdLine) {
     return IsOutFileIndexCheckRequiredCase(CmdLine);
 }
 
-// EmitGenerateActionMethod - Emit either a normal or a "join" version of the
-// Tool::GenerateAction() method.
-void EmitGenerateActionMethod (const ToolDescription& D,
-                               const OptionDescriptions& OptDescs,
-                               bool IsJoin, raw_ostream& O) {
+void EmitGenerateActionMethodHeader(const ToolDescription& D,
+                                    bool IsJoin, raw_ostream& O)
+{
   if (IsJoin)
     O.indent(Indent1) << "Action GenerateAction(const PathVector& inFiles,\n";
   else
@@ -1883,6 +1976,15 @@ void EmitGenerateActionMethod (const ToolDescription& D,
   O.indent(Indent2) << "bool stop_compilation = !HasChildren;\n";
   O.indent(Indent2) << "const char* output_suffix = \""
                     << D.OutputSuffix << "\";\n";
+}
+
+// EmitGenerateActionMethod - Emit either a normal or a "join" version of the
+// Tool::GenerateAction() method.
+void EmitGenerateActionMethod (const ToolDescription& D,
+                               const OptionDescriptions& OptDescs,
+                               bool IsJoin, raw_ostream& O) {
+
+  EmitGenerateActionMethodHeader(D, IsJoin, O);
 
   if (!D.CmdLine)
     throw "Tool " + D.Name + " has no cmd_line property!";
@@ -2076,12 +2178,13 @@ void EmitOptionDefinitions (const OptionDescriptions& descs,
         O << ", cl::ZeroOrOne";
     }
 
-    if (val.isReallyHidden()) {
+    if (val.isReallyHidden())
       O << ", cl::ReallyHidden";
-    }
-    else if (val.isHidden()) {
+    else if (val.isHidden())
       O << ", cl::Hidden";
-    }
+
+    if (val.isCommaSeparated())
+      O << ", cl::CommaSeparated";
 
     if (val.MultiVal > 1)
       O << ", cl::multi_val(" << val.MultiVal << ')';
@@ -2140,7 +2243,7 @@ class EmitPreprocessOptionsCallback : ActionHandlingCallbackBase {
       O.indent(IndentLevel) << OptDesc.GenVariableName() << ".clear();\n";
     }
     else {
-      throw "Can't apply 'unset_option' to alias option '" + OptName + "'";
+      throw "Can't apply 'unset_option' to alias option '" + OptName + "'!";
     }
   }
 
@@ -2218,7 +2321,7 @@ void EmitPopulateLanguageMap (const RecordKeeper& Records, raw_ostream& O)
 
     ListInit* LangsToSuffixesList = LangMapRecord->getValueAsListInit("map");
     if (!LangsToSuffixesList)
-      throw std::string("Error in the language map definition!");
+      throw "Error in the language map definition!";
 
     for (unsigned i = 0; i < LangsToSuffixesList->size(); ++i) {
       const Record* LangToSuffixes = LangsToSuffixesList->getElementAsRecord(i);
@@ -2346,23 +2449,72 @@ void EmitPopulateCompilationGraph (const RecordVector& EdgeVector,
   O << "}\n\n";
 }
 
+/// HookInfo - Information about the hook type and number of arguments.
+struct HookInfo {
+
+  // A hook can either have a single parameter of type std::vector<std::string>,
+  // or NumArgs parameters of type const char*.
+  enum HookType { ListHook, ArgHook };
+
+  HookType Type;
+  unsigned NumArgs;
+
+  HookInfo() : Type(ArgHook), NumArgs(1)
+  {}
+
+  HookInfo(HookType T) : Type(T), NumArgs(1)
+  {}
+
+  HookInfo(unsigned N) : Type(ArgHook), NumArgs(N)
+  {}
+};
+
+typedef llvm::StringMap<HookInfo> HookInfoMap;
+
 /// ExtractHookNames - Extract the hook names from all instances of
-/// $CALL(HookName) in the provided command line string. Helper
+/// $CALL(HookName) in the provided command line string/action. Helper
 /// function used by FillInHookNames().
 class ExtractHookNames {
-  llvm::StringMap<unsigned>& HookNames_;
+  HookInfoMap& HookNames_;
+  const OptionDescriptions& OptDescs_;
 public:
-  ExtractHookNames(llvm::StringMap<unsigned>& HookNames)
-  : HookNames_(HookNames) {}
+  ExtractHookNames(HookInfoMap& HookNames, const OptionDescriptions& OptDescs)
+    : HookNames_(HookNames), OptDescs_(OptDescs)
+  {}
 
-  void operator()(const Init* CmdLine) {
-    StrVector cmds;
+  void onAction (const DagInit& Dag) {
+    if (GetOperatorName(Dag) == "forward_transformed_value") {
+      checkNumberOfArguments(Dag, 2);
+      const std::string& OptName = InitPtrToString(Dag.getArg(0));
+      const std::string& HookName = InitPtrToString(Dag.getArg(1));
+      const OptionDescription& D = OptDescs_.FindOption(OptName);
 
-    // Ignore nested 'case' DAG.
-    if (typeid(*CmdLine) == typeid(DagInit))
+      HookNames_[HookName] = HookInfo(D.isList() ? HookInfo::ListHook
+                                      : HookInfo::ArgHook);
+    }
+  }
+
+  void operator()(const Init* Arg) {
+
+    // We're invoked on an action (either a dag or a dag list).
+    if (typeid(*Arg) == typeid(DagInit)) {
+      const DagInit& Dag = InitPtrToDag(Arg);
+      this->onAction(Dag);
+      return;
+    }
+    else if (typeid(*Arg) == typeid(ListInit)) {
+      const ListInit& List = InitPtrToList(Arg);
+      for (ListInit::const_iterator B = List.begin(), E = List.end(); B != E;
+           ++B) {
+        const DagInit& Dag = InitPtrToDag(*B);
+        this->onAction(Dag);
+      }
       return;
+    }
 
-    TokenizeCmdline(InitPtrToString(CmdLine), cmds);
+    // We're invoked on a command line.
+    StrVector cmds;
+    TokenizeCmdline(InitPtrToString(Arg), cmds);
     for (StrVector::const_iterator B = cmds.begin(), E = cmds.end();
          B != E; ++B) {
       const std::string& cmd = *B;
@@ -2380,13 +2532,14 @@ public:
           ++NumArgs;
         }
 
-        StringMap<unsigned>::const_iterator H = HookNames_.find(HookName);
+        HookInfoMap::const_iterator H = HookNames_.find(HookName);
 
-        if (H != HookNames_.end() && H->second != NumArgs)
+        if (H != HookNames_.end() && H->second.NumArgs != NumArgs &&
+            H->second.Type != HookInfo::ArgHook)
           throw "Overloading of hooks is not allowed. Overloaded hook: "
             + HookName;
         else
-          HookNames_[HookName] = NumArgs;
+          HookNames_[HookName] = HookInfo(NumArgs);
 
       }
     }
@@ -2403,40 +2556,56 @@ public:
 /// FillInHookNames - Actually extract the hook names from all command
 /// line strings. Helper function used by EmitHookDeclarations().
 void FillInHookNames(const ToolDescriptions& ToolDescs,
-                     llvm::StringMap<unsigned>& HookNames)
+                     const OptionDescriptions& OptDescs,
+                     HookInfoMap& HookNames)
 {
-  // For all command lines:
+  // For all tool descriptions:
   for (ToolDescriptions::const_iterator B = ToolDescs.begin(),
          E = ToolDescs.end(); B != E; ++B) {
     const ToolDescription& D = *(*B);
+
+    // Look for 'forward_transformed_value' in 'actions'.
+    if (D.Actions)
+      WalkCase(D.Actions, Id(), ExtractHookNames(HookNames, OptDescs));
+
+    // Look for hook invocations in 'cmd_line'.
     if (!D.CmdLine)
       continue;
     if (dynamic_cast<StringInit*>(D.CmdLine))
       // This is a string.
-      ExtractHookNames(HookNames).operator()(D.CmdLine);
+      ExtractHookNames(HookNames, OptDescs).operator()(D.CmdLine);
     else
       // This is a 'case' construct.
-      WalkCase(D.CmdLine, Id(), ExtractHookNames(HookNames));
+      WalkCase(D.CmdLine, Id(), ExtractHookNames(HookNames, OptDescs));
   }
 }
 
 /// EmitHookDeclarations - Parse CmdLine fields of all the tool
 /// property records and emit hook function declaration for each
 /// instance of $CALL(HookName).
-void EmitHookDeclarations(const ToolDescriptions& ToolDescs, raw_ostream& O) {
-  llvm::StringMap<unsigned> HookNames;
+void EmitHookDeclarations(const ToolDescriptions& ToolDescs,
+                          const OptionDescriptions& OptDescs, raw_ostream& O) {
+  HookInfoMap HookNames;
 
-  FillInHookNames(ToolDescs, HookNames);
+  FillInHookNames(ToolDescs, OptDescs, HookNames);
   if (HookNames.empty())
     return;
 
   O << "namespace hooks {\n";
-  for (StringMap<unsigned>::const_iterator B = HookNames.begin(),
+  for (HookInfoMap::const_iterator B = HookNames.begin(),
          E = HookNames.end(); B != E; ++B) {
-    O.indent(Indent1) << "std::string " << B->first() << "(";
+    const char* HookName = B->first();
+    const HookInfo& Info = B->second;
 
-    for (unsigned i = 0, j = B->second; i < j; ++i) {
-      O << "const char* Arg" << i << (i+1 == j ? "" : ", ");
+    O.indent(Indent1) << "std::string " << HookName << "(";
+
+    if (Info.Type == HookInfo::ArgHook) {
+      for (unsigned i = 0, j = Info.NumArgs; i < j; ++i) {
+        O << "const char* Arg" << i << (i+1 == j ? "" : ", ");
+      }
+    }
+    else {
+      O << "const std::vector<std::string>& Arg";
     }
 
     O <<");\n";
@@ -2469,11 +2638,12 @@ void EmitIncludes(raw_ostream& O) {
     << "#include \"llvm/CompilerDriver/Plugin.h\"\n"
     << "#include \"llvm/CompilerDriver/Tool.h\"\n\n"
 
-    << "#include \"llvm/ADT/StringExtras.h\"\n"
     << "#include \"llvm/Support/CommandLine.h\"\n"
     << "#include \"llvm/Support/raw_ostream.h\"\n\n"
 
+    << "#include <algorithm>\n"
     << "#include <cstdlib>\n"
+    << "#include <iterator>\n"
     << "#include <stdexcept>\n\n"
 
     << "using namespace llvm;\n"
@@ -2567,7 +2737,7 @@ void EmitPluginCode(const PluginData& Data, raw_ostream& O) {
   EmitOptionDefinitions(Data.OptDescs, Data.HasSink, Data.HasExterns, O);
 
   // Emit hook declarations.
-  EmitHookDeclarations(Data.ToolDescs, O);
+  EmitHookDeclarations(Data.ToolDescs, Data.OptDescs, O);
 
   O << "namespace {\n\n";
 
diff --git a/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.cpp
index ce1aef5..3cd5784 100644
--- a/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/OptParserEmitter.cpp
@@ -127,7 +127,18 @@ void OptParserEmitter::run(raw_ostream &OS) {
         OS << "INVALID";
 
       // The other option arguments (unused for groups).
-      OS << ", INVALID, 0, 0, 0, 0)\n";
+      OS << ", INVALID, 0, 0";
+
+      // The option help text.
+      if (!dynamic_cast<UnsetInit*>(R.getValueInit("HelpText"))) {
+        OS << ",\n";
+        OS << "       ";
+        write_cstring(OS, R.getValueAsString("HelpText"));
+      } else
+        OS << ", 0";
+
+      // The option meta-variable name (unused).
+      OS << ", 0)\n";
     }
     OS << "\n";
 
diff --git a/libclamav/c++/llvm/utils/TableGen/RegisterInfoEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/RegisterInfoEmitter.cpp
index bf0721e..fcf4123 100644
--- a/libclamav/c++/llvm/utils/TableGen/RegisterInfoEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/RegisterInfoEmitter.cpp
@@ -162,7 +162,7 @@ private:
 
 public:
   RegisterSorter(std::map<Record*, std::set<Record*>, LessRecord> &RS)
-    : RegisterSubRegs(RS) {};
+    : RegisterSubRegs(RS) {}
 
   bool operator()(Record *RegA, Record *RegB) {
     // B is sub-register of A.
diff --git a/libclamav/c++/llvm/utils/buildit/build_llvm b/libclamav/c++/llvm/utils/buildit/build_llvm
index 9168d1a..4392b27 100755
--- a/libclamav/c++/llvm/utils/buildit/build_llvm
+++ b/libclamav/c++/llvm/utils/buildit/build_llvm
@@ -341,6 +341,14 @@ chgrp -R wheel $DEST_DIR
 find $DEST_DIR -name html.tar.gz -exec rm {} \;
 
 ################################################################################
+# symlinks so that B&I can find things
+
+cd $DEST_DIR
+mkdir -p ./usr/lib/
+cd usr/lib
+ln -s ../../Developer/usr/lib/libLTO.dylib ./libLTO.dylib
+
+################################################################################
 # w00t! Done!
 
 exit 0
diff --git a/libclamav/c++/llvm/utils/lit/lit.py b/libclamav/c++/llvm/utils/lit/lit.py
index dcdce7d..293976f 100755
--- a/libclamav/c++/llvm/utils/lit/lit.py
+++ b/libclamav/c++/llvm/utils/lit/lit.py
@@ -230,7 +230,7 @@ def getTests(path, litConfig, testSuiteCache, localConfigCache):
     ts,path_in_suite = getTestSuite(path, litConfig, testSuiteCache)
     if ts is None:
         litConfig.warning('unable to find test suite for %r' % path)
-        return ()
+        return (),()
 
     if litConfig.debug:
         litConfig.note('resolved input %r to %r::%r' % (path, ts.name,

-- 
Debian repository for ClamAV


