[Pkg-clamav-commits] [SCM] Debian repository for ClamAV branch, debian/unstable, updated. debian/0.95+dfsg-1-6156-g094ec9b

Török Edvin edwin at clamav.net
Sun Apr 4 01:12:44 UTC 2010


The following commit has been merged in the debian/unstable branch:
commit de5bf27b3cfe85f0c56b7344eee0f3dff0c0fa44
Author: Török Edvin <edwin at clamav.net>
Date:   Tue Dec 15 14:28:14 2009 +0200

    Merge LLVM upstream r91428.
    
    Squashed commit of the following:
    
    commit 08c733e79dd6b65be6eab3060b47fe4d231098b9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 15 09:05:13 2009 +0000
    
        add some other xforms that should be done as part of PR5783
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91428 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 39a7fa146ef728a10fce157d2efcecd806bf276b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 15 08:34:01 2009 +0000
    
        a few improvements:
        1. Use std::equal instead of reinventing it.
        2. don't run dtors in destroy_range if element is pod-like.
        3. Use isPodLike to decide between memcpy/uninitialized_copy
           instead of is_class.  isPodLike is more generous in some cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91427 91177308-0d34-0410-b5e6-96231b3b80d8
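A minimal sketch of the memcpy vs. uninitialized_copy dispatch this commit describes, using `std::is_trivially_copyable` as a stand-in for LLVM's `isPodLike` (names and structure are illustrative, not SmallVector's actual internals):

```cpp
#include <cassert>
#include <cstring>
#include <memory>
#include <type_traits>

// Copy [first, last) into destination storage. For pod-like elements a
// single memcpy suffices; otherwise each element is copy-constructed.
template <typename T>
void copy_range(const T *first, const T *last, T *dest) {
  if constexpr (std::is_trivially_copyable<T>::value)  // stand-in for isPodLike<T>
    std::memcpy(dest, first, (last - first) * sizeof(T));
  else
    std::uninitialized_copy(first, last, dest);
}

bool copy_range_demo() {
  int src[3] = {1, 2, 3};
  int dst[3] = {0, 0, 0};
  copy_range(src, src + 3, dst);
  return dst[0] == 1 && dst[1] == 2 && dst[2] == 3;
}
```

The point of being "more generous" than `is_class` is that a trait like this can say yes for memcpy-safe class types, where a strict POD or non-class test would force the slower element-by-element path.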
    
    commit 3a95c15ce022ba6cdeea981f9b7b0a7d4724e11a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 15 08:29:22 2009 +0000
    
        hoist the begin/end/capacity members and a few trivial methods
        up into the non-templated SmallVectorBase class.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91426 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 142f4f4c9d8ab4a1d1eb5c2fde61a6383fed25c4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 15 07:40:44 2009 +0000
    
        improve isPodLike to know that all non-class types are pod.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91425 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bc6f37b22aeb8f1ec5c7eb650ecbdea67f34a3de
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 15 07:27:58 2009 +0000
    
        Lang verified that SlotIndex is "pod like" even though it isn't a pod.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91423 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 169f3a233e90dcdd01e42829b396c823d016fe30
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 15 07:26:43 2009 +0000
    
        Remove isPod() from DenseMapInfo, splitting it out to its own
        isPodLike type trait.  This is a generally useful type trait for
        more than just DenseMap, and we really care about whether something
        acts like a pod, not whether it really is a pod.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91421 91177308-0d34-0410-b5e6-96231b3b80d8
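A hedged sketch of what a standalone `isPodLike` trait might look like: default to the compiler's notion of triviality, and let individual types opt in explicitly, as SlotIndex does in the commit above. LLVM's real trait differs in detail, and `SlotIndexLike` here is a hypothetical stand-in:

```cpp
#include <type_traits>

// Default: trust the compiler's triviality check.
template <typename T>
struct isPodLike {
  static const bool value = std::is_trivially_copyable<T>::value;
};

// A type that acts like a pod even if a strict POD check might reject it;
// it opts in by specializing the trait, mirroring the SlotIndex case.
struct SlotIndexLike {
  void *entry;
  unsigned slot;
};

template <>
struct isPodLike<SlotIndexLike> {
  static const bool value = true;
};
```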
    
    commit e1d483dade6f1675d9c2279fb9ae503858b89844
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 15 07:21:14 2009 +0000
    
        Convert llvmc tests to FileCheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91420 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d016c18182f165c7a967f1c5a6a343971bcd2465
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 15 07:20:50 2009 +0000
    
        Support hook invocation from 'append_cmd'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91419 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4136d8daf27d7f04dea28a578b39e5a614fca81e
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 15 06:49:02 2009 +0000
    
        Fix an encoding bug.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91417 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 54fec492a4c81ee84265ad953f4212eda9aff5c1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Dec 15 06:14:33 2009 +0000
    
        add an ALWAYS_INLINE macro, which does the obvious thing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91416 91177308-0d34-0410-b5e6-96231b3b80d8
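"The obvious thing" presumably resembles the following portable macro sketch; the exact spelling in LLVM's headers may differ:

```cpp
#if defined(__GNUC__)
#define ALWAYS_INLINE inline __attribute__((always_inline))
#elif defined(_MSC_VER)
#define ALWAYS_INLINE __forceinline
#else
#define ALWAYS_INLINE inline
#endif

// Tiny hot-path helper the compiler is instructed to always inline.
ALWAYS_INLINE int add_one(int x) { return x + 1; }
```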
    
    commit 428c804a753234ecaf6a6177107361a1312508f8
    Author: Kenneth Uildriks <kennethuil at gmail.com>
    Date:   Tue Dec 15 03:27:52 2009 +0000
    
        For fastcc on x86, let ECX be used as a return register after EAX and EDX
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91410 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 90468b7e484723a7ecfe7b4bf7a3264d2c6c6d06
    Author: John McCall <rjmccall at apple.com>
    Date:   Tue Dec 15 03:10:26 2009 +0000
    
        Names from dependent base classes are not found by unqualified lookup.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91407 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87c0a2dffc5590dc2604754dbe12c9430a54b27b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 15 03:07:11 2009 +0000
    
        Disable 91381 for now. It's miscompiling ARMISelDAG2DAG.cpp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91405 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5ca004428400555d08d43eebe7e91c7035793afb
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 15 03:04:52 2009 +0000
    
        Validate the generated C++ code in llvmc tests.
    
        Checks that the code generated by 'tblgen --emit-llvmc' can actually be
        compiled. Also fixes two bugs found in this way:
    
        - forward_transformed_value didn't work with non-list arguments
        - cl::ZeroOrOne is now called cl::Optional
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91404 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e4f60395f69857730808200642874b0ecd44896
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 15 03:04:14 2009 +0000
    
        Pipe 'grep' output to 'count'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91403 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bc4f5408a6a0881c31e2a3165d022d74a6b2b9e5
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 15 03:04:02 2009 +0000
    
        Allow $CALL(Hook, '$INFILE') for non-join tools.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91402 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ff7c2e17fda5f570afff5eaf75f88460019d3f74
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Dec 15 03:03:37 2009 +0000
    
        Small documentation update.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91401 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d629d80a80bfd6094563000bc82ed37b42acfffa
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 15 03:00:32 2009 +0000
    
        Make 91378 more conservative.
        1. Only perform (zext (shl (zext x), y)) -> (shl (zext x), y) when y is a constant. This makes sure it removes at least one zext.
        2. If the shift is a left shift, make sure the original shift cannot shift out bits.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91399 91177308-0d34-0410-b5e6-96231b3b80d8
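The safety condition in point 2 can be illustrated numerically: widening before or after a left shift agrees only when the shift cannot push set bits out of the narrower type. A small self-contained check (the u8/u16/u32 widths are chosen for illustration):

```cpp
#include <cstdint>

// Compare zext(shl(zext x, c)) against shl(zext x, c) performed wholly in
// the wide type, with x: u8 widened to u16, then to u32.
bool fold_agrees(uint8_t x, unsigned c) {
  uint16_t narrow = static_cast<uint16_t>(static_cast<uint16_t>(x) << c);
  uint32_t folded = static_cast<uint32_t>(static_cast<uint16_t>(x)) << c;
  return static_cast<uint32_t>(narrow) == folded;
}
```

For c = 4 no bit of a u8 value can leave the u16 intermediate, so the two sides agree; for c = 12 bits are shifted out of u16 and they differ, which is exactly why the fold must check the shift amount.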
    
    commit 7caa1423082e873b0685e8d1fb4f7351bdabb103
    Author: John McCall <rjmccall at apple.com>
    Date:   Tue Dec 15 02:35:24 2009 +0000
    
        You can't use typedefs to declare template member specializations, and
        clang enforces it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91397 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8aa0b417ca5d0fc33b6079aa11b81cf86667956
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 15 01:54:51 2009 +0000
    
        Initial work on disabling the scheduler. This is a work in progress, and this
        stuff isn't used just yet.
    
        We want to model the GCC `-fno-schedule-insns' and `-fno-schedule-insns2'
        flags. The hypothesis is that the people who use these flags know what they are
        doing, and have hand-optimized the C code to reduce latencies and other
        conflicts.
    
        The idea behind our scheme to turn off scheduling is to create a map "on the
        side" during DAG generation. It will order the nodes by how they appeared in the
        code. This map is then used during scheduling to get the ordering.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91392 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3f0f8885c7079d20930ca0336bb879adde51aaaf
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 15 01:44:10 2009 +0000
    
        Tail duplication should zap a copy it inserted for SSA update if the copy is the only use of its source.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91390 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 834ae6b04f4c3650b92182662aa8bb5b0fcf419f
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 15 00:53:42 2009 +0000
    
        Use sbb x, x to materialize carry bit in a GPR. The result is all ones or all zeros.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91381 91177308-0d34-0410-b5e6-96231b3b80d8
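A C-level analogue of the sbb trick (the actual codegen is the backend's business; this just shows the value being materialized):

```cpp
#include <cstdint>

// Materialize the borrow of (a - b) as an all-ones or all-zeros mask:
// the same value "sbb x, x" leaves in a GPR after a comparison sets carry.
uint32_t borrow_mask(uint32_t a, uint32_t b) {
  return static_cast<uint32_t>(-static_cast<int32_t>(a < b));
}
```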
    
    commit 5b6187226b44f590ce7f614b128480b9c2d823ef
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 15 00:52:11 2009 +0000
    
        Fold (zext (and x, cst)) -> (and (zext x), cst).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91380 91177308-0d34-0410-b5e6-96231b3b80d8
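This fold is a plain identity on unsigned arithmetic; a quick concrete check (the u16 -> u32 widths are chosen for illustration):

```cpp
#include <cstdint>

// zext(x & cst) == (zext x) & (zext cst) when widening u16 -> u32.
bool zext_and_fold_agrees(uint16_t x, uint16_t cst) {
  uint32_t lhs = static_cast<uint32_t>(static_cast<uint16_t>(x & cst));
  uint32_t rhs = static_cast<uint32_t>(x) & static_cast<uint32_t>(cst);
  return lhs == rhs;
}
```

Unlike the shift fold above, no bits can be created or lost by masking, so no extra side condition is needed.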
    
    commit d08dad66572d86df1826c3547cb824b43ae8e8be
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Dec 15 00:41:47 2009 +0000
    
        NNT: Make sure stderr for build commands goes to log file, as intended but misdirected.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91379 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3ff63ae679cf08e69db6770e7965e4f3d04637b9
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Dec 15 00:41:36 2009 +0000
    
        Propagate zext through logical shift.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91378 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9f669d99f66d2ca120c85c4c379f2571d6dd947a
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Dec 15 00:40:55 2009 +0000
    
        Formatting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91377 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87426f8cd21507e13f0256a6727a0c27f60705c3
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Dec 15 00:39:24 2009 +0000
    
        Revert these. They may have been causing 483_xalancbmk to fail:
    
        $ svn merge -c -91161 https://llvm.org/svn/llvm-project/llvm/trunk
        --- Reverse-merging r91161 into '.':
        U    lib/CodeGen/BranchFolding.cpp
        U    lib/CodeGen/MachineBasicBlock.cpp
        $ svn merge -c -91113 https://llvm.org/svn/llvm-project/llvm/trunk
        --- Reverse-merging r91113 into '.':
        G    lib/CodeGen/MachineBasicBlock.cpp
        $ svn merge -c -91101 https://llvm.org/svn/llvm-project/llvm/trunk
        --- Reverse-merging r91101 into '.':
        U    include/llvm/CodeGen/MachineBasicBlock.h
        G    lib/CodeGen/MachineBasicBlock.cpp
        $ svn merge -c -91092 https://llvm.org/svn/llvm-project/llvm/trunk
        --- Reverse-merging r91092 into '.':
        G    include/llvm/CodeGen/MachineBasicBlock.h
        G    lib/CodeGen/MachineBasicBlock.cpp
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91376 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e6e14f2cfc4e8bd346bf3fa7a5ac87b6ebf422ff
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Dec 15 00:12:35 2009 +0000
    
        nand atomic requires opposite operand ordering
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91371 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c6cfdd3f717bfa1b43351c354e39c066dbd167cd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 23:40:38 2009 +0000
    
        Fix integer cast code to handle vector types.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91362 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8b0d8db13172ed290285f0832e021b4ce3ef9aea
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 23:36:03 2009 +0000
    
        Move Flag and isVoid after the vector types, since bit arithmetic with
        those enum values is less common.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91361 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 81c5562ec2e59741258fc67824bbb64b91ece71e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 23:34:36 2009 +0000
    
        Fix these asserts to check the invariant that the code actually
        depends on.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91360 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 612ae24984fcce968041a2f3f379505d8e007a83
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 23:13:31 2009 +0000
    
        Update this comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91356 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a34782d2b71b5fd6b3b32fa4943de1fc89d47115
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 23:08:09 2009 +0000
    
        Fix this to properly clear the FastISel debug location. Thanks to
        Bill for spotting this!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91355 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit becf334c8d194b1f6c21db915ba3c22c451ab42a
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Dec 14 22:44:22 2009 +0000
    
        Rearrange rules to add missing dependency and allow parallel makes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91352 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6fdbe657ebacf9e1bdb1e2cebfe82a9549d86d3e
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Dec 14 21:51:34 2009 +0000
    
        Add encoding bits "let Inst{11-4} = 0b00000000;" to BR_JTr to disambiguate
        between BR_JTr and STREXD.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91339 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 32b48e95922e730032f188c313cdd2e50c63cbc9
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 14 21:49:44 2009 +0000
    
        The CIE says that the LSDA pointer in the FDE section is an "sdata4". That's fine,
        but we need it to actually be 4 bytes in the FDE.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91337 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ba16e07fc539e23bb604defb021187e64c04702a
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Dec 14 21:33:32 2009 +0000
    
        v6 sync insn copy/paste error
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91333 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6eee903ab286e1a0093c5091bb30fc35e00cd86b
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Dec 14 21:24:16 2009 +0000
    
        Add ARMv6 memory and sync barrier instructions
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91329 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ac1d378d72ad45806ab86d316bd50ca5e7f861c
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Dec 14 21:01:46 2009 +0000
    
        Fixed encoding bits typo of ldrexd/strexd.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91327 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5b595cd6311d7b9670268b69096b55dd1a384d35
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Dec 14 20:14:59 2009 +0000
    
        Thumb2 atomic operations
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91321 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 95218a2eb2163a644a1ff5d419ef0b581a60eb39
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 19:55:22 2009 +0000
    
        Add svn:ignore entries for the Disassembler files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91320 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b6e3c7b1e4283ee072c8115244515c145ef6d072
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 19:43:09 2009 +0000
    
        Move several function bodies which are rarely inlined out of line.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91319 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a850594e8be4f3a3cb7c4d404b8434dfb3844ec8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 14 19:34:32 2009 +0000
    
        fix an obvious bug found by clang++ and collapse a redundant if.
    
        Here's the diagnostic from clang:
    
        /Volumes/Data/dgregor/Projects/llvm/lib/Target/CppBackend/CPPBackend.cpp:989:23: warning: 'gv' is always NULL in this context
                printConstant(gv);
                              ^
        1 diagnostic generated.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91318 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5dbe26aa8326068823cb9481972426dca151c3cc
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 19:32:31 2009 +0000
    
        Micro-optimize these functions in the case where they are not inlined.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91316 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2d6e24935ebc8902bd9b22f73ba02fa31d60f8bb
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Dec 14 19:24:11 2009 +0000
    
        correct selection requirements for thumb2 vs. arm versions of the barrier intrinsics
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91313 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4fc99a1a87b457f994c5e8e0d12206b9b2e02bb4
    Author: Eric Christopher <echristo at apple.com>
    Date:   Mon Dec 14 19:07:25 2009 +0000
    
        Add radar fixed in comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91312 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit efbc1f057fd24bd540ab94dfcac298d6762aa3bd
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Dec 14 18:56:47 2009 +0000
    
        add Thumb2 atomic and memory barrier instruction definitions
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91310 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31b2740914c8fec8580a8dc1000e3b5295309dfb
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Dec 14 18:36:32 2009 +0000
    
        whitespace
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91307 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 63437d96828f86ca3833c58964c4a5d4b142aa07
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Dec 14 18:31:20 2009 +0000
    
        ARM memory barrier instructions are not predicable
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91305 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 49da09d50ecde9dcaacb4bc57807b9fe0fd31005
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Dec 14 17:58:33 2009 +0000
    
        NNT: Use [e]grep -a when scanning logs; it's possible they will have non-text
        characters in them, in which case the grep will just return 'Binary file
        matches' and the whole thing falls over.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91302 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2cef3baf8abe8446367182510bb5410247c99a8e
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Dec 14 17:58:27 2009 +0000
    
        NNT: Always create the -sentdata.txt file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91301 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5d6c29ba56ae19b4d81f8a8f7abf04aa356403fb
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 17:35:17 2009 +0000
    
        Clear the Processed set when it is no longer used, and clear the
        IVUses list in releaseMemory().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91296 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6f4122b67d3bd31a6d3544f319527949f2d1cf4e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 17:31:01 2009 +0000
    
        Fix a thinko; isNotAlreadyContainedIn had a built-in negative, so the
        condition was inverted when the code was converted to contains().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91295 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b937cf58556a1fea130dae4d42e49489b308edc5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 17:19:06 2009 +0000
    
        Remove unnecessary #includes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91293 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 34b9035f2aa2e072afa2da175e47a86de9c723ce
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 17:14:32 2009 +0000
    
        Make the IVUses member private.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91291 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8c5b238c82b464d9993971757204b347a18ed86e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 17:12:51 2009 +0000
    
        Instead of having a ScalarEvolution pointer member in BasedUser, just pass
        the ScalarEvolution pointer into the functions which need it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91289 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e48b5a49457a7976192930b8503e889383e7c0e7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 17:10:44 2009 +0000
    
        Don't bother cleaning up if there's nothing to clean up.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91288 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7366d992d0c8f3d840085a41c04be370a3cfe95
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 17:08:09 2009 +0000
    
        Delete an unused variable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91287 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f1e30e458078b78f37abe8ca738d50df8b3cfae8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 17:06:50 2009 +0000
    
        Drop Loop::isNotAlreadyContainedIn in favor of Loop::contains. The
        former was just exposing a LoopInfoBase implementation detail.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91286 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c83030d61279ac68b9532896fea512ae408387de
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Dec 14 17:02:55 2009 +0000
    
        add ldrexd/strexd instructions
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91284 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d1d6f3708a558575396f8c066b9d9575889f8642
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 17:02:34 2009 +0000
    
        LSR itself doesn't need LoopInfo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91283 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 01c63bf35c8b7ff7775bc83a02a39fc2efcfe3f8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 16:57:08 2009 +0000
    
        LSR itself doesn't need DominatorTree.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91282 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7ad7ae23378a83d55d836338cf33935a4a6829b9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 16:52:55 2009 +0000
    
        Remove the code in LSR that manually hoists expansions out of loops;
        SCEVExpander does this automatically.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91281 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c476702d130b84050a146b9e8a602709bbdc3e2e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Dec 14 16:37:29 2009 +0000
    
        Minor code cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91280 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 814a12c5353afed59395f62dc082aca10b93c3dd
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Dec 14 16:18:45 2009 +0000
    
        Use DW_AT_specification to point to DIE describing function declaration.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91278 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 99e265ce64bf952da29d01da65438a96984819fe
    Author: Shantonu Sen <ssen at apple.com>
    Date:   Mon Dec 14 14:15:15 2009 +0000
    
        Remove empty file completely
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91277 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8da34e01191c9d3819aa1b52fdbc6fe1d544095
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Mon Dec 14 12:38:18 2009 +0000
    
        Add "generic" fallback.
    
        gcc warned that the function may not have a return value, and indeed
        for non-Intel and non-AMD x86 CPUs (VIA, etc.) it is right.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91276 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76d8399d2044c4af7ef6b723f2905e4ad6cbbbf3
    Author: Lang Hames <lhames at gmail.com>
    Date:   Mon Dec 14 07:43:25 2009 +0000
    
        Added CalcSpillWeights to CMakeLists.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91275 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8233240e96cc3df533f37641d17df9ae2d15af12
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Dec 14 06:51:19 2009 +0000
    
        Whitespace changes, comment clarification. No functional changes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91274 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f49e0f7a619ff4a98eae831896636e8fa9051a4
    Author: Lang Hames <lhames at gmail.com>
    Date:   Mon Dec 14 06:49:42 2009 +0000
    
        Moved spill weight calculation out of SimpleRegisterCoalescing and into its own pass: CalculateSpillWeights.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91273 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit edf3f1eff2ea650086482a564fe3a649801a17fe
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Dec 14 05:11:02 2009 +0000
    
        revert r91184, because it causes a crash on a .bc file I just
        sent to Bob.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91268 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c7e4ddcbcad547e5513dfd7eefe8c1ae97e84485
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Dec 14 04:22:04 2009 +0000
    
        Atomic binary operations up to 32 bits wide.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91260 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 15874c1fc1d911bfe2ff73e4a66d500d2c07e6f6
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Dec 14 04:06:38 2009 +0000
    
        Add a test for the 'init' option property.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91259 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7fe6f87162f412660039e41bc96d1ac96d107176
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Sun Dec 13 20:30:32 2009 +0000
    
        Reinstate r91208 to fix available_externally linkage for globals, with
        nlewycky's fix to add -rdynamic so the JIT can look symbols up in Linux builds
        of the JITTests binary.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91250 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eae8d0465c69874badfcb83312d374d1ba668962
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Sun Dec 13 08:59:40 2009 +0000
    
        Using _MSC_VER there was wrong; better to just use the already existing ifdefs for
        x86 CPU detection for the X86 getHostCPUName too, and create a simple
        getHostCPUName that returns "generic" for all else.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91240 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87a4e6c0cb18756f3d55ec0f1b5cb86c4c88e068
    Author: Chandler Carruth <chandlerc at gmail.com>
    Date:   Sun Dec 13 07:04:45 2009 +0000
    
        Don't leave pointers uninitialized in the default constructor. GCC complains
        about the potential use of these uninitialized members under certain conditions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91239 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c29ae320a827facfbcc32b91d6d98c6b06e44ea
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Dec 13 01:00:59 2009 +0000
    
        Fix weird typo which leads to unallocated memory access for nodes with 4 results.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91233 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit efb9350360fe13284f9162fec884d16590da206a
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Dec 13 01:00:32 2009 +0000
    
        Do not allow uninitialized access during debug printing
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91232 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 993eb83df375e2fa7d3fb2c1519690402c27b460
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Sat Dec 12 23:23:43 2009 +0000
    
        More info on this transformation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91230 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a3a131a8c63dc9768694c87d74109afefb021cfb
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Sat Dec 12 21:41:48 2009 +0000
    
        Remove some stuff that's already implemented.  Also, remove the note about
        merging x >u 5 and x <s 20 because it's impossible to implement.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91228 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee30369f778aaece9f0f70dc482331c6ed8cb326
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Dec 12 21:17:54 2009 +0000
    
        Update install-clang target for clang-cc removal.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91226 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8597901e12c0caa1cf841472e12df422c1d2c02b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Dec 12 20:03:14 2009 +0000
    
        Disable r91104 for x86. It causes partial register stalls which pessimize code in 32-bit mode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91223 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 36d987ea7542f0face7c2a3e98cfa4d8f31ab5e9
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Dec 12 18:55:37 2009 +0000
    
        Implement variable-width shifts.
        No testcase yet - it seems we're exposing generic codegen bugs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91221 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9357ab4ddb97c2a6606ba0ee9f859b9c93b364b7
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Dec 12 18:55:26 2009 +0000
    
        Add comment about potential partial register stall.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91220 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca348204499380bc590165f8467f8dccdc3f414a
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Dec 12 18:51:56 2009 +0000
    
        Fix an obvious bug. No test case since LEA16r is not being used.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91219 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d50b1dd026feed23280c98d75ec3465627424725
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Sat Dec 12 12:42:31 2009 +0000
    
        Enable CPU detection when using MS VS 2k8 too.
        MSVS2k8 doesn't define __i386__, so all the CPU detection code was disabled.
        Enable it by also looking for _MSC_VER.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91217 91177308-0d34-0410-b5e6-96231b3b80d8

diff --git a/libclamav/c++/llvm/Makefile b/libclamav/c++/llvm/Makefile
index 1ef89e4..2183a5b 100644
--- a/libclamav/c++/llvm/Makefile
+++ b/libclamav/c++/llvm/Makefile
@@ -66,8 +66,7 @@ ifeq ($(MAKECMDGOALS),tools-only)
 endif
 
 ifeq ($(MAKECMDGOALS),install-clang)
-  DIRS := tools/clang/tools/driver tools/clang/tools/clang-cc \
-	tools/clang/lib/Headers tools/clang/docs
+  DIRS := tools/clang/tools/driver tools/clang/lib/Headers tools/clang/docs
   OPTIONAL_DIRS :=
   NO_INSTALL = 1
 endif
diff --git a/libclamav/c++/llvm/docs/Makefile b/libclamav/c++/llvm/docs/Makefile
index 310c4bd..5bfa6c3 100644
--- a/libclamav/c++/llvm/docs/Makefile
+++ b/libclamav/c++/llvm/docs/Makefile
@@ -100,7 +100,12 @@ install-ocamldoc: ocamldoc
 	  $(FIND) . -type f -exec \
 	    $(DataInstall) {} $(PROJ_docsdir)/ocamldoc/html \;
 
-ocamldoc: regen-ocamldoc $(PROJ_OBJ_DIR)/ocamldoc.tar.gz
+ocamldoc: regen-ocamldoc
+	$(Echo) Packaging ocamldoc documentation
+	$(Verb) $(RM) -rf $(PROJ_OBJ_DIR)/ocamldoc.tar*
+	$(Verb) $(TAR) cf $(PROJ_OBJ_DIR)/ocamldoc.tar ocamldoc
+	$(Verb) $(GZIP) $(PROJ_OBJ_DIR)/ocamldoc.tar
+	$(Verb) $(CP) $(PROJ_OBJ_DIR)/ocamldoc.tar.gz $(PROJ_OBJ_DIR)/ocamldoc/html/
 
 regen-ocamldoc:
 	$(Echo) Building ocamldoc documentation
@@ -113,13 +118,6 @@ regen-ocamldoc:
 		$(OCAMLDOC) -d $(PROJ_OBJ_DIR)/ocamldoc/html -sort -colorize-code -html \
 		`$(FIND) $(LEVEL)/bindings/ocaml -name "*.odoc" -exec echo -load '{}' ';'`
 
-$(PROJ_OBJ_DIR)/ocamldoc.tar.gz:
-	$(Echo) Packaging ocamldoc documentation
-	$(Verb) $(RM) -rf $@ $(PROJ_OBJ_DIR)/ocamldoc.tar
-	$(Verb) $(TAR) cf $(PROJ_OBJ_DIR)/ocamldoc.tar ocamldoc
-	$(Verb) $(GZIP) $(PROJ_OBJ_DIR)/ocamldoc.tar
-	$(Verb) $(CP) $(PROJ_OBJ_DIR)/ocamldoc.tar.gz $(PROJ_OBJ_DIR)/ocamldoc/html/
-
 uninstall-local::
 	$(Echo) Uninstalling Documentation
 	$(Verb) $(RM) -rf $(PROJ_docsdir)
diff --git a/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h b/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
index 8329947..8b62f2d 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
@@ -217,7 +217,8 @@ public:
 
 private:
   void CopyFrom(const DenseMap& other) {
-    if (NumBuckets != 0 && (!KeyInfoT::isPod() || !ValueInfoT::isPod())) {
+    if (NumBuckets != 0 &&
+        (!isPodLike<KeyInfoT>::value || !isPodLike<ValueInfoT>::value)) {
       const KeyT EmptyKey = getEmptyKey(), TombstoneKey = getTombstoneKey();
       for (BucketT *P = Buckets, *E = Buckets+NumBuckets; P != E; ++P) {
         if (!KeyInfoT::isEqual(P->first, EmptyKey) &&
@@ -239,7 +240,7 @@ private:
     Buckets = static_cast<BucketT*>(operator new(sizeof(BucketT) *
                                                  other.NumBuckets));
 
-    if (KeyInfoT::isPod() && ValueInfoT::isPod())
+    if (isPodLike<KeyInfoT>::value && isPodLike<ValueInfoT>::value)
       memcpy(Buckets, other.Buckets, other.NumBuckets * sizeof(BucketT));
     else
       for (size_t i = 0; i < other.NumBuckets; ++i) {
diff --git a/libclamav/c++/llvm/include/llvm/ADT/DenseMapInfo.h b/libclamav/c++/llvm/include/llvm/ADT/DenseMapInfo.h
index 2f241c5..6b494ef 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/DenseMapInfo.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/DenseMapInfo.h
@@ -15,7 +15,7 @@
 #define LLVM_ADT_DENSEMAPINFO_H
 
 #include "llvm/Support/PointerLikeTypeTraits.h"
-#include <utility>
+#include "llvm/Support/type_traits.h"
 
 namespace llvm {
 
@@ -25,7 +25,6 @@ struct DenseMapInfo {
   //static inline T getTombstoneKey();
   //static unsigned getHashValue(const T &Val);
   //static bool isEqual(const T &LHS, const T &RHS);
-  //static bool isPod()
 };
 
 // Provide DenseMapInfo for all pointers.
@@ -46,7 +45,6 @@ struct DenseMapInfo<T*> {
            (unsigned((uintptr_t)PtrVal) >> 9);
   }
   static bool isEqual(const T *LHS, const T *RHS) { return LHS == RHS; }
-  static bool isPod() { return true; }
 };
 
 // Provide DenseMapInfo for chars.
@@ -54,7 +52,6 @@ template<> struct DenseMapInfo<char> {
   static inline char getEmptyKey() { return ~0; }
   static inline char getTombstoneKey() { return ~0 - 1; }
   static unsigned getHashValue(const char& Val) { return Val * 37; }
-  static bool isPod() { return true; }
   static bool isEqual(const char &LHS, const char &RHS) {
     return LHS == RHS;
   }
@@ -65,7 +62,6 @@ template<> struct DenseMapInfo<unsigned> {
   static inline unsigned getEmptyKey() { return ~0; }
   static inline unsigned getTombstoneKey() { return ~0U - 1; }
   static unsigned getHashValue(const unsigned& Val) { return Val * 37; }
-  static bool isPod() { return true; }
   static bool isEqual(const unsigned& LHS, const unsigned& RHS) {
     return LHS == RHS;
   }
@@ -78,7 +74,6 @@ template<> struct DenseMapInfo<unsigned long> {
   static unsigned getHashValue(const unsigned long& Val) {
     return (unsigned)(Val * 37UL);
   }
-  static bool isPod() { return true; }
   static bool isEqual(const unsigned long& LHS, const unsigned long& RHS) {
     return LHS == RHS;
   }
@@ -91,7 +86,6 @@ template<> struct DenseMapInfo<unsigned long long> {
   static unsigned getHashValue(const unsigned long long& Val) {
     return (unsigned)(Val * 37ULL);
   }
-  static bool isPod() { return true; }
   static bool isEqual(const unsigned long long& LHS,
                       const unsigned long long& RHS) {
     return LHS == RHS;
@@ -127,7 +121,6 @@ struct DenseMapInfo<std::pair<T, U> > {
     return (unsigned)key;
   }
   static bool isEqual(const Pair& LHS, const Pair& RHS) { return LHS == RHS; }
-  static bool isPod() { return FirstInfo::isPod() && SecondInfo::isPod(); }
 };
 
 } // end namespace llvm
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ImmutableList.h b/libclamav/c++/llvm/include/llvm/ADT/ImmutableList.h
index 5f8cb57..7757c08 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/ImmutableList.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/ImmutableList.h
@@ -211,9 +211,12 @@ template<typename T> struct DenseMapInfo<ImmutableList<T> > {
   static bool isEqual(ImmutableList<T> X1, ImmutableList<T> X2) {
     return X1 == X2;
   }
-  static bool isPod() { return true; }
 };
 
+template <typename T> struct isPodLike;
+template <typename T>
+struct isPodLike<ImmutableList<T> > { static const bool value = true; };
+
 } // end llvm namespace
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/ADT/PointerIntPair.h b/libclamav/c++/llvm/include/llvm/ADT/PointerIntPair.h
index 73ba3c7..64f4a7c 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/PointerIntPair.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/PointerIntPair.h
@@ -106,6 +106,12 @@ public:
   bool operator>=(const PointerIntPair &RHS) const {return Value >= RHS.Value;}
 };
 
+template <typename T> struct isPodLike;
+template<typename PointerTy, unsigned IntBits, typename IntType>
+struct isPodLike<PointerIntPair<PointerTy, IntBits, IntType> > {
+   static const bool value = true;
+};
+  
 // Provide specialization of DenseMapInfo for PointerIntPair.
 template<typename PointerTy, unsigned IntBits, typename IntType>
 struct DenseMapInfo<PointerIntPair<PointerTy, IntBits, IntType> > {
@@ -125,7 +131,6 @@ struct DenseMapInfo<PointerIntPair<PointerTy, IntBits, IntType> > {
     return unsigned(IV) ^ unsigned(IV >> 9);
   }
   static bool isEqual(const Ty &LHS, const Ty &RHS) { return LHS == RHS; }
-  static bool isPod() { return true; }
 };
 
 // Teach SmallPtrSet that PointerIntPair is "basically a pointer".
diff --git a/libclamav/c++/llvm/include/llvm/ADT/SmallVector.h b/libclamav/c++/llvm/include/llvm/ADT/SmallVector.h
index f3b4533..b16649e 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/SmallVector.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/SmallVector.h
@@ -46,20 +46,17 @@ namespace std {
 
 namespace llvm {
 
-/// SmallVectorImpl - This class consists of common code factored out of the
-/// SmallVector class to reduce code duplication based on the SmallVector 'N'
-/// template parameter.
-template <typename T>
-class SmallVectorImpl {
+/// SmallVectorBase - This is all the non-templated stuff common to all
+/// SmallVectors.
+class SmallVectorBase {
 protected:
-  T *Begin, *End, *Capacity;
+  void *BeginX, *EndX, *CapacityX;
 
   // Allocate raw space for N elements of type T.  If T has a ctor or dtor, we
   // don't want it to be automatically run, so we need to represent the space as
   // something else.  An array of char would work great, but might not be
   // aligned sufficiently.  Instead, we either use GCC extensions, or some
   // number of union instances for the space, which guarantee maximal alignment.
-protected:
 #ifdef __GNUC__
   typedef char U;
   U FirstEl __attribute__((aligned));
@@ -72,46 +69,65 @@ protected:
   } FirstEl;
 #endif
   // Space after 'FirstEl' is clobbered, do not add any instance vars after it.
+  
+protected:
+  SmallVectorBase(size_t Size)
+    : BeginX(&FirstEl), EndX(&FirstEl), CapacityX((char*)&FirstEl+Size) {}
+  
+  /// isSmall - Return true if this is a smallvector which has not had dynamic
+  /// memory allocated for it.
+  bool isSmall() const {
+    return BeginX == static_cast<const void*>(&FirstEl);
+  }
+  
+  
+public:
+  bool empty() const { return BeginX == EndX; }
+};
+  
+/// SmallVectorImpl - This class consists of common code factored out of the
+/// SmallVector class to reduce code duplication based on the SmallVector 'N'
+/// template parameter.
+template <typename T>
+class SmallVectorImpl : public SmallVectorBase {
+  void setEnd(T *P) { EndX = P; }
 public:
   // Default ctor - Initialize to empty.
-  explicit SmallVectorImpl(unsigned N)
-    : Begin(reinterpret_cast<T*>(&FirstEl)),
-      End(reinterpret_cast<T*>(&FirstEl)),
-      Capacity(reinterpret_cast<T*>(&FirstEl)+N) {
+  explicit SmallVectorImpl(unsigned N) : SmallVectorBase(N*sizeof(T)) {
   }
 
   ~SmallVectorImpl() {
     // Destroy the constructed elements in the vector.
-    destroy_range(Begin, End);
+    destroy_range(begin(), end());
 
     // If this wasn't grown from the inline copy, deallocate the old space.
     if (!isSmall())
-      operator delete(Begin);
+      operator delete(begin());
   }
 
   typedef size_t size_type;
   typedef ptrdiff_t difference_type;
   typedef T value_type;
-  typedef T* iterator;
-  typedef const T* const_iterator;
+  typedef T *iterator;
+  typedef const T *const_iterator;
 
-  typedef std::reverse_iterator<const_iterator>  const_reverse_iterator;
-  typedef std::reverse_iterator<iterator>  reverse_iterator;
+  typedef std::reverse_iterator<const_iterator> const_reverse_iterator;
+  typedef std::reverse_iterator<iterator> reverse_iterator;
 
-  typedef T& reference;
-  typedef const T& const_reference;
-  typedef T* pointer;
-  typedef const T* const_pointer;
-
-  bool empty() const { return Begin == End; }
-  size_type size() const { return End-Begin; }
-  size_type max_size() const { return size_type(-1) / sizeof(T); }
+  typedef T &reference;
+  typedef const T &const_reference;
+  typedef T *pointer;
+  typedef const T *const_pointer;
 
   // forward iterator creation methods.
-  iterator begin() { return Begin; }
-  const_iterator begin() const { return Begin; }
-  iterator end() { return End; }
-  const_iterator end() const { return End; }
+  iterator begin() { return (iterator)BeginX; }
+  const_iterator begin() const { return (const_iterator)BeginX; }
+  iterator end() { return (iterator)EndX; }
+  const_iterator end() const { return (const_iterator)EndX; }
+private:
+  iterator capacity_ptr() { return (iterator)CapacityX; }
+  const_iterator capacity_ptr() const { return (const_iterator)CapacityX; }
+public:
 
   // reverse iterator creation methods.
   reverse_iterator rbegin()            { return reverse_iterator(end()); }
@@ -119,14 +135,25 @@ public:
   reverse_iterator rend()              { return reverse_iterator(begin()); }
   const_reverse_iterator rend() const { return const_reverse_iterator(begin());}
 
-
+  size_type size() const { return end()-begin(); }
+  size_type max_size() const { return size_type(-1) / sizeof(T); }
+  
+  /// capacity - Return the total number of elements in the currently allocated
+  /// buffer.
+  size_t capacity() const { return capacity_ptr() - begin(); }
+  
+  /// data - Return a pointer to the vector's buffer, even if empty().
+  pointer data() { return pointer(begin()); }
+  /// data - Return a pointer to the vector's buffer, even if empty().
+  const_pointer data() const { return const_pointer(begin()); }
+  
   reference operator[](unsigned idx) {
-    assert(Begin + idx < End);
-    return Begin[idx];
+    assert(begin() + idx < end());
+    return begin()[idx];
   }
   const_reference operator[](unsigned idx) const {
-    assert(Begin + idx < End);
-    return Begin[idx];
+    assert(begin() + idx < end());
+    return begin()[idx];
   }
 
   reference front() {
@@ -144,10 +171,10 @@ public:
   }
 
   void push_back(const_reference Elt) {
-    if (End < Capacity) {
+    if (EndX < CapacityX) {
   Retry:
-      new (End) T(Elt);
-      ++End;
+      new (end()) T(Elt);
+      setEnd(end()+1);
       return;
     }
     grow();
@@ -155,8 +182,8 @@ public:
   }
 
   void pop_back() {
-    --End;
-    End->~T();
+    setEnd(end()-1);
+    end()->~T();
   }
 
   T pop_back_val() {
@@ -166,36 +193,36 @@ public:
   }
 
   void clear() {
-    destroy_range(Begin, End);
-    End = Begin;
+    destroy_range(begin(), end());
+    EndX = BeginX;
   }
 
   void resize(unsigned N) {
     if (N < size()) {
-      destroy_range(Begin+N, End);
-      End = Begin+N;
+      destroy_range(begin()+N, end());
+      setEnd(begin()+N);
     } else if (N > size()) {
-      if (unsigned(Capacity-Begin) < N)
+      if (capacity() < N)
         grow(N);
-      construct_range(End, Begin+N, T());
-      End = Begin+N;
+      construct_range(end(), begin()+N, T());
+      setEnd(begin()+N);
     }
   }
 
   void resize(unsigned N, const T &NV) {
     if (N < size()) {
-      destroy_range(Begin+N, End);
-      End = Begin+N;
+      destroy_range(begin()+N, end());
+      setEnd(begin()+N);
     } else if (N > size()) {
-      if (unsigned(Capacity-Begin) < N)
+      if (capacity() < N)
         grow(N);
-      construct_range(End, Begin+N, NV);
-      End = Begin+N;
+      construct_range(end(), begin()+N, NV);
+      setEnd(begin()+N);
     }
   }
 
   void reserve(unsigned N) {
-    if (unsigned(Capacity-Begin) < N)
+    if (capacity() < N)
       grow(N);
   }
 
@@ -207,38 +234,38 @@ public:
   void append(in_iter in_start, in_iter in_end) {
     size_type NumInputs = std::distance(in_start, in_end);
     // Grow allocated space if needed.
-    if (NumInputs > size_type(Capacity-End))
+    if (NumInputs > size_type(capacity_ptr()-end()))
       grow(size()+NumInputs);
 
     // Copy the new elements over.
-    std::uninitialized_copy(in_start, in_end, End);
-    End += NumInputs;
+    std::uninitialized_copy(in_start, in_end, end());
+    setEnd(end() + NumInputs);
   }
 
   /// append - Add the specified range to the end of the SmallVector.
   ///
   void append(size_type NumInputs, const T &Elt) {
     // Grow allocated space if needed.
-    if (NumInputs > size_type(Capacity-End))
+    if (NumInputs > size_type(capacity_ptr()-end()))
       grow(size()+NumInputs);
 
     // Copy the new elements over.
-    std::uninitialized_fill_n(End, NumInputs, Elt);
-    End += NumInputs;
+    std::uninitialized_fill_n(end(), NumInputs, Elt);
+    setEnd(end() + NumInputs);
   }
 
   void assign(unsigned NumElts, const T &Elt) {
     clear();
-    if (unsigned(Capacity-Begin) < NumElts)
+    if (capacity() < NumElts)
       grow(NumElts);
-    End = Begin+NumElts;
-    construct_range(Begin, End, Elt);
+    setEnd(begin()+NumElts);
+    construct_range(begin(), end(), Elt);
   }
 
   iterator erase(iterator I) {
     iterator N = I;
     // Shift all elts down one.
-    std::copy(I+1, End, I);
+    std::copy(I+1, end(), I);
     // Drop the last elt.
     pop_back();
     return(N);
@@ -247,36 +274,36 @@ public:
   iterator erase(iterator S, iterator E) {
     iterator N = S;
     // Shift all elts down.
-    iterator I = std::copy(E, End, S);
+    iterator I = std::copy(E, end(), S);
     // Drop the last elts.
-    destroy_range(I, End);
-    End = I;
+    destroy_range(I, end());
+    setEnd(I);
     return(N);
   }
 
   iterator insert(iterator I, const T &Elt) {
-    if (I == End) {  // Important special case for empty vector.
+    if (I == end()) {  // Important special case for empty vector.
       push_back(Elt);
       return end()-1;
     }
 
-    if (End < Capacity) {
+    if (EndX < CapacityX) {
   Retry:
-      new (End) T(back());
-      ++End;
+      new (end()) T(back());
+      setEnd(end()+1);
       // Push everything else over.
-      std::copy_backward(I, End-1, End);
+      std::copy_backward(I, end()-1, end());
       *I = Elt;
       return I;
     }
-    size_t EltNo = I-Begin;
+    size_t EltNo = I-begin();
     grow();
-    I = Begin+EltNo;
+    I = begin()+EltNo;
     goto Retry;
   }
 
   iterator insert(iterator I, size_type NumToInsert, const T &Elt) {
-    if (I == End) {  // Important special case for empty vector.
+    if (I == end()) {  // Important special case for empty vector.
       append(NumToInsert, Elt);
       return end()-1;
     }
@@ -295,8 +322,8 @@ public:
     // insertion.  Since we already reserved space, we know that this won't
     // reallocate the vector.
     if (size_t(end()-I) >= NumToInsert) {
-      T *OldEnd = End;
-      append(End-NumToInsert, End);
+      T *OldEnd = end();
+      append(end()-NumToInsert, end());
 
       // Copy the existing elements that get replaced.
       std::copy_backward(I, OldEnd-NumToInsert, OldEnd);
@@ -309,10 +336,10 @@ public:
     // not inserting at the end.
 
     // Copy over the elements that we're about to overwrite.
-    T *OldEnd = End;
-    End += NumToInsert;
+    T *OldEnd = end();
+    setEnd(end() + NumToInsert);
     size_t NumOverwritten = OldEnd-I;
-    std::uninitialized_copy(I, OldEnd, End-NumOverwritten);
+    std::uninitialized_copy(I, OldEnd, end()-NumOverwritten);
 
     // Replace the overwritten part.
     std::fill_n(I, NumOverwritten, Elt);
@@ -324,7 +351,7 @@ public:
 
   template<typename ItTy>
   iterator insert(iterator I, ItTy From, ItTy To) {
-    if (I == End) {  // Important special case for empty vector.
+    if (I == end()) {  // Important special case for empty vector.
       append(From, To);
       return end()-1;
     }
@@ -344,8 +371,8 @@ public:
     // insertion.  Since we already reserved space, we know that this won't
     // reallocate the vector.
     if (size_t(end()-I) >= NumToInsert) {
-      T *OldEnd = End;
-      append(End-NumToInsert, End);
+      T *OldEnd = end();
+      append(end()-NumToInsert, end());
 
       // Copy the existing elements that get replaced.
       std::copy_backward(I, OldEnd-NumToInsert, OldEnd);
@@ -358,10 +385,10 @@ public:
     // not inserting at the end.
 
     // Copy over the elements that we're about to overwrite.
-    T *OldEnd = End;
-    End += NumToInsert;
+    T *OldEnd = end();
+    setEnd(end() + NumToInsert);
     size_t NumOverwritten = OldEnd-I;
-    std::uninitialized_copy(I, OldEnd, End-NumOverwritten);
+    std::uninitialized_copy(I, OldEnd, end()-NumOverwritten);
 
     // Replace the overwritten part.
     std::copy(From, From+NumOverwritten, I);
@@ -371,25 +398,11 @@ public:
     return I;
   }
 
-  /// data - Return a pointer to the vector's buffer, even if empty().
-  pointer data() {
-    return pointer(Begin);
-  }
-
-  /// data - Return a pointer to the vector's buffer, even if empty().
-  const_pointer data() const {
-    return const_pointer(Begin);
-  }
-
   const SmallVectorImpl &operator=(const SmallVectorImpl &RHS);
 
   bool operator==(const SmallVectorImpl &RHS) const {
     if (size() != RHS.size()) return false;
-    for (T *This = Begin, *That = RHS.Begin, *E = Begin+size();
-         This != E; ++This, ++That)
-      if (*This != *That)
-        return false;
-    return true;
+    return std::equal(begin(), end(), RHS.begin());
   }
   bool operator!=(const SmallVectorImpl &RHS) const { return !(*this == RHS); }
 
@@ -398,10 +411,6 @@ public:
                                         RHS.begin(), RHS.end());
   }
 
-  /// capacity - Return the total number of elements in the currently allocated
-  /// buffer.
-  size_t capacity() const { return Capacity - Begin; }
-
   /// set_size - Set the array size to \arg N, which the current array must have
   /// enough capacity for.
   ///
@@ -413,17 +422,10 @@ public:
   /// which will only be overwritten.
   void set_size(unsigned N) {
     assert(N <= capacity());
-    End = Begin + N;
+    setEnd(begin() + N);
   }
 
 private:
-  /// isSmall - Return true if this is a smallvector which has not had dynamic
-  /// memory allocated for it.
-  bool isSmall() const {
-    return static_cast<const void*>(Begin) ==
-           static_cast<const void*>(&FirstEl);
-  }
-
   /// grow - double the size of the allocated memory, guaranteeing space for at
   /// least one more element or MinSize if specified.
   void grow(size_type MinSize = 0);
@@ -434,6 +436,9 @@ private:
   }
 
   void destroy_range(T *S, T *E) {
+    // No need to do a destroy loop for PODs.
+    if (isPodLike<T>::value) return;
+    
     while (S != E) {
       --E;
       E->~T();
@@ -444,7 +449,7 @@ private:
 // Define this out-of-line to dissuade the C++ compiler from inlining it.
 template <typename T>
 void SmallVectorImpl<T>::grow(size_t MinSize) {
-  size_t CurCapacity = Capacity-Begin;
+  size_t CurCapacity = capacity();
   size_t CurSize = size();
   size_t NewCapacity = 2*CurCapacity;
   if (NewCapacity < MinSize)
@@ -452,22 +457,22 @@ void SmallVectorImpl<T>::grow(size_t MinSize) {
   T *NewElts = static_cast<T*>(operator new(NewCapacity*sizeof(T)));
 
   // Copy the elements over.
-  if (is_class<T>::value)
-    std::uninitialized_copy(Begin, End, NewElts);
+  if (isPodLike<T>::value)
+    // Use memcpy for PODs: std::uninitialized_copy optimizes to memmove.
+    memcpy(NewElts, begin(), CurSize * sizeof(T));
   else
-    // Use memcpy for PODs (std::uninitialized_copy optimizes to memmove).
-    memcpy(NewElts, Begin, CurSize * sizeof(T));
+    std::uninitialized_copy(begin(), end(), NewElts);
 
   // Destroy the original elements.
-  destroy_range(Begin, End);
+  destroy_range(begin(), end());
 
   // If this wasn't grown from the inline copy, deallocate the old space.
   if (!isSmall())
-    operator delete(Begin);
+    operator delete(begin());
 
-  Begin = NewElts;
-  End = NewElts+CurSize;
-  Capacity = Begin+NewCapacity;
+  setEnd(NewElts+CurSize);
+  BeginX = NewElts;
+  CapacityX = begin()+NewCapacity;
 }
 
 template <typename T>
@@ -476,35 +481,35 @@ void SmallVectorImpl<T>::swap(SmallVectorImpl<T> &RHS) {
 
   // We can only avoid copying elements if neither vector is small.
   if (!isSmall() && !RHS.isSmall()) {
-    std::swap(Begin, RHS.Begin);
-    std::swap(End, RHS.End);
-    std::swap(Capacity, RHS.Capacity);
+    std::swap(BeginX, RHS.BeginX);
+    std::swap(EndX, RHS.EndX);
+    std::swap(CapacityX, RHS.CapacityX);
     return;
   }
-  if (RHS.size() > size_type(Capacity-Begin))
+  if (RHS.size() > capacity())
     grow(RHS.size());
-  if (size() > size_type(RHS.Capacity-RHS.begin()))
+  if (size() > RHS.capacity())
     RHS.grow(size());
 
   // Swap the shared elements.
   size_t NumShared = size();
   if (NumShared > RHS.size()) NumShared = RHS.size();
   for (unsigned i = 0; i != static_cast<unsigned>(NumShared); ++i)
-    std::swap(Begin[i], RHS[i]);
+    std::swap((*this)[i], RHS[i]);
 
   // Copy over the extra elts.
   if (size() > RHS.size()) {
     size_t EltDiff = size() - RHS.size();
-    std::uninitialized_copy(Begin+NumShared, End, RHS.End);
-    RHS.End += EltDiff;
-    destroy_range(Begin+NumShared, End);
-    End = Begin+NumShared;
+    std::uninitialized_copy(begin()+NumShared, end(), RHS.end());
+    RHS.setEnd(RHS.end()+EltDiff);
+    destroy_range(begin()+NumShared, end());
+    setEnd(begin()+NumShared);
   } else if (RHS.size() > size()) {
     size_t EltDiff = RHS.size() - size();
-    std::uninitialized_copy(RHS.Begin+NumShared, RHS.End, End);
-    End += EltDiff;
-    destroy_range(RHS.Begin+NumShared, RHS.End);
-    RHS.End = RHS.Begin+NumShared;
+    std::uninitialized_copy(RHS.begin()+NumShared, RHS.end(), end());
+    setEnd(end() + EltDiff);
+    destroy_range(RHS.begin()+NumShared, RHS.end());
+    RHS.setEnd(RHS.begin()+NumShared);
   }
 }
 
@@ -516,42 +521,42 @@ SmallVectorImpl<T>::operator=(const SmallVectorImpl<T> &RHS) {
 
   // If we already have sufficient space, assign the common elements, then
   // destroy any excess.
-  unsigned RHSSize = unsigned(RHS.size());
-  unsigned CurSize = unsigned(size());
+  size_t RHSSize = RHS.size();
+  size_t CurSize = size();
   if (CurSize >= RHSSize) {
     // Assign common elements.
     iterator NewEnd;
     if (RHSSize)
-      NewEnd = std::copy(RHS.Begin, RHS.Begin+RHSSize, Begin);
+      NewEnd = std::copy(RHS.begin(), RHS.begin()+RHSSize, begin());
     else
-      NewEnd = Begin;
+      NewEnd = begin();
 
     // Destroy excess elements.
-    destroy_range(NewEnd, End);
+    destroy_range(NewEnd, end());
 
     // Trim.
-    End = NewEnd;
+    setEnd(NewEnd);
     return *this;
   }
 
   // If we have to grow to have enough elements, destroy the current elements.
   // This allows us to avoid copying them during the grow.
-  if (unsigned(Capacity-Begin) < RHSSize) {
+  if (capacity() < RHSSize) {
     // Destroy current elements.
-    destroy_range(Begin, End);
-    End = Begin;
+    destroy_range(begin(), end());
+    setEnd(begin());
     CurSize = 0;
     grow(RHSSize);
   } else if (CurSize) {
     // Otherwise, use assignment for the already-constructed elements.
-    std::copy(RHS.Begin, RHS.Begin+CurSize, Begin);
+    std::copy(RHS.begin(), RHS.begin()+CurSize, begin());
   }
 
   // Copy construct the new elements in place.
-  std::uninitialized_copy(RHS.Begin+CurSize, RHS.End, Begin+CurSize);
+  std::uninitialized_copy(RHS.begin()+CurSize, RHS.end(), begin()+CurSize);
 
   // Set end.
-  End = Begin+RHSSize;
+  setEnd(begin()+RHSSize);
   return *this;
 }
 
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ValueMap.h b/libclamav/c++/llvm/include/llvm/ADT/ValueMap.h
index b043c38..6f57fe8 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/ValueMap.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/ValueMap.h
@@ -250,6 +250,12 @@ public:
   }
 };
 
+  
+template<typename KeyT, typename ValueT, typename Config, typename ValueInfoT>
+struct isPodLike<ValueMapCallbackVH<KeyT, ValueT, Config, ValueInfoT> > {
+  static const bool value = true;
+};
+
 template<typename KeyT, typename ValueT, typename Config, typename ValueInfoT>
 struct DenseMapInfo<ValueMapCallbackVH<KeyT, ValueT, Config, ValueInfoT> > {
   typedef ValueMapCallbackVH<KeyT, ValueT, Config, ValueInfoT> VH;
@@ -267,7 +273,6 @@ struct DenseMapInfo<ValueMapCallbackVH<KeyT, ValueT, Config, ValueInfoT> > {
   static bool isEqual(const VH &LHS, const VH &RHS) {
     return LHS == RHS;
   }
-  static bool isPod() { return false; }
 };
 
 
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ilist.h b/libclamav/c++/llvm/include/llvm/ADT/ilist.h
index b3824a2..e4d26dd 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/ilist.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/ilist.h
@@ -643,7 +643,7 @@ struct ilist : public iplist<NodeTy> {
 
   // Main implementation here - Insert for a node passed by value...
   iterator insert(iterator where, const NodeTy &val) {
-    return insert(where, createNode(val));
+    return insert(where, this->createNode(val));
   }
 
 
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/AliasSetTracker.h b/libclamav/c++/llvm/include/llvm/Analysis/AliasSetTracker.h
index 42a377e..09f12ad 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/AliasSetTracker.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/AliasSetTracker.h
@@ -259,11 +259,9 @@ class AliasSetTracker {
     ASTCallbackVH(Value *V, AliasSetTracker *AST = 0);
     ASTCallbackVH &operator=(Value *V);
   };
-  /// ASTCallbackVHDenseMapInfo - Traits to tell DenseMap that ASTCallbackVH
-  /// is not a POD (it needs its destructor called).
-  struct ASTCallbackVHDenseMapInfo : public DenseMapInfo<Value *> {
-    static bool isPod() { return false; }
-  };
+  /// ASTCallbackVHDenseMapInfo - Traits that tell DenseMap how to
+  /// compare and hash the value handle.
+  struct ASTCallbackVHDenseMapInfo : public DenseMapInfo<Value *> {};
 
   AliasAnalysis &AA;
   ilist<AliasSet> AliasSets;
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h b/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h
index 22fbb35..fcd9caa 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h
@@ -175,11 +175,11 @@ class IVUsers : public LoopPass {
   ScalarEvolution *SE;
   SmallPtrSet<Instruction*,16> Processed;
 
-public:
   /// IVUses - A list of all tracked IV uses of induction variable expressions
   /// we are interested in.
   ilist<IVUsersOfOneStride> IVUses;
 
+public:
   /// IVUsesByStride - A mapping from the strides in StrideOrder to the
   /// uses in IVUses.
   std::map<const SCEV *, IVUsersOfOneStride*> IVUsesByStride;
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
index 7419cdc..2294e53 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
@@ -976,13 +976,6 @@ public:
   void removeBlock(BasicBlock *BB) {
     LI.removeBlock(BB);
   }
-
-  static bool isNotAlreadyContainedIn(const Loop *SubLoop,
-                                      const Loop *ParentLoop) {
-    return
-      LoopInfoBase<BasicBlock, Loop>::isNotAlreadyContainedIn(SubLoop,
-                                                              ParentLoop);
-  }
 };
 
 
diff --git a/libclamav/c++/llvm/include/llvm/Bitcode/Deserialize.h b/libclamav/c++/llvm/include/llvm/Bitcode/Deserialize.h
index 90a5141..3266038 100644
--- a/libclamav/c++/llvm/include/llvm/Bitcode/Deserialize.h
+++ b/libclamav/c++/llvm/include/llvm/Bitcode/Deserialize.h
@@ -25,53 +25,52 @@
 
 namespace llvm {
 
+struct BPNode {
+  BPNode* Next;
+  uintptr_t& PtrRef;
+  
+  BPNode(BPNode* n, uintptr_t& pref)
+  : Next(n), PtrRef(pref) {
+    PtrRef = 0;
+  }
+};
+
+struct BPEntry {
+  union { BPNode* Head; void* Ptr; };
+  BPEntry() : Head(NULL) {}
+  void SetPtr(BPNode*& FreeList, void* P);
+};
+
+class BPKey {
+  unsigned Raw;
+public:
+  BPKey(SerializedPtrID PtrId) : Raw(PtrId << 1) { assert (PtrId > 0); }
+  BPKey(unsigned code, unsigned) : Raw(code) {}
+  
+  void MarkFinal() { Raw |= 0x1; }
+  bool hasFinalPtr() const { return Raw & 0x1 ? true : false; }
+  SerializedPtrID getID() const { return Raw >> 1; }
+  
+  static inline BPKey getEmptyKey() { return BPKey(0,0); }
+  static inline BPKey getTombstoneKey() { return BPKey(1,0); }
+  static inline unsigned getHashValue(const BPKey& K) { return K.Raw & ~0x1; }
+  
+  static bool isEqual(const BPKey& K1, const BPKey& K2) {
+    return (K1.Raw ^ K2.Raw) & ~0x1 ? false : true;
+  }
+};
+  
+template <>
+struct isPodLike<BPKey> { static const bool value = true; };
+template <>
+struct isPodLike<BPEntry> { static const bool value = true; };
+  
 class Deserializer {
 
   //===----------------------------------------------------------===//
   // Internal type definitions.
   //===----------------------------------------------------------===//
 
-  struct BPNode {
-    BPNode* Next;
-    uintptr_t& PtrRef;
-
-    BPNode(BPNode* n, uintptr_t& pref)
-      : Next(n), PtrRef(pref) {
-        PtrRef = 0;
-      }
-  };
-
-  struct BPEntry {
-    union { BPNode* Head; void* Ptr; };
-
-    BPEntry() : Head(NULL) {}
-
-    static inline bool isPod() { return true; }
-
-    void SetPtr(BPNode*& FreeList, void* P);
-  };
-
-  class BPKey {
-    unsigned Raw;
-
-  public:
-    BPKey(SerializedPtrID PtrId) : Raw(PtrId << 1) { assert (PtrId > 0); }
-    BPKey(unsigned code, unsigned) : Raw(code) {}
-
-    void MarkFinal() { Raw |= 0x1; }
-    bool hasFinalPtr() const { return Raw & 0x1 ? true : false; }
-    SerializedPtrID getID() const { return Raw >> 1; }
-
-    static inline BPKey getEmptyKey() { return BPKey(0,0); }
-    static inline BPKey getTombstoneKey() { return BPKey(1,0); }
-    static inline unsigned getHashValue(const BPKey& K) { return K.Raw & ~0x1; }
-
-    static bool isEqual(const BPKey& K1, const BPKey& K2) {
-      return (K1.Raw ^ K2.Raw) & ~0x1 ? false : true;
-    }
-
-    static bool isPod() { return true; }
-  };
 
   typedef llvm::DenseMap<BPKey,BPEntry,BPKey,BPEntry> MapTy;
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/CalcSpillWeights.h b/libclamav/c++/llvm/include/llvm/CodeGen/CalcSpillWeights.h
new file mode 100644
index 0000000..2fc03bd
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/CalcSpillWeights.h
@@ -0,0 +1,39 @@
+//===---------------- lib/CodeGen/CalcSpillWeights.h ------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+
+#ifndef LLVM_CODEGEN_CALCSPILLWEIGHTS_H
+#define LLVM_CODEGEN_CALCSPILLWEIGHTS_H
+
+#include "llvm/CodeGen/MachineFunctionPass.h"
+
+namespace llvm {
+
+  class LiveInterval;
+
+  /// CalculateSpillWeights - Compute spill weights for all virtual register
+  /// live intervals.
+  class CalculateSpillWeights : public MachineFunctionPass {
+  public:
+    static char ID;
+
+    CalculateSpillWeights() : MachineFunctionPass(&ID) {}
+
+    virtual void getAnalysisUsage(AnalysisUsage &au) const;
+
+    virtual bool runOnMachineFunction(MachineFunction &fn);    
+
+  private:
+    /// Returns true if the given live interval is zero length.
+    bool isZeroLengthInterval(LiveInterval *li) const;
+  };
+
+}
+
+#endif // LLVM_CODEGEN_CALCSPILLWEIGHTS_H
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h b/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
index 6a2b166..7233f3f 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
@@ -110,8 +110,7 @@ void SelectRoot(SelectionDAG &DAG) {
     DAG.setSubgraphColor(Node, "red");
 #endif
     SDNode *ResNode = Select(SDValue(Node, 0));
-    // If node should not be replaced, 
-    // continue with the next one.
+    // If node should not be replaced, continue with the next one.
     if (ResNode == Node)
       continue;
     // Replace node.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
index 7e3ce6b..6b4c640 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
@@ -327,11 +327,6 @@ public:
   /// 'Old', change the code and CFG so that it branches to 'New' instead.
   void ReplaceUsesOfBlockWith(MachineBasicBlock *Old, MachineBasicBlock *New);
 
-  /// BranchesToLandingPad - The basic block is a landing pad or branches only
-  /// to a landing pad. No other instructions are present other than the
-  /// unconditional branch.
-  bool BranchesToLandingPad(const MachineBasicBlock *MBB) const;
-
   /// CorrectExtraCFGEdges - Various pieces of code can cause excess edges in
   /// the CFG to be inserted.  If we have proven that MBB can only branch to
   /// DestA and DestB, remove any other MBB successors from the CFG. DestA and
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
index 6e15617..c09c634 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
@@ -110,6 +110,46 @@ class SelectionDAG {
   /// SelectionDAG.
   BumpPtrAllocator Allocator;
 
+  /// NodeOrdering - Assigns a "line number" value to each SDNode that
+  /// corresponds to the "line number" of the original LLVM instruction. This
+  /// is used when scheduling is turned off: we forgo the normal scheduling
+  /// algorithm and output the instructions according to this ordering.
+  class NodeOrdering {
+    /// LineNo - The line of the instruction the node corresponds to. A value of
+    /// `0' means it's not assigned.
+    unsigned LineNo;
+    std::map<const SDNode*, unsigned> Order;
+
+    void operator=(const NodeOrdering&); // Do not implement.
+    NodeOrdering(const NodeOrdering&);   // Do not implement.
+  public:
+    NodeOrdering() : LineNo(0) {}
+
+    void add(const SDNode *Node) {
+      assert(LineNo && "Invalid line number!");
+      Order[Node] = LineNo;
+    }
+    void remove(const SDNode *Node) {
+      std::map<const SDNode*, unsigned>::iterator Itr = Order.find(Node);
+      if (Itr != Order.end())
+        Order.erase(Itr);
+    }
+    void clear() {
+      Order.clear();
+      LineNo = 1;
+    }
+    unsigned getLineNo(const SDNode *Node) {
+      unsigned LN = Order[Node];
+      assert(LN && "Node isn't in ordering map!");
+      return LN;
+    }
+    void newInst() {
+      ++LineNo;
+    }
+
+    void dump() const;
+  } *Ordering;
+
   /// VerifyNode - Sanity check the given node.  Aborts if it is invalid.
   void VerifyNode(SDNode *N);
 
@@ -120,6 +160,9 @@ class SelectionDAG {
                               DenseSet<SDNode *> &visited,
                               int level, bool &printed);
 
+  void operator=(const SelectionDAG&); // Do not implement.
+  SelectionDAG(const SelectionDAG&);   // Do not implement.
+
 public:
   SelectionDAG(TargetLowering &tli, FunctionLoweringInfo &fli);
   ~SelectionDAG();
@@ -199,6 +242,13 @@ public:
     return Root = N;
   }
 
+  /// NewInst - Tell the ordering object that we're processing a new
+  /// instruction.
+  void NewInst() {
+    if (Ordering)
+      Ordering->newInst();
+  }
+
   /// Combine - This iterates over the nodes in the SelectionDAG, folding
   /// certain types of nodes together, or eliminating superfluous nodes.  The
   /// Level argument controls whether Combine is allowed to produce nodes and
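The NodeOrdering mechanism added above can be sketched outside the DAG. In this standalone illustration (hypothetical names; real code maps `SDNode*`), each new IR instruction bumps a counter via `newInst()`, every node created while lowering it is stamped with the current value via `add()`, and emission can later replay nodes in source order instead of running the scheduler:

```cpp
#include <cassert>
#include <map>

// Minimal sketch of the NodeOrdering idea: a monotonically increasing
// "line number" plus a map from node to the line it was created under.
// Nodes are opaque pointers here instead of SDNode*.
class OrderingSketch {
  unsigned LineNo;                          // 0 = no instruction seen yet
  std::map<const void *, unsigned> Order;
public:
  OrderingSketch() : LineNo(0) {}
  void clear() { Order.clear(); LineNo = 1; }
  void newInst() { ++LineNo; }
  void add(const void *Node) {
    assert(LineNo && "clear() must run before the first add()");
    Order[Node] = LineNo;
  }
  unsigned getLineNo(const void *Node) { return Order[Node]; }
};
```

This mirrors how `SelectionDAG::NewInst()` in the hunk above simply forwards to `Ordering->newInst()` so the counter advances once per lowered instruction.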
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
index 580986a..571db47 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
@@ -891,8 +891,9 @@ template<> struct DenseMapInfo<SDValue> {
   static bool isEqual(const SDValue &LHS, const SDValue &RHS) {
     return LHS == RHS;
   }
-  static bool isPod() { return true; }
 };
+template <> struct isPodLike<SDValue> { static const bool value = true; };
+
 
 /// simplify_type specializations - Allow casting operators to work directly on
 /// SDValues as if they were SDNode*'s.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h b/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
index 65d85fc..9a85ee1 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
@@ -343,8 +343,10 @@ namespace llvm {
     static inline bool isEqual(const SlotIndex &LHS, const SlotIndex &RHS) {
       return (LHS == RHS);
     }
-    static inline bool isPod() { return false; }
   };
+  
+  template <> struct isPodLike<SlotIndex> { static const bool value = true; };
+
 
   inline raw_ostream& operator<<(raw_ostream &os, SlotIndex li) {
     li.print(os);
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
index 3106213..06e07f3 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
@@ -47,35 +47,36 @@ namespace llvm {
       f80            =   9,   // This is a 80 bit floating point value
       f128           =  10,   // This is a 128 bit floating point value
       ppcf128        =  11,   // This is a PPC 128-bit floating point value
-      Flag           =  12,   // This is a condition code or machine flag.
-
-      isVoid         =  13,   // This has no value
-
-      v2i8           =  14,   //  2 x i8
-      v4i8           =  15,   //  4 x i8
-      v8i8           =  16,   //  8 x i8
-      v16i8          =  17,   // 16 x i8
-      v32i8          =  18,   // 32 x i8
-      v2i16          =  19,   //  2 x i16
-      v4i16          =  20,   //  4 x i16
-      v8i16          =  21,   //  8 x i16
-      v16i16         =  22,   // 16 x i16
-      v2i32          =  23,   //  2 x i32
-      v4i32          =  24,   //  4 x i32
-      v8i32          =  25,   //  8 x i32
-      v1i64          =  26,   //  1 x i64
-      v2i64          =  27,   //  2 x i64
-      v4i64          =  28,   //  4 x i64
-
-      v2f32          =  29,   //  2 x f32
-      v4f32          =  30,   //  4 x f32
-      v8f32          =  31,   //  8 x f32
-      v2f64          =  32,   //  2 x f64
-      v4f64          =  33,   //  4 x f64
+
+      v2i8           =  12,   //  2 x i8
+      v4i8           =  13,   //  4 x i8
+      v8i8           =  14,   //  8 x i8
+      v16i8          =  15,   // 16 x i8
+      v32i8          =  16,   // 32 x i8
+      v2i16          =  17,   //  2 x i16
+      v4i16          =  18,   //  4 x i16
+      v8i16          =  19,   //  8 x i16
+      v16i16         =  20,   // 16 x i16
+      v2i32          =  21,   //  2 x i32
+      v4i32          =  22,   //  4 x i32
+      v8i32          =  23,   //  8 x i32
+      v1i64          =  24,   //  1 x i64
+      v2i64          =  25,   //  2 x i64
+      v4i64          =  26,   //  4 x i64
+
+      v2f32          =  27,   //  2 x f32
+      v4f32          =  28,   //  4 x f32
+      v8f32          =  29,   //  8 x f32
+      v2f64          =  30,   //  2 x f64
+      v4f64          =  31,   //  4 x f64
 
       FIRST_VECTOR_VALUETYPE = v2i8,
       LAST_VECTOR_VALUETYPE  = v4f64,
 
+      Flag           =  32,   // This glues nodes together during pre-RA sched
+
+      isVoid         =  33,   // This has no value
+
       LAST_VALUETYPE =  34,   // This always remains at the end of the list.
 
       // This is the current maximum for LAST_VALUETYPE.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.td b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.td
index 986555b..c8bb789 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.td
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.td
@@ -31,30 +31,31 @@ def f64    : ValueType<64 ,  8>;   // 64-bit floating point value
 def f80    : ValueType<80 ,  9>;   // 80-bit floating point value
 def f128   : ValueType<128, 10>;   // 128-bit floating point value
 def ppcf128: ValueType<128, 11>;   // PPC 128-bit floating point value
-def FlagVT : ValueType<0  , 12>;   // Condition code or machine flag
-def isVoid : ValueType<0  , 13>;   // Produces no value
 
-def v2i8   : ValueType<16 , 14>;   //  2 x i8  vector value
-def v4i8   : ValueType<32 , 15>;   //  4 x i8  vector value
-def v8i8   : ValueType<64 , 16>;   //  8 x i8  vector value
-def v16i8  : ValueType<128, 17>;   // 16 x i8  vector value
-def v32i8  : ValueType<256, 18>;   // 32 x i8 vector value
-def v2i16  : ValueType<32 , 19>;   //  2 x i16 vector value
-def v4i16  : ValueType<64 , 20>;   //  4 x i16 vector value
-def v8i16  : ValueType<128, 21>;   //  8 x i16 vector value
-def v16i16 : ValueType<256, 22>;   // 16 x i16 vector value
-def v2i32  : ValueType<64 , 23>;   //  2 x i32 vector value
-def v4i32  : ValueType<128, 24>;   //  4 x i32 vector value
-def v8i32  : ValueType<256, 25>;   //  8 x i32 vector value
-def v1i64  : ValueType<64 , 26>;   //  1 x i64 vector value
-def v2i64  : ValueType<128, 27>;   //  2 x i64 vector value
-def v4i64  : ValueType<256, 28>;   //  4 x f64 vector value
+def v2i8   : ValueType<16 , 12>;   //  2 x i8  vector value
+def v4i8   : ValueType<32 , 13>;   //  4 x i8  vector value
+def v8i8   : ValueType<64 , 14>;   //  8 x i8  vector value
+def v16i8  : ValueType<128, 15>;   // 16 x i8  vector value
+def v32i8  : ValueType<256, 16>;   // 32 x i8 vector value
+def v2i16  : ValueType<32 , 17>;   //  2 x i16 vector value
+def v4i16  : ValueType<64 , 18>;   //  4 x i16 vector value
+def v8i16  : ValueType<128, 19>;   //  8 x i16 vector value
+def v16i16 : ValueType<256, 20>;   // 16 x i16 vector value
+def v2i32  : ValueType<64 , 21>;   //  2 x i32 vector value
+def v4i32  : ValueType<128, 22>;   //  4 x i32 vector value
+def v8i32  : ValueType<256, 23>;   //  8 x i32 vector value
+def v1i64  : ValueType<64 , 24>;   //  1 x i64 vector value
+def v2i64  : ValueType<128, 25>;   //  2 x i64 vector value
+def v4i64  : ValueType<256, 26>;   //  4 x i64 vector value
 
-def v2f32  : ValueType<64,  29>;   //  2 x f32 vector value
-def v4f32  : ValueType<128, 30>;   //  4 x f32 vector value
-def v8f32  : ValueType<256, 31>;   //  8 x f32 vector value
-def v2f64  : ValueType<128, 32>;   //  2 x f64 vector value
-def v4f64  : ValueType<256, 33>;   //  4 x f64 vector value
+def v2f32  : ValueType<64,  27>;   //  2 x f32 vector value
+def v4f32  : ValueType<128, 28>;   //  4 x f32 vector value
+def v8f32  : ValueType<256, 29>;   //  8 x f32 vector value
+def v2f64  : ValueType<128, 30>;   //  2 x f64 vector value
+def v4f64  : ValueType<256, 31>;   //  4 x f64 vector value
+
+def FlagVT : ValueType<0  , 32>;   // Pre-RA sched glue
+def isVoid : ValueType<0  , 33>;   // Produces no value
 
 def MetadataVT: ValueType<0, 250>; // Metadata
 
diff --git a/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td b/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
index cfd675b..8d2f63b 100644
--- a/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
+++ b/libclamav/c++/llvm/include/llvm/CompilerDriver/Common.td
@@ -42,9 +42,9 @@ def hidden;
 def init;
 def multi_val;
 def one_or_more;
+def optional;
 def really_hidden;
 def required;
-def zero_or_one;
 def comma_separated;
 
 // The 'case' construct.
diff --git a/libclamav/c++/llvm/include/llvm/Pass.h b/libclamav/c++/llvm/include/llvm/Pass.h
index 909ccde..f3e4dfd 100644
--- a/libclamav/c++/llvm/include/llvm/Pass.h
+++ b/libclamav/c++/llvm/include/llvm/Pass.h
@@ -111,12 +111,10 @@ public:
   virtual void assignPassManager(PMStack &, 
                                  PassManagerType = PMT_Unknown) {}
   /// Check if available pass managers are suitable for this pass or not.
-  virtual void preparePassManager(PMStack &) {}
+  virtual void preparePassManager(PMStack &);
   
   ///  Return what kind of Pass Manager can manage this pass.
-  virtual PassManagerType getPotentialPassManagerType() const {
-    return PMT_Unknown; 
-  }
+  virtual PassManagerType getPotentialPassManagerType() const;
 
   // Access AnalysisResolver
   inline void setResolver(AnalysisResolver *AR) { 
@@ -132,9 +130,7 @@ public:
   /// particular analysis result to this function, it can then use the
   /// getAnalysis<AnalysisType>() function, below.
   ///
-  virtual void getAnalysisUsage(AnalysisUsage &) const {
-    // By default, no analysis results are used, all are invalidated.
-  }
+  virtual void getAnalysisUsage(AnalysisUsage &) const;
 
   /// releaseMemory() - This member can be implemented by a pass if it wants to
   /// be able to release its memory when it is no longer needed.  The default
@@ -147,11 +143,11 @@ public:
   /// Optionally implement this function to release pass memory when it is no
   /// longer used.
   ///
-  virtual void releaseMemory() {}
+  virtual void releaseMemory();
 
  /// verifyAnalysis() - This member can be implemented by an analysis pass to
   /// check state of analysis information. 
-  virtual void verifyAnalysis() const {}
+  virtual void verifyAnalysis() const;
 
   // dumpPassStructure - Implement the -debug-passes=PassStructure option
   virtual void dumpPassStructure(unsigned Offset = 0);
@@ -221,9 +217,7 @@ public:
                                  PassManagerType T = PMT_ModulePassManager);
 
   ///  Return what kind of Pass Manager can manage this pass.
-  virtual PassManagerType getPotentialPassManagerType() const {
-    return PMT_ModulePassManager;
-  }
+  virtual PassManagerType getPotentialPassManagerType() const;
 
   explicit ModulePass(intptr_t pid) : Pass(pid) {}
   explicit ModulePass(const void *pid) : Pass(pid) {}
@@ -245,7 +239,7 @@ public:
   /// and if it does, the overloaded version of initializePass may get access to
   /// these passes with getAnalysis<>.
   ///
-  virtual void initializePass() {}
+  virtual void initializePass();
 
   /// ImmutablePasses are never run.
   ///
@@ -276,7 +270,7 @@ public:
   /// doInitialization - Virtual method overridden by subclasses to do
   /// any necessary per-module initialization.
   ///
-  virtual bool doInitialization(Module &) { return false; }
+  virtual bool doInitialization(Module &);
   
  /// runOnFunction - Virtual method overridden by subclasses to do the
   /// per-function processing of the pass.
@@ -286,7 +280,7 @@ public:
   /// doFinalization - Virtual method overriden by subclasses to do any post
   /// processing needed after all passes have run.
   ///
-  virtual bool doFinalization(Module &) { return false; }
+  virtual bool doFinalization(Module &);
 
   /// runOnModule - On a module, we run this pass by initializing,
  /// runOnFunction'ing once for every function in the module, then by
@@ -303,9 +297,7 @@ public:
                                  PassManagerType T = PMT_FunctionPassManager);
 
   ///  Return what kind of Pass Manager can manage this pass.
-  virtual PassManagerType getPotentialPassManagerType() const {
-    return PMT_FunctionPassManager;
-  }
+  virtual PassManagerType getPotentialPassManagerType() const;
 };
 
 
@@ -328,12 +320,12 @@ public:
   /// doInitialization - Virtual method overridden by subclasses to do
   /// any necessary per-module initialization.
   ///
-  virtual bool doInitialization(Module &) { return false; }
+  virtual bool doInitialization(Module &);
 
   /// doInitialization - Virtual method overridden by BasicBlockPass subclasses
   /// to do any necessary per-function initialization.
   ///
-  virtual bool doInitialization(Function &) { return false; }
+  virtual bool doInitialization(Function &);
 
  /// runOnBasicBlock - Virtual method overridden by subclasses to do the
   /// per-basicblock processing of the pass.
@@ -343,12 +335,12 @@ public:
  /// doFinalization - Virtual method overridden by BasicBlockPass subclasses to
   /// do any post processing needed after all passes have run.
   ///
-  virtual bool doFinalization(Function &) { return false; }
+  virtual bool doFinalization(Function &);
 
  /// doFinalization - Virtual method overridden by subclasses to do any post
   /// processing needed after all passes have run.
   ///
-  virtual bool doFinalization(Module &) { return false; }
+  virtual bool doFinalization(Module &);
 
 
   // To run this pass on a function, we simply call runOnBasicBlock once for
@@ -360,9 +352,7 @@ public:
                                  PassManagerType T = PMT_BasicBlockPassManager);
 
   ///  Return what kind of Pass Manager can manage this pass.
-  virtual PassManagerType getPotentialPassManagerType() const {
-    return PMT_BasicBlockPassManager; 
-  }
+  virtual PassManagerType getPotentialPassManagerType() const;
 };
 
 /// If the user specifies the -time-passes argument on an LLVM tool command line
diff --git a/libclamav/c++/llvm/include/llvm/Support/Compiler.h b/libclamav/c++/llvm/include/llvm/Support/Compiler.h
index da31f98..8861a20 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Compiler.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Compiler.h
@@ -70,6 +70,16 @@
 #define DISABLE_INLINE
 #endif
 
+// ALWAYS_INLINE - On compilers where we have a directive to do so, mark a
+// method "always inline" because it is performance sensitive.
+#if (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
+#define ALWAYS_INLINE __attribute__((always_inline))
+#else
+// TODO: No idea how to do this with MSVC.
+#define ALWAYS_INLINE
+#endif
+
+
 #ifdef __GNUC__
 #define NORETURN __attribute__((noreturn))
 #elif defined(_MSC_VER)
diff --git a/libclamav/c++/llvm/include/llvm/Support/DebugLoc.h b/libclamav/c++/llvm/include/llvm/Support/DebugLoc.h
index 362390f..6814f63 100644
--- a/libclamav/c++/llvm/include/llvm/Support/DebugLoc.h
+++ b/libclamav/c++/llvm/include/llvm/Support/DebugLoc.h
@@ -66,7 +66,7 @@ namespace llvm {
   };
 
   // Specialize DenseMapInfo for DebugLocTuple.
-  template<>  struct DenseMapInfo<DebugLocTuple> {
+  template<> struct DenseMapInfo<DebugLocTuple> {
     static inline DebugLocTuple getEmptyKey() {
       return DebugLocTuple(0, 0, ~0U, ~0U);
     }
@@ -85,9 +85,9 @@ namespace llvm {
              LHS.Line         == RHS.Line &&
              LHS.Col          == RHS.Col;
     }
-
-    static bool isPod() { return true; }
   };
+  template <> struct isPodLike<DebugLocTuple> {static const bool value = true;};
+
 
   /// DebugLocTracker - This class tracks debug location information.
   ///
diff --git a/libclamav/c++/llvm/include/llvm/Support/ValueHandle.h b/libclamav/c++/llvm/include/llvm/Support/ValueHandle.h
index a9872a7..82c3cae 100644
--- a/libclamav/c++/llvm/include/llvm/Support/ValueHandle.h
+++ b/libclamav/c++/llvm/include/llvm/Support/ValueHandle.h
@@ -254,15 +254,18 @@ struct DenseMapInfo<AssertingVH<T> > {
   static bool isEqual(const AssertingVH<T> &LHS, const AssertingVH<T> &RHS) {
     return LHS == RHS;
   }
-  static bool isPod() {
+};
+  
+template <typename T>
+struct isPodLike<AssertingVH<T> > {
 #ifdef NDEBUG
-    return true;
+  static const bool value = true;
 #else
-    return false;
+  static const bool value = false;
 #endif
-  }
 };
 
+
 /// TrackingVH - This is a value handle that tracks a Value (or Value subclass),
 /// even across RAUW operations.
 ///
diff --git a/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h b/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h
index a78e81f..2b3341d 100644
--- a/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h
+++ b/libclamav/c++/llvm/include/llvm/Support/raw_ostream.h
@@ -186,14 +186,12 @@ public:
    // Inline fast path, particularly for constant strings where a sufficiently
     // smart compiler will simplify strlen.
 
-    this->operator<<(StringRef(Str));
-    return *this;
+    return this->operator<<(StringRef(Str));
   }
 
   raw_ostream &operator<<(const std::string &Str) {
     // Avoid the fast path, it would only increase code size for a marginal win.
-    write(Str.data(), Str.length());
-    return *this;
+    return write(Str.data(), Str.length());
   }
 
   raw_ostream &operator<<(unsigned long N);
@@ -202,13 +200,11 @@ public:
   raw_ostream &operator<<(long long N);
   raw_ostream &operator<<(const void *P);
   raw_ostream &operator<<(unsigned int N) {
-    this->operator<<(static_cast<unsigned long>(N));
-    return *this;
+    return this->operator<<(static_cast<unsigned long>(N));
   }
 
   raw_ostream &operator<<(int N) {
-    this->operator<<(static_cast<long>(N));
-    return *this;
+    return this->operator<<(static_cast<long>(N));
   }
 
   raw_ostream &operator<<(double N);
diff --git a/libclamav/c++/llvm/include/llvm/Support/type_traits.h b/libclamav/c++/llvm/include/llvm/Support/type_traits.h
index ce916b5..515295b 100644
--- a/libclamav/c++/llvm/include/llvm/Support/type_traits.h
+++ b/libclamav/c++/llvm/include/llvm/Support/type_traits.h
@@ -17,13 +17,15 @@
 #ifndef LLVM_SUPPORT_TYPE_TRAITS_H
 #define LLVM_SUPPORT_TYPE_TRAITS_H
 
+#include <utility>
+
 // This is actually the conforming implementation which works with abstract
 // classes.  However, enough compilers have trouble with it that most will use
 // the one in boost/type_traits/object_traits.hpp. This implementation actually
 // works with VC7.0, but other interactions seem to fail when we use it.
 
 namespace llvm {
-
+  
 namespace dont_use
 {
     // These two functions should never be used. They are helpers to
@@ -48,6 +50,23 @@ struct is_class
  public:
     enum { value = sizeof(char) == sizeof(dont_use::is_class_helper<T>(0)) };
 };
+  
+  
+/// isPodLike - This is a type trait that is used to determine whether a given
+/// type can be copied around with memcpy instead of running ctors etc.
+template <typename T>
+struct isPodLike {
+  // If we don't know anything else, we can (at least) assume that all non-class
+  // types are PODs.
+  static const bool value = !is_class<T>::value;
+};
+
+// std::pair's are pod-like if their elements are.
+template<typename T, typename U>
+struct isPodLike<std::pair<T, U> > {
+  static const bool value = isPodLike<T>::value && isPodLike<U>::value;
+};
+  
 
 /// \brief Metafunction that determines whether the two given types are 
 /// equivalent.
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
index 91ee923..1bcd6fd 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
@@ -286,11 +286,10 @@ public:
   ///    just return false, leaving TBB/FBB null.
   /// 2. If this block ends with only an unconditional branch, it sets TBB to be
   ///    the destination block.
-  /// 3. If this block ends with an conditional branch and it falls through to
-  ///    a successor block, it sets TBB to be the branch destination block and
-  ///    a list of operands that evaluate the condition. These
-  ///    operands can be passed to other TargetInstrInfo methods to create new
-  ///    branches.
+  /// 3. If this block ends with a conditional branch and it falls through to a
+  ///    successor block, it sets TBB to be the branch destination block and a
+  ///    list of operands that evaluate the condition. These operands can be
+  ///    passed to other TargetInstrInfo methods to create new branches.
   /// 4. If this block ends with a conditional branch followed by an
   ///    unconditional branch, it returns the 'true' destination in TBB, the
   ///    'false' destination in FBB, and a list of operands that evaluate the
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
index e4ea5a5..9536e04 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
@@ -972,7 +972,7 @@ protected:
  /// not work with the specified type and indicate what to do about it.
   void setLoadExtAction(unsigned ExtType, MVT VT,
                       LegalizeAction Action) {
-    assert((unsigned)VT.SimpleTy < MVT::LAST_VALUETYPE &&
+    assert((unsigned)VT.SimpleTy*2 < 63 &&
            ExtType < array_lengthof(LoadExtActions) &&
            "Table isn't big enough!");
     LoadExtActions[ExtType] &= ~(uint64_t(3UL) << VT.SimpleTy*2);
@@ -984,7 +984,7 @@ protected:
   void setTruncStoreAction(MVT ValVT, MVT MemVT,
                            LegalizeAction Action) {
     assert((unsigned)ValVT.SimpleTy < array_lengthof(TruncStoreActions) &&
-           (unsigned)MemVT.SimpleTy < MVT::LAST_VALUETYPE &&
+           (unsigned)MemVT.SimpleTy*2 < 63 &&
            "Table isn't big enough!");
     TruncStoreActions[ValVT.SimpleTy] &= ~(uint64_t(3UL)  << MemVT.SimpleTy*2);
     TruncStoreActions[ValVT.SimpleTy] |= (uint64_t)Action << MemVT.SimpleTy*2;
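The revised asserts make sense next to the table they guard. A minimal sketch (hypothetical names; the real tables are `LoadExtActions`/`TruncStoreActions`) of packing one 2-bit legalize action per value type into a single `uint64_t`: with `Flag` and `isVoid` renumbered to 32 and 33, `LAST_VALUETYPE` (34) would shift past bit 63, so the bound is now placed on the shift amount itself.

```cpp
#include <assert.h>
#include <stdint.h>

// Same four actions as LLVM's LegalizeAction, two bits each.
enum Action { Legal = 0, Promote = 1, Expand = 2, Custom = 3 };

uint64_t Table = 0;   // 2 bits per value type; at most 32 types per word

void setAction(unsigned SimpleTy, Action A) {
  // Mirrors the patched assert: the 2-bit field must fit in the word.
  assert(SimpleTy * 2 < 63 && "Table isn't big enough!");
  Table &= ~(uint64_t(3) << (SimpleTy * 2));   // clear the old field
  Table |= uint64_t(A) << (SimpleTy * 2);      // write the new action
}

Action getAction(unsigned SimpleTy) {
  return Action((Table >> (SimpleTy * 2)) & 3);
}
```

Setting one type's action leaves every other 2-bit field untouched, which is exactly what the clear-then-or pattern in the hunk above does.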
diff --git a/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp b/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp
index e12db81..4d5b312 100644
--- a/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IPA/Andersens.cpp
@@ -121,8 +121,6 @@ namespace {
 
       return *LHS == *RHS;
     }
-
-    static bool isPod() { return true; }
   };
 
   class Andersens : public ModulePass, public AliasAnalysis,
diff --git a/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp b/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
index 37747b6..627dbbb 100644
--- a/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
@@ -53,7 +53,7 @@ static bool containsAddRecFromDifferentLoop(const SCEV *S, Loop *L) {
       if (newLoop == L)
         return false;
       // if newLoop is an outer loop of L, this is OK.
-      if (!LoopInfo::isNotAlreadyContainedIn(L, newLoop))
+      if (newLoop->contains(L->getHeader()))
         return false;
     }
     return true;
@@ -307,6 +307,7 @@ bool IVUsers::runOnLoop(Loop *l, LPPassManager &LPM) {
   for (BasicBlock::iterator I = L->getHeader()->begin(); isa<PHINode>(I); ++I)
     AddUsersIfInteresting(I);
 
+  Processed.clear();
   return false;
 }
 
@@ -369,7 +370,7 @@ void IVUsers::dump() const {
 void IVUsers::releaseMemory() {
   IVUsesByStride.clear();
   StrideOrder.clear();
-  Processed.clear();
+  IVUses.clear();
 }
 
 void IVStrideUse::deleted() {
diff --git a/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp b/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
index 5a7f691..c49c6e1 100644
--- a/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ProfileInfo.cpp
@@ -44,19 +44,19 @@ ProfileInfoT<Function, BasicBlock>::~ProfileInfoT() {
 }
 
 template<>
-char ProfileInfo::ID = 0;
+char ProfileInfoT<Function,BasicBlock>::ID = 0;
 
 template<>
-char MachineProfileInfo::ID = 0;
+char ProfileInfoT<MachineFunction, MachineBasicBlock>::ID = 0;
 
 template<>
-const double ProfileInfo::MissingValue = -1;
+const double ProfileInfoT<Function,BasicBlock>::MissingValue = -1;
 
-template<>
-const double MachineProfileInfo::MissingValue = -1;
+template<> const
+double ProfileInfoT<MachineFunction, MachineBasicBlock>::MissingValue = -1;
 
-template<>
-double ProfileInfo::getExecutionCount(const BasicBlock *BB) {
+template<> double
+ProfileInfoT<Function,BasicBlock>::getExecutionCount(const BasicBlock *BB) {
   std::map<const Function*, BlockCounts>::iterator J =
     BlockInformation.find(BB->getParent());
   if (J != BlockInformation.end()) {
@@ -118,7 +118,8 @@ double ProfileInfo::getExecutionCount(const BasicBlock *BB) {
 }
 
 template<>
-double MachineProfileInfo::getExecutionCount(const MachineBasicBlock *MBB) {
+double ProfileInfoT<MachineFunction, MachineBasicBlock>::
+        getExecutionCount(const MachineBasicBlock *MBB) {
   std::map<const MachineFunction*, BlockCounts>::iterator J =
     BlockInformation.find(MBB->getParent());
   if (J != BlockInformation.end()) {
@@ -131,7 +132,7 @@ double MachineProfileInfo::getExecutionCount(const MachineBasicBlock *MBB) {
 }
 
 template<>
-double ProfileInfo::getExecutionCount(const Function *F) {
+double ProfileInfoT<Function,BasicBlock>::getExecutionCount(const Function *F) {
   std::map<const Function*, double>::iterator J =
     FunctionInformation.find(F);
   if (J != FunctionInformation.end())
@@ -147,7 +148,8 @@ double ProfileInfo::getExecutionCount(const Function *F) {
 }
 
 template<>
-double MachineProfileInfo::getExecutionCount(const MachineFunction *MF) {
+double ProfileInfoT<MachineFunction, MachineBasicBlock>::
+        getExecutionCount(const MachineFunction *MF) {
   std::map<const MachineFunction*, double>::iterator J =
     FunctionInformation.find(MF);
   if (J != FunctionInformation.end())
@@ -159,21 +161,23 @@ double MachineProfileInfo::getExecutionCount(const MachineFunction *MF) {
 }
 
 template<>
-void ProfileInfo::setExecutionCount(const BasicBlock *BB, double w) {
+void ProfileInfoT<Function,BasicBlock>::
+        setExecutionCount(const BasicBlock *BB, double w) {
   DEBUG(errs() << "Creating Block " << BB->getName() 
                << " (weight: " << format("%.20g",w) << ")\n");
   BlockInformation[BB->getParent()][BB] = w;
 }
 
 template<>
-void MachineProfileInfo::setExecutionCount(const MachineBasicBlock *MBB, double w) {
+void ProfileInfoT<MachineFunction, MachineBasicBlock>::
+        setExecutionCount(const MachineBasicBlock *MBB, double w) {
   DEBUG(errs() << "Creating Block " << MBB->getBasicBlock()->getName()
                << " (weight: " << format("%.20g",w) << ")\n");
   BlockInformation[MBB->getParent()][MBB] = w;
 }
 
 template<>
-void ProfileInfo::addEdgeWeight(Edge e, double w) {
+void ProfileInfoT<Function,BasicBlock>::addEdgeWeight(Edge e, double w) {
   double oldw = getEdgeWeight(e);
   assert (oldw != MissingValue && "Adding weight to Edge with no previous weight");
   DEBUG(errs() << "Adding to Edge " << e
@@ -182,7 +186,8 @@ void ProfileInfo::addEdgeWeight(Edge e, double w) {
 }
 
 template<>
-void ProfileInfo::addExecutionCount(const BasicBlock *BB, double w) {
+void ProfileInfoT<Function,BasicBlock>::
+        addExecutionCount(const BasicBlock *BB, double w) {
   double oldw = getExecutionCount(BB);
   assert (oldw != MissingValue && "Adding weight to Block with no previous weight");
   DEBUG(errs() << "Adding to Block " << BB->getName()
@@ -191,7 +196,7 @@ void ProfileInfo::addExecutionCount(const BasicBlock *BB, double w) {
 }
 
 template<>
-void ProfileInfo::removeBlock(const BasicBlock *BB) {
+void ProfileInfoT<Function,BasicBlock>::removeBlock(const BasicBlock *BB) {
   std::map<const Function*, BlockCounts>::iterator J =
     BlockInformation.find(BB->getParent());
   if (J == BlockInformation.end()) return;
@@ -201,7 +206,7 @@ void ProfileInfo::removeBlock(const BasicBlock *BB) {
 }
 
 template<>
-void ProfileInfo::removeEdge(Edge e) {
+void ProfileInfoT<Function,BasicBlock>::removeEdge(Edge e) {
   std::map<const Function*, EdgeWeights>::iterator J =
     EdgeInformation.find(getFunction(e));
   if (J == EdgeInformation.end()) return;
@@ -211,7 +216,8 @@ void ProfileInfo::removeEdge(Edge e) {
 }
 
 template<>
-void ProfileInfo::replaceEdge(const Edge &oldedge, const Edge &newedge) {
+void ProfileInfoT<Function,BasicBlock>::
+        replaceEdge(const Edge &oldedge, const Edge &newedge) {
   double w;
   if ((w = getEdgeWeight(newedge)) == MissingValue) {
     w = getEdgeWeight(oldedge);
@@ -225,8 +231,9 @@ void ProfileInfo::replaceEdge(const Edge &oldedge, const Edge &newedge) {
 }
 
 template<>
-const BasicBlock *ProfileInfo::GetPath(const BasicBlock *Src, const BasicBlock *Dest,
-                                       Path &P, unsigned Mode) {
+const BasicBlock *ProfileInfoT<Function,BasicBlock>::
+        GetPath(const BasicBlock *Src, const BasicBlock *Dest,
+                Path &P, unsigned Mode) {
   const BasicBlock *BB = 0;
   bool hasFoundPath = false;
 
@@ -268,7 +275,8 @@ const BasicBlock *ProfileInfo::GetPath(const BasicBlock *Src, const BasicBlock *
 }
 
 template<>
-void ProfileInfo::divertFlow(const Edge &oldedge, const Edge &newedge) {
+void ProfileInfoT<Function,BasicBlock>::
+        divertFlow(const Edge &oldedge, const Edge &newedge) {
   DEBUG(errs() << "Diverting " << oldedge << " via " << newedge );
 
   // First check if the old edge was taken, if not, just delete it...
@@ -302,8 +310,8 @@ void ProfileInfo::divertFlow(const Edge &oldedge, const Edge &newedge) {
 /// This checks all edges of the function the blocks reside in and replaces the
 /// occurrences of RmBB with DestBB.
 template<>
-void ProfileInfo::replaceAllUses(const BasicBlock *RmBB, 
-                                 const BasicBlock *DestBB) {
+void ProfileInfoT<Function,BasicBlock>::
+        replaceAllUses(const BasicBlock *RmBB, const BasicBlock *DestBB) {
   DEBUG(errs() << "Replacing " << RmBB->getName()
                << " with " << DestBB->getName() << "\n");
   const Function *F = DestBB->getParent();
@@ -352,10 +360,10 @@ void ProfileInfo::replaceAllUses(const BasicBlock *RmBB,
 /// Since it's possible that there is more than one edge in the CFG from FirstBB
 /// to SecondBB, it's necessary to redirect the flow proportionally.
 template<>
-void ProfileInfo::splitEdge(const BasicBlock *FirstBB,
-                            const BasicBlock *SecondBB,
-                            const BasicBlock *NewBB,
-                            bool MergeIdenticalEdges) {
+void ProfileInfoT<Function,BasicBlock>::splitEdge(const BasicBlock *FirstBB,
+                                                  const BasicBlock *SecondBB,
+                                                  const BasicBlock *NewBB,
+                                                  bool MergeIdenticalEdges) {
   const Function *F = FirstBB->getParent();
   std::map<const Function*, EdgeWeights>::iterator J =
     EdgeInformation.find(F);
@@ -398,7 +406,8 @@ void ProfileInfo::splitEdge(const BasicBlock *FirstBB,
 }
 
 template<>
-void ProfileInfo::splitBlock(const BasicBlock *Old, const BasicBlock* New) {
+void ProfileInfoT<Function,BasicBlock>::splitBlock(const BasicBlock *Old,
+                                                   const BasicBlock* New) {
   const Function *F = Old->getParent();
   std::map<const Function*, EdgeWeights>::iterator J =
     EdgeInformation.find(F);
@@ -426,8 +435,10 @@ void ProfileInfo::splitBlock(const BasicBlock *Old, const BasicBlock* New) {
 }
 
 template<>
-void ProfileInfo::splitBlock(const BasicBlock *BB, const BasicBlock* NewBB,
-                            BasicBlock *const *Preds, unsigned NumPreds) {
+void ProfileInfoT<Function,BasicBlock>::splitBlock(const BasicBlock *BB,
+                                                   const BasicBlock* NewBB,
+                                                   BasicBlock *const *Preds,
+                                                   unsigned NumPreds) {
   const Function *F = BB->getParent();
   std::map<const Function*, EdgeWeights>::iterator J =
     EdgeInformation.find(F);
@@ -461,7 +472,8 @@ void ProfileInfo::splitBlock(const BasicBlock *BB, const BasicBlock* NewBB,
 }
 
 template<>
-void ProfileInfo::transfer(const Function *Old, const Function *New) {
+void ProfileInfoT<Function,BasicBlock>::transfer(const Function *Old,
+                                                 const Function *New) {
   DEBUG(errs() << "Replacing Function " << Old->getName() << " with "
                << New->getName() << "\n");
   std::map<const Function*, EdgeWeights>::iterator J =
@@ -474,8 +486,8 @@ void ProfileInfo::transfer(const Function *Old, const Function *New) {
   FunctionInformation.erase(Old);
 }
 
-static double readEdgeOrRemember(ProfileInfo::Edge edge, double w, ProfileInfo::Edge &tocalc,
-                                 unsigned &uncalc) {
+static double readEdgeOrRemember(ProfileInfo::Edge edge, double w,
+                                 ProfileInfo::Edge &tocalc, unsigned &uncalc) {
   if (w == ProfileInfo::MissingValue) {
     tocalc = edge;
     uncalc++;
@@ -486,7 +498,9 @@ static double readEdgeOrRemember(ProfileInfo::Edge edge, double w, ProfileInfo::
 }
 
 template<>
-bool ProfileInfo::CalculateMissingEdge(const BasicBlock *BB, Edge &removed, bool assumeEmptySelf) {
+bool ProfileInfoT<Function,BasicBlock>::
+        CalculateMissingEdge(const BasicBlock *BB, Edge &removed,
+                             bool assumeEmptySelf) {
   Edge edgetocalc;
   unsigned uncalculated = 0;
 
@@ -562,7 +576,7 @@ static void readEdge(ProfileInfo *PI, ProfileInfo::Edge e, double &calcw, std::s
 }
 
 template<>
-bool ProfileInfo::EstimateMissingEdges(const BasicBlock *BB) {
+bool ProfileInfoT<Function,BasicBlock>::EstimateMissingEdges(const BasicBlock *BB) {
   bool hasNoSuccessors = false;
 
   double inWeight = 0;
@@ -619,7 +633,7 @@ bool ProfileInfo::EstimateMissingEdges(const BasicBlock *BB) {
 }
 
 template<>
-void ProfileInfo::repair(const Function *F) {
+void ProfileInfoT<Function,BasicBlock>::repair(const Function *F) {
 //  if (getExecutionCount(&(F->getEntryBlock())) == 0) {
 //    for (Function::const_iterator FI = F->begin(), FE = F->end();
 //         FI != FE; ++FI) {
diff --git a/libclamav/c++/llvm/lib/Bitcode/Reader/Deserialize.cpp b/libclamav/c++/llvm/lib/Bitcode/Reader/Deserialize.cpp
index 67607ef..b8e720a 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Reader/Deserialize.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Reader/Deserialize.cpp
@@ -413,7 +413,7 @@ uintptr_t Deserializer::ReadInternalRefPtr() {
   return GetFinalPtr(E);
 }
 
-void Deserializer::BPEntry::SetPtr(BPNode*& FreeList, void* P) {
+void BPEntry::SetPtr(BPNode*& FreeList, void* P) {
   BPNode* Last = NULL;
   
   for (BPNode* N = Head; N != NULL; N=N->Next) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
index 0b1a196..c200a46 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
@@ -906,7 +906,7 @@ void DwarfDebug::constructTypeDIE(DIE &Buffer, DICompositeType CTy) {
         continue;
       DIE *ElemDie = NULL;
       if (Element.getTag() == dwarf::DW_TAG_subprogram)
-        ElemDie = createMemberSubprogramDIE(DISubprogram(Element.getNode()));
+        ElemDie = createSubprogramDIE(DISubprogram(Element.getNode()));
       else
         ElemDie = createMemberDIE(DIDerivedType(Element.getNode()));
       Buffer.addChild(ElemDie);
@@ -1098,11 +1098,13 @@ DIE *DwarfDebug::createMemberDIE(const DIDerivedType &DT) {
   return MemberDie;
 }
 
-/// createRawSubprogramDIE - Create new partially incomplete DIE. This is
-/// a helper routine used by createMemberSubprogramDIE and 
-/// createSubprogramDIE.
-DIE *DwarfDebug::createRawSubprogramDIE(const DISubprogram &SP) {
-  DIE *SPDie = new DIE(dwarf::DW_TAG_subprogram);
+/// createSubprogramDIE - Create new DIE using SP.
+DIE *DwarfDebug::createSubprogramDIE(const DISubprogram &SP, bool MakeDecl) {
+  DIE *SPDie = ModuleCU->getDIE(SP.getNode());
+  if (SPDie)
+    return SPDie;
+
+  SPDie = new DIE(dwarf::DW_TAG_subprogram);
   addString(SPDie, dwarf::DW_AT_name, dwarf::DW_FORM_string, SP.getName());
 
   StringRef LinkageName = SP.getLinkageName();
@@ -1144,52 +1146,7 @@ DIE *DwarfDebug::createRawSubprogramDIE(const DISubprogram &SP) {
     ContainingTypeMap.insert(std::make_pair(SPDie, WeakVH(SP.getContainingType().getNode())));
   }
 
-  return SPDie;
-}
-
-/// createMemberSubprogramDIE - Create new member DIE using SP. This routine
-/// always returns a die with DW_AT_declaration attribute.
-DIE *DwarfDebug::createMemberSubprogramDIE(const DISubprogram &SP) {
-  DIE *SPDie = ModuleCU->getDIE(SP.getNode());
-  if (!SPDie)
-    SPDie = createSubprogramDIE(SP);
-
-  // If SPDie has DW_AT_declaration then reuse it.
-  if (!SP.isDefinition())
-    return SPDie;
-
-  // Otherwise create new DIE for the declaration. First push definition
-  // DIE at the top level.
-  if (TopLevelDIEs.insert(SPDie))
-    TopLevelDIEsVector.push_back(SPDie);
-
-  SPDie = createRawSubprogramDIE(SP);
-
-  // Add arguments. 
-  DICompositeType SPTy = SP.getType();
-  DIArray Args = SPTy.getTypeArray();
-  unsigned SPTag = SPTy.getTag();
-  if (SPTag == dwarf::DW_TAG_subroutine_type)
-    for (unsigned i = 1, N =  Args.getNumElements(); i < N; ++i) {
-      DIE *Arg = new DIE(dwarf::DW_TAG_formal_parameter);
-      addType(Arg, DIType(Args.getElement(i).getNode()));
-      addUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1); // ??
-      SPDie->addChild(Arg);
-    }
-
-  addUInt(SPDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
-  return SPDie;
-}
-
-/// createSubprogramDIE - Create new DIE using SP.
-DIE *DwarfDebug::createSubprogramDIE(const DISubprogram &SP) {
-  DIE *SPDie = ModuleCU->getDIE(SP.getNode());
-  if (SPDie)
-    return SPDie;
-
-  SPDie = createRawSubprogramDIE(SP);
-
-  if (!SP.isDefinition()) {
+  if (MakeDecl || !SP.isDefinition()) {
     addUInt(SPDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
 
     // Add arguments. Do not add arguments for subprogram definition. They will
@@ -1310,6 +1267,28 @@ DIE *DwarfDebug::updateSubprogramScopeDIE(MDNode *SPNode) {
 
  DIE *SPDie = ModuleCU->getDIE(SPNode);
  assert (SPDie && "Unable to find subprogram DIE!");
+ DISubprogram SP(SPNode);
+ if (SP.isDefinition() && !SP.getContext().isCompileUnit()) {
+   addUInt(SPDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
+  // Add arguments. 
+   DICompositeType SPTy = SP.getType();
+   DIArray Args = SPTy.getTypeArray();
+   unsigned SPTag = SPTy.getTag();
+   if (SPTag == dwarf::DW_TAG_subroutine_type)
+     for (unsigned i = 1, N =  Args.getNumElements(); i < N; ++i) {
+       DIE *Arg = new DIE(dwarf::DW_TAG_formal_parameter);
+       addType(Arg, DIType(Args.getElement(i).getNode()));
+       addUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1); // ??
+       SPDie->addChild(Arg);
+     }
+   DIE *SPDeclDie = SPDie;
+   SPDie = new DIE(dwarf::DW_TAG_subprogram);
+   addDIEEntry(SPDie, dwarf::DW_AT_specification, dwarf::DW_FORM_ref4, 
+               SPDeclDie);
+   
+   ModuleCU->addDie(SPDie);
+ }
+   
  addLabel(SPDie, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
           DWLabel("func_begin", SubprogramCount));
  addLabel(SPDie, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
index 0e0064f..12ad322 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
@@ -350,17 +350,7 @@ class DwarfDebug : public Dwarf {
   DIE *createMemberDIE(const DIDerivedType &DT);
 
   /// createSubprogramDIE - Create new DIE using SP.
-  DIE *createSubprogramDIE(const DISubprogram &SP);
-
-  /// createMemberSubprogramDIE - Create new member DIE using SP. This
-  /// routine always returns a die with DW_AT_declaration attribute.
-
-  DIE *createMemberSubprogramDIE(const DISubprogram &SP);
-
-  /// createRawSubprogramDIE - Create new partially incomplete DIE. This is
-  /// a helper routine used by createMemberSubprogramDIE and 
-  /// createSubprogramDIE.
-  DIE *createRawSubprogramDIE(const DISubprogram &SP);
+  DIE *createSubprogramDIE(const DISubprogram &SP, bool MakeDecl = false);
 
   /// findCompileUnit - Get the compile unit for the given descriptor. 
   ///
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
index 1c8b8f4..3fd077f 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
@@ -292,14 +292,13 @@ void DwarfException::EmitFDE(const FunctionEHFrameInfo &EHFrameInfo) {
       Asm->EmitULEB128Bytes(is4Byte ? 4 : 8);
       Asm->EOL("Augmentation size");
 
+      // We force 32-bits here because we've encoded our LSDA in the CIE with
+      // `dwarf::DW_EH_PE_sdata4'. And the CIE and FDE should agree.
       if (EHFrameInfo.hasLandingPads)
-        EmitReference("exception", EHFrameInfo.Number, true, false);
-      else {
-        if (is4Byte)
-          Asm->EmitInt32((int)0);
-        else
-          Asm->EmitInt64((int)0);
-      }
+        EmitReference("exception", EHFrameInfo.Number, true, true);
+      else
+        Asm->EmitInt32((int)0);
+
       Asm->EOL("Language Specific Data Area");
     } else {
       Asm->EmitULEB128Bytes(0);
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h
index aff1665..aa01c5b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h
@@ -119,7 +119,6 @@ class DwarfException : public Dwarf {
     static inline unsigned getTombstoneKey() { return -2U; }
     static unsigned getHashValue(const unsigned &Key) { return Key; }
     static bool isEqual(unsigned LHS, unsigned RHS) { return LHS == RHS; }
-    static bool isPod() { return true; }
   };
 
   /// PadRange - Structure holding a try-range and the associated landing pad.
diff --git a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
index 7ac8bda..3887e6d 100644
--- a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
@@ -1205,11 +1205,11 @@ ReoptimizeBlock:
     }
   }
 
-  // If the prior block doesn't fall through into this block and if this block
-  // doesn't fall through into some other block and it's not branching only to a
-  // landing pad, then see if we can find a place to move this block where a
-  // fall-through will happen.
-  if (!PrevBB.canFallThrough() && !MBB->BranchesToLandingPad(MBB)) {
+  // If the prior block doesn't fall through into this block, and if this
+  // block doesn't fall through into some other block, see if we can find a
+  // place to move this block where a fall-through will happen.
+  if (!PrevBB.canFallThrough()) {
+
     // Now we know that there was no fall-through into this block, check to
     // see if it has a fall-through into its successor.
     bool CurFallsThru = MBB->canFallThrough();
@@ -1221,32 +1221,28 @@ ReoptimizeBlock:
            E = MBB->pred_end(); PI != E; ++PI) {
         // Analyze the branch at the end of the pred.
         MachineBasicBlock *PredBB = *PI;
-        MachineFunction::iterator PredNextBB = PredBB; ++PredNextBB;
+        MachineFunction::iterator PredFallthrough = PredBB; ++PredFallthrough;
         MachineBasicBlock *PredTBB, *PredFBB;
         SmallVector<MachineOperand, 4> PredCond;
-        if (PredBB != MBB && !PredBB->canFallThrough()
-            && !TII->AnalyzeBranch(*PredBB, PredTBB, PredFBB, PredCond, true)
+        if (PredBB != MBB && !PredBB->canFallThrough() &&
+            !TII->AnalyzeBranch(*PredBB, PredTBB, PredFBB, PredCond, true)
             && (!CurFallsThru || !CurTBB || !CurFBB)
             && (!CurFallsThru || MBB->getNumber() >= PredBB->getNumber())) {
-          // If the current block doesn't fall through, just move it.  If the
-          // current block can fall through and does not end with a conditional
-          // branch, we need to append an unconditional jump to the (current)
-          // next block.  To avoid a possible compile-time infinite loop, move
-          // blocks only backward in this case.
-          // 
-          // Also, if there are already 2 branches here, we cannot add a third.
-          // I.e. we have the case:
-          // 
-          //     Bcc next
-          //     B elsewhere
-          //   next:
+          // If the current block doesn't fall through, just move it.
+          // If the current block can fall through and does not end with a
+          // conditional branch, we need to append an unconditional jump to
+          // the (current) next block.  To avoid a possible compile-time
+          // infinite loop, move blocks only backward in this case.
+          // Also, if there are already 2 branches here, we cannot add a third;
+          // this means we have the case
+          // Bcc next
+          // B elsewhere
+          // next:
           if (CurFallsThru) {
-            MachineBasicBlock *NextBB =
-              llvm::next(MachineFunction::iterator(MBB));
+            MachineBasicBlock *NextBB = llvm::next(MachineFunction::iterator(MBB));
             CurCond.clear();
             TII->InsertBranch(*MBB, NextBB, 0, CurCond);
           }
-
           MBB->moveAfter(PredBB);
           MadeChange = true;
           goto ReoptimizeBlock;
diff --git a/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt b/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
index 1fac395..7a969f0 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
@@ -1,6 +1,7 @@
 add_llvm_library(LLVMCodeGen
   AggressiveAntiDepBreaker.cpp
   BranchFolding.cpp
+  CalcSpillWeights.cpp
   CodePlacementOpt.cpp
   CriticalAntiDepBreaker.cpp
   DeadMachineInstructionElim.cpp
diff --git a/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp b/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp
new file mode 100644
index 0000000..dcffb8a
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp
@@ -0,0 +1,154 @@
+//===------------------------ CalcSpillWeights.cpp ------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "calcspillweights"
+
+#include "llvm/Function.h"
+#include "llvm/ADT/SmallSet.h"
+#include "llvm/CodeGen/CalcSpillWeights.h"
+#include "llvm/CodeGen/LiveIntervalAnalysis.h"
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/MachineLoopInfo.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/SlotIndexes.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+
+using namespace llvm;
+
+char CalculateSpillWeights::ID = 0;
+static RegisterPass<CalculateSpillWeights> X("calcspillweights",
+                                             "Calculate spill weights");
+
+void CalculateSpillWeights::getAnalysisUsage(AnalysisUsage &au) const {
+  au.addRequired<LiveIntervals>();
+  au.addRequired<MachineLoopInfo>();
+  au.setPreservesAll();
+  MachineFunctionPass::getAnalysisUsage(au);
+}
+
+bool CalculateSpillWeights::runOnMachineFunction(MachineFunction &fn) {
+
+  DEBUG(errs() << "********** Compute Spill Weights **********\n"
+               << "********** Function: "
+               << fn.getFunction()->getName() << '\n');
+
+  LiveIntervals *lis = &getAnalysis<LiveIntervals>();
+  MachineLoopInfo *loopInfo = &getAnalysis<MachineLoopInfo>();
+  const TargetInstrInfo *tii = fn.getTarget().getInstrInfo();
+  MachineRegisterInfo *mri = &fn.getRegInfo();
+
+  SmallSet<unsigned, 4> processed;
+  for (MachineFunction::iterator mbbi = fn.begin(), mbbe = fn.end();
+       mbbi != mbbe; ++mbbi) {
+    MachineBasicBlock* mbb = mbbi;
+    SlotIndex mbbEnd = lis->getMBBEndIdx(mbb);
+    MachineLoop* loop = loopInfo->getLoopFor(mbb);
+    unsigned loopDepth = loop ? loop->getLoopDepth() : 0;
+    bool isExiting = loop ? loop->isLoopExiting(mbb) : false;
+
+    for (MachineBasicBlock::const_iterator mii = mbb->begin(), mie = mbb->end();
+         mii != mie; ++mii) {
+      const MachineInstr *mi = mii;
+      if (tii->isIdentityCopy(*mi))
+        continue;
+
+      if (mi->getOpcode() == TargetInstrInfo::IMPLICIT_DEF)
+        continue;
+
+      for (unsigned i = 0, e = mi->getNumOperands(); i != e; ++i) {
+        const MachineOperand &mopi = mi->getOperand(i);
+        if (!mopi.isReg() || mopi.getReg() == 0)
+          continue;
+        unsigned reg = mopi.getReg();
+        if (!TargetRegisterInfo::isVirtualRegister(mopi.getReg()))
+          continue;
+        // Multiple uses of reg by the same instruction; they should not
+        // contribute to the spill weight again.
+        if (!processed.insert(reg))
+          continue;
+
+        bool hasDef = mopi.isDef();
+        bool hasUse = !hasDef;
+        for (unsigned j = i+1; j != e; ++j) {
+          const MachineOperand &mopj = mi->getOperand(j);
+          if (!mopj.isReg() || mopj.getReg() != reg)
+            continue;
+          hasDef |= mopj.isDef();
+          hasUse |= mopj.isUse();
+          if (hasDef && hasUse)
+            break;
+        }
+
+        LiveInterval &regInt = lis->getInterval(reg);
+        float weight = lis->getSpillWeight(hasDef, hasUse, loopDepth);
+        if (hasDef && isExiting) {
+          // Looks like this is a loop count variable update.
+          SlotIndex defIdx = lis->getInstructionIndex(mi).getDefIndex();
+          const LiveRange *dlr =
+            lis->getInterval(reg).getLiveRangeContaining(defIdx);
+          if (dlr->end > mbbEnd)
+            weight *= 3.0F;
+        }
+        regInt.weight += weight;
+      }
+      processed.clear();
+    }
+  }
+
+  for (LiveIntervals::iterator I = lis->begin(), E = lis->end(); I != E; ++I) {
+    LiveInterval &li = *I->second;
+    if (TargetRegisterInfo::isVirtualRegister(li.reg)) {
+      // If the live interval length is essentially zero, i.e. in every live
+      // range the use follows def immediately, it doesn't make sense to spill
+      // it and hope it will be easier to allocate for this li.
+      if (isZeroLengthInterval(&li)) {
+        li.weight = HUGE_VALF;
+        continue;
+      }
+
+      bool isLoad = false;
+      SmallVector<LiveInterval*, 4> spillIs;
+      if (lis->isReMaterializable(li, spillIs, isLoad)) {
+        // If all of the definitions of the interval are re-materializable,
+        // it is a preferred candidate for spilling. If none of the defs are
+        // loads, then it's potentially very cheap to re-materialize.
+        // FIXME: this gets much more complicated once we support non-trivial
+        // re-materialization.
+        if (isLoad)
+          li.weight *= 0.9F;
+        else
+          li.weight *= 0.5F;
+      }
+
+      // Slightly prefer live interval that has been assigned a preferred reg.
+      std::pair<unsigned, unsigned> Hint = mri->getRegAllocationHint(li.reg);
+      if (Hint.first || Hint.second)
+        li.weight *= 1.01F;
+
+      // Divide the weight of the interval by its size.  This encourages
+      // spilling of intervals that are large and have few uses, and
+      // discourages spilling of small intervals with many uses.
+      li.weight /= lis->getApproximateInstructionCount(li) * SlotIndex::NUM;
+    }
+  }
+  
+  return false;
+}
+
+/// Returns true if the given live interval is zero length.
+bool CalculateSpillWeights::isZeroLengthInterval(LiveInterval *li) const {
+  for (LiveInterval::Ranges::const_iterator
+       i = li->ranges.begin(), e = li->ranges.end(); i != e; ++i)
+    if (i->end.getPrevIndex() > i->start)
+      return false;
+  return true;
+}
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
index 80b4b0f..a58286d 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
@@ -13,16 +13,15 @@
 
 #include "llvm/CodeGen/MachineBasicBlock.h"
 #include "llvm/BasicBlock.h"
-#include "llvm/ADT/SmallSet.h"
-#include "llvm/Assembly/Writer.h"
 #include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetInstrDesc.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
-#include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Support/LeakDetector.h"
 #include "llvm/Support/raw_ostream.h"
+#include "llvm/Assembly/Writer.h"
 #include <algorithm>
 using namespace llvm;
 
@@ -449,28 +448,10 @@ void MachineBasicBlock::ReplaceUsesOfBlockWith(MachineBasicBlock *Old,
   addSuccessor(New);
 }
 
-/// BranchesToLandingPad - The basic block is a landing pad or branches only to
-/// a landing pad. No other instructions are present other than the
-/// unconditional branch.
-bool
-MachineBasicBlock::BranchesToLandingPad(const MachineBasicBlock *MBB) const {
-  SmallSet<const MachineBasicBlock*, 32> Visited;
-  const MachineBasicBlock *CurMBB = MBB;
-
-  while (!CurMBB->isLandingPad()) {
-    if (CurMBB->succ_size() != 1) break;
-    if (!Visited.insert(CurMBB)) break;
-    CurMBB = *CurMBB->succ_begin();
-  }
-
-  return CurMBB->isLandingPad();
-}
-
 /// CorrectExtraCFGEdges - Various pieces of code can cause excess edges in the
 /// CFG to be inserted.  If we have proven that MBB can only branch to DestA and
 /// DestB, remove any other MBB successors from the CFG.  DestA and DestB can
 /// be null.
-/// 
 /// Besides DestA and DestB, retain other edges leading to LandingPads
 /// (currently there can be only one; we don't check or require that here).
 /// Note it is possible that DestA and/or DestB are LandingPads.
@@ -483,16 +464,16 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
   MachineFunction::iterator FallThru =
     llvm::next(MachineFunction::iterator(this));
   
-  // If this block ends with a conditional branch that falls through to its
-  // successor, set DestB as the successor.
   if (isCond) {
+    // If this block ends with a conditional branch that falls through to its
+    // successor, set DestB as the successor.
     if (DestB == 0 && FallThru != getParent()->end()) {
       DestB = FallThru;
       AddedFallThrough = true;
     }
   } else {
     // If this is an unconditional branch with no explicit dest, it must just be
-    // a fallthrough into DestB.
+    // a fallthrough into DestA.
     if (DestA == 0 && FallThru != getParent()->end()) {
       DestA = FallThru;
       AddedFallThrough = true;
@@ -500,17 +481,16 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
   }
   
   MachineBasicBlock::succ_iterator SI = succ_begin();
-  const MachineBasicBlock *OrigDestA = DestA, *OrigDestB = DestB;
+  MachineBasicBlock *OrigDestA = DestA, *OrigDestB = DestB;
   while (SI != succ_end()) {
-    const MachineBasicBlock *MBB = *SI;
-    if (MBB == DestA) {
+    if (*SI == DestA) {
       DestA = 0;
       ++SI;
-    } else if (MBB == DestB) {
+    } else if (*SI == DestB) {
       DestB = 0;
       ++SI;
-    } else if (MBB != OrigDestA && MBB != OrigDestB &&
-               BranchesToLandingPad(MBB)) {
+    } else if ((*SI)->isLandingPad() && 
+               *SI!=OrigDestA && *SI!=OrigDestB) {
       ++SI;
     } else {
       // Otherwise, this is a superfluous edge, remove it.
@@ -518,14 +498,12 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
       MadeChange = true;
     }
   }
-
   if (!AddedFallThrough) {
     assert(DestA == 0 && DestB == 0 &&
            "MachineCFG is missing edges!");
   } else if (isCond) {
     assert(DestA == 0 && "MachineCFG is missing edges!");
   }
-
   return MadeChange;
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
index afd7b88..b0d7a47 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
@@ -16,6 +16,7 @@
 
 #define DEBUG_TYPE "pre-alloc-split"
 #include "VirtRegMap.h"
+#include "llvm/CodeGen/CalcSpillWeights.h"
 #include "llvm/CodeGen/LiveIntervalAnalysis.h"
 #include "llvm/CodeGen/LiveStackAnalysis.h"
 #include "llvm/CodeGen/MachineDominators.h"
@@ -104,6 +105,7 @@ namespace {
       AU.addRequired<LiveStacks>();
       AU.addPreserved<LiveStacks>();
       AU.addPreserved<RegisterCoalescer>();
+      AU.addPreserved<CalculateSpillWeights>();
       if (StrongPHIElim)
         AU.addPreservedID(StrongPHIEliminationID);
       else
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
index 2a43811..c02d47b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocLinearScan.cpp
@@ -16,6 +16,7 @@
 #include "VirtRegRewriter.h"
 #include "Spiller.h"
 #include "llvm/Function.h"
+#include "llvm/CodeGen/CalcSpillWeights.h"
 #include "llvm/CodeGen/LiveIntervalAnalysis.h"
 #include "llvm/CodeGen/LiveStackAnalysis.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
@@ -187,6 +188,7 @@ namespace {
       // Make sure PassManager knows which analyses to make available
       // to coalescing and which analyses coalescing invalidates.
       AU.addRequiredTransitive<RegisterCoalescer>();
+      AU.addRequired<CalculateSpillWeights>();
       if (PreSplitIntervals)
         AU.addRequiredID(PreAllocSplittingID);
       AU.addRequired<LiveStacks>();
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp
index c677d34..c2014a7 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp
@@ -36,6 +36,7 @@
 #include "PBQP/Heuristics/Briggs.h"
 #include "VirtRegMap.h"
 #include "VirtRegRewriter.h"
+#include "llvm/CodeGen/CalcSpillWeights.h"
 #include "llvm/CodeGen/LiveIntervalAnalysis.h"
 #include "llvm/CodeGen/LiveStackAnalysis.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
@@ -90,6 +91,7 @@ namespace {
       au.addRequired<LiveIntervals>();
       //au.addRequiredID(SplitCriticalEdgesID);
       au.addRequired<RegisterCoalescer>();
+      au.addRequired<CalculateSpillWeights>();
       au.addRequired<LiveStacks>();
       au.addPreserved<LiveStacks>();
       au.addRequired<MachineLoopInfo>();
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index aee2f20..2b52187 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -3202,6 +3202,19 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
                        X, DAG.getConstant(Mask, VT));
   }
 
+  // Fold (zext (and x, cst)) -> (and (zext x), cst)
+  if (N0.getOpcode() == ISD::AND &&
+      N0.getOperand(1).getOpcode() == ISD::Constant &&
+      N0.getOperand(0).getOpcode() != ISD::TRUNCATE &&
+      N0.getOperand(0).hasOneUse()) {
+    APInt Mask = cast<ConstantSDNode>(N0.getOperand(1))->getAPIntValue();
+    Mask.zext(VT.getSizeInBits());
+    return DAG.getNode(ISD::AND, N->getDebugLoc(), VT,
+                       DAG.getNode(ISD::ZERO_EXTEND, N->getDebugLoc(), VT,
+                                   N0.getOperand(0)),
+                       DAG.getConstant(Mask, VT));
+  }
+
   // fold (zext (load x)) -> (zext (truncate (zextload x)))
   if (ISD::isNON_EXTLoad(N0.getNode()) &&
       ((!LegalOperations && !cast<LoadSDNode>(N0)->isVolatile()) ||
@@ -3278,6 +3291,26 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
     if (SCC.getNode()) return SCC;
   }
 
+  // (zext (shl/srl (zext x), cst)) -> (shl/srl (zext x), (zext cst))
+  if ((N0.getOpcode() == ISD::SHL || N0.getOpcode() == ISD::SRL) &&
+      isa<ConstantSDNode>(N0.getOperand(1)) &&
+      N0.getOperand(0).getOpcode() == ISD::ZERO_EXTEND &&
+      N0.hasOneUse()) {
+    if (N0.getOpcode() == ISD::SHL) {
+      // If the original shl may be shifting out bits, do not perform this
+      // transformation.
+      unsigned ShAmt = cast<ConstantSDNode>(N0.getOperand(1))->getZExtValue();
+      unsigned KnownZeroBits = N0.getOperand(0).getValueType().getSizeInBits() -
+        N0.getOperand(0).getOperand(0).getValueType().getSizeInBits();
+      if (ShAmt > KnownZeroBits)
+        return SDValue();
+    }
+    DebugLoc dl = N->getDebugLoc();
+    return DAG.getNode(N0.getOpcode(), dl, VT,
+                       DAG.getNode(ISD::ZERO_EXTEND, dl, VT, N0.getOperand(0)),
+                       DAG.getNode(ISD::ZERO_EXTEND, dl, VT, N0.getOperand(1)));
+  }
+
   return SDValue();
 }
 
@@ -5196,7 +5229,7 @@ SDValue DAGCombiner::visitSTORE(SDNode *N) {
     // SimplifyDemandedBits, which only works if the value has a single use.
     if (SimplifyDemandedBits(Value,
                              APInt::getLowBitsSet(
-                               Value.getValueSizeInBits(),
+                               Value.getValueType().getScalarType().getSizeInBits(),
                                ST->getMemoryVT().getSizeInBits())))
       return SDValue(N, 0);
   }
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
index d53de34..b2ee8bb 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
@@ -20,10 +20,16 @@
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/Target/TargetSubtarget.h"
+#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
+cl::opt<bool>
+DisableInstScheduling("disable-inst-scheduling",
+                      cl::init(false),
+                      cl::desc("Disable instruction scheduling"));
+
 ScheduleDAGSDNodes::ScheduleDAGSDNodes(MachineFunction &mf)
   : ScheduleDAG(mf) {
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index abf36e5..da55e6b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -48,6 +48,8 @@
 #include <cmath>
 using namespace llvm;
 
+extern cl::opt<bool> DisableInstScheduling;
+
 /// makeVTList - Return an instance of the SDVTList struct initialized with the
 /// specified members.
 static SDVTList makeVTList(const EVT *VTs, unsigned NumVTs) {
@@ -552,6 +554,9 @@ void SelectionDAG::RemoveDeadNodes(SmallVectorImpl<SDNode *> &DeadNodes,
     }
 
     DeallocateNode(N);
+
+    // Remove the ordering of this node.
+    if (Ordering) Ordering->remove(N);
   }
 }
 
@@ -577,6 +582,9 @@ void SelectionDAG::DeleteNodeNotInCSEMaps(SDNode *N) {
   N->DropOperands();
 
   DeallocateNode(N);
+
+  // Remove the ordering of this node.
+  if (Ordering) Ordering->remove(N);
 }
 
 void SelectionDAG::DeallocateNode(SDNode *N) {
@@ -588,6 +596,9 @@ void SelectionDAG::DeallocateNode(SDNode *N) {
   N->NodeType = ISD::DELETED_NODE;
 
   NodeAllocator.Deallocate(AllNodes.remove(N));
+
+  // Remove the ordering of this node.
+  if (Ordering) Ordering->remove(N);
 }
 
 /// RemoveNodeFromCSEMaps - Take the specified node out of the CSE map that
@@ -691,7 +702,9 @@ SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N, SDValue Op,
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, N->getOpcode(), N->getVTList(), Ops, 1);
   AddNodeIDCustom(ID, N);
-  return CSEMap.FindNodeOrInsertPos(ID, InsertPos);
+  SDNode *Node = CSEMap.FindNodeOrInsertPos(ID, InsertPos);
+  if (Ordering) Ordering->remove(Node);
+  return Node;
 }
 
 /// FindModifiedNodeSlot - Find a slot for the specified node if its operands
@@ -708,7 +721,9 @@ SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N,
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, N->getOpcode(), N->getVTList(), Ops, 2);
   AddNodeIDCustom(ID, N);
-  return CSEMap.FindNodeOrInsertPos(ID, InsertPos);
+  SDNode *Node = CSEMap.FindNodeOrInsertPos(ID, InsertPos);
+  if (Ordering) Ordering->remove(Node);
+  return Node;
 }
 
 
@@ -725,7 +740,9 @@ SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N,
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, N->getOpcode(), N->getVTList(), Ops, NumOps);
   AddNodeIDCustom(ID, N);
-  return CSEMap.FindNodeOrInsertPos(ID, InsertPos);
+  SDNode *Node = CSEMap.FindNodeOrInsertPos(ID, InsertPos);
+  if (Ordering) Ordering->remove(Node);
+  return Node;
 }
 
 /// VerifyNode - Sanity check the given node.  Aborts if it is invalid.
@@ -778,8 +795,13 @@ unsigned SelectionDAG::getEVTAlignment(EVT VT) const {
 SelectionDAG::SelectionDAG(TargetLowering &tli, FunctionLoweringInfo &fli)
   : TLI(tli), FLI(fli), DW(0),
     EntryNode(ISD::EntryToken, DebugLoc::getUnknownLoc(),
-    getVTList(MVT::Other)), Root(getEntryNode()) {
+              getVTList(MVT::Other)),
+    Root(getEntryNode()), Ordering(0) {
   AllNodes.push_back(&EntryNode);
+  if (DisableInstScheduling) {
+    Ordering = new NodeOrdering();
+    Ordering->add(&EntryNode);
+  }
 }
 
 void SelectionDAG::init(MachineFunction &mf, MachineModuleInfo *mmi,
@@ -792,6 +814,7 @@ void SelectionDAG::init(MachineFunction &mf, MachineModuleInfo *mmi,
 
 SelectionDAG::~SelectionDAG() {
   allnodes_clear();
+  delete Ordering;
 }
 
 void SelectionDAG::allnodes_clear() {
@@ -817,6 +840,10 @@ void SelectionDAG::clear() {
   EntryNode.UseList = 0;
   AllNodes.push_back(&EntryNode);
   Root = getEntryNode();
+  if (DisableInstScheduling) {
+    Ordering = new NodeOrdering();
+    Ordering->add(&EntryNode);
+  }
 }
 
 SDValue SelectionDAG::getSExtOrTrunc(SDValue Op, DebugLoc DL, EVT VT) {
@@ -877,14 +904,17 @@ SDValue SelectionDAG::getConstant(const ConstantInt &Val, EVT VT, bool isT) {
   ID.AddPointer(&Val);
   void *IP = 0;
   SDNode *N = NULL;
-  if ((N = CSEMap.FindNodeOrInsertPos(ID, IP)))
+  if ((N = CSEMap.FindNodeOrInsertPos(ID, IP))) {
+    if (Ordering) Ordering->add(N);
     if (!VT.isVector())
       return SDValue(N, 0);
+  }
   if (!N) {
     N = NodeAllocator.Allocate<ConstantSDNode>();
     new (N) ConstantSDNode(isT, &Val, EltVT);
     CSEMap.InsertNode(N, IP);
     AllNodes.push_back(N);
+    if (Ordering) Ordering->add(N);
   }
 
   SDValue Result(N, 0);
@@ -921,14 +951,17 @@ SDValue SelectionDAG::getConstantFP(const ConstantFP& V, EVT VT, bool isTarget){
   ID.AddPointer(&V);
   void *IP = 0;
   SDNode *N = NULL;
-  if ((N = CSEMap.FindNodeOrInsertPos(ID, IP)))
+  if ((N = CSEMap.FindNodeOrInsertPos(ID, IP))) {
+    if (Ordering) Ordering->add(N);
     if (!VT.isVector())
       return SDValue(N, 0);
+  }
   if (!N) {
     N = NodeAllocator.Allocate<ConstantFPSDNode>();
     new (N) ConstantFPSDNode(isTarget, &V, EltVT);
     CSEMap.InsertNode(N, IP);
     AllNodes.push_back(N);
+    if (Ordering) Ordering->add(N);
   }
 
   SDValue Result(N, 0);
@@ -983,12 +1016,15 @@ SDValue SelectionDAG::getGlobalAddress(const GlobalValue *GV,
   ID.AddInteger(Offset);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<GlobalAddressSDNode>();
   new (N) GlobalAddressSDNode(Opc, GV, VT, Offset, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -998,12 +1034,15 @@ SDValue SelectionDAG::getFrameIndex(int FI, EVT VT, bool isTarget) {
   AddNodeIDNode(ID, Opc, getVTList(VT), 0, 0);
   ID.AddInteger(FI);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<FrameIndexSDNode>();
   new (N) FrameIndexSDNode(FI, VT, isTarget);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1017,12 +1056,15 @@ SDValue SelectionDAG::getJumpTable(int JTI, EVT VT, bool isTarget,
   ID.AddInteger(JTI);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<JumpTableSDNode>();
   new (N) JumpTableSDNode(JTI, VT, isTarget, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1042,12 +1084,15 @@ SDValue SelectionDAG::getConstantPool(Constant *C, EVT VT,
   ID.AddPointer(C);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<ConstantPoolSDNode>();
   new (N) ConstantPoolSDNode(isTarget, C, VT, Offset, Alignment, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1068,12 +1113,15 @@ SDValue SelectionDAG::getConstantPool(MachineConstantPoolValue *C, EVT VT,
   C->AddSelectionDAGCSEId(ID);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<ConstantPoolSDNode>();
   new (N) ConstantPoolSDNode(isTarget, C, VT, Offset, Alignment, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1082,12 +1130,15 @@ SDValue SelectionDAG::getBasicBlock(MachineBasicBlock *MBB) {
   AddNodeIDNode(ID, ISD::BasicBlock, getVTList(MVT::Other), 0, 0);
   ID.AddPointer(MBB);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<BasicBlockSDNode>();
   new (N) BasicBlockSDNode(MBB);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1103,6 +1154,7 @@ SDValue SelectionDAG::getValueType(EVT VT) {
   N = NodeAllocator.Allocate<VTSDNode>();
   new (N) VTSDNode(VT);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1112,6 +1164,7 @@ SDValue SelectionDAG::getExternalSymbol(const char *Sym, EVT VT) {
   N = NodeAllocator.Allocate<ExternalSymbolSDNode>();
   new (N) ExternalSymbolSDNode(false, Sym, 0, VT);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1124,6 +1177,7 @@ SDValue SelectionDAG::getTargetExternalSymbol(const char *Sym, EVT VT,
   N = NodeAllocator.Allocate<ExternalSymbolSDNode>();
   new (N) ExternalSymbolSDNode(true, Sym, TargetFlags, VT);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1136,6 +1190,7 @@ SDValue SelectionDAG::getCondCode(ISD::CondCode Cond) {
     new (N) CondCodeSDNode(Cond);
     CondCodeNodes[Cond] = N;
     AllNodes.push_back(N);
+    if (Ordering) Ordering->add(N);
   }
   return SDValue(CondCodeNodes[Cond], 0);
 }
@@ -1228,8 +1283,10 @@ SDValue SelectionDAG::getVectorShuffle(EVT VT, DebugLoc dl, SDValue N1,
     ID.AddInteger(MaskVec[i]);
 
   void* IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
 
   // Allocate the mask array for the node out of the BumpPtrAllocator, since
   // SDNode doesn't have access to it.  This memory will be "leaked" when
@@ -1241,6 +1298,7 @@ SDValue SelectionDAG::getVectorShuffle(EVT VT, DebugLoc dl, SDValue N1,
   new (N) ShuffleVectorSDNode(VT, dl, N1, N2, MaskAlloc);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1258,12 +1316,15 @@ SDValue SelectionDAG::getConvertRndSat(EVT VT, DebugLoc dl,
   SDValue Ops[] = { Val, DTy, STy, Rnd, Sat };
   AddNodeIDNode(ID, ISD::CONVERT_RNDSAT, getVTList(VT), &Ops[0], 5);
   void* IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   CvtRndSatSDNode *N = NodeAllocator.Allocate<CvtRndSatSDNode>();
   new (N) CvtRndSatSDNode(VT, dl, Ops, 5, Code);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1272,12 +1333,15 @@ SDValue SelectionDAG::getRegister(unsigned RegNo, EVT VT) {
   AddNodeIDNode(ID, ISD::Register, getVTList(VT), 0, 0);
   ID.AddInteger(RegNo);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<RegisterSDNode>();
   new (N) RegisterSDNode(RegNo, VT);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1289,12 +1353,15 @@ SDValue SelectionDAG::getLabel(unsigned Opcode, DebugLoc dl,
   AddNodeIDNode(ID, Opcode, getVTList(MVT::Other), &Ops[0], 1);
   ID.AddInteger(LabelID);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<LabelSDNode>();
   new (N) LabelSDNode(Opcode, dl, Root, LabelID);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1308,12 +1375,15 @@ SDValue SelectionDAG::getBlockAddress(BlockAddress *BA, EVT VT,
   ID.AddPointer(BA);
   ID.AddInteger(TargetFlags);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<BlockAddressSDNode>();
   new (N) BlockAddressSDNode(Opc, VT, BA, TargetFlags);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -1326,13 +1396,16 @@ SDValue SelectionDAG::getSrcValue(const Value *V) {
   ID.AddPointer(V);
 
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
 
   SDNode *N = NodeAllocator.Allocate<SrcValueSDNode>();
   new (N) SrcValueSDNode(V);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -2243,13 +2316,16 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT) {
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, Opcode, getVTList(VT), 0, 0);
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<SDNode>();
   new (N) SDNode(Opcode, DL, getVTList(VT));
   CSEMap.InsertNode(N, IP);
 
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -2354,6 +2430,10 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL,
     assert(VT.isFloatingPoint() &&
            Operand.getValueType().isFloatingPoint() && "Invalid FP cast!");
     if (Operand.getValueType() == VT) return Operand;  // noop conversion.
+    assert((!VT.isVector() ||
+            VT.getVectorNumElements() ==
+            Operand.getValueType().getVectorNumElements()) &&
+           "Vector element count mismatch!");
     if (Operand.getOpcode() == ISD::UNDEF)
       return getUNDEF(VT);
     break;
@@ -2361,8 +2441,12 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL,
     assert(VT.isInteger() && Operand.getValueType().isInteger() &&
            "Invalid SIGN_EXTEND!");
     if (Operand.getValueType() == VT) return Operand;   // noop extension
-    assert(Operand.getValueType().bitsLT(VT)
-           && "Invalid sext node, dst < src!");
+    assert(Operand.getValueType().getScalarType().bitsLT(VT.getScalarType()) &&
+           "Invalid sext node, dst < src!");
+    assert((!VT.isVector() ||
+            VT.getVectorNumElements() ==
+            Operand.getValueType().getVectorNumElements()) &&
+           "Vector element count mismatch!");
     if (OpOpcode == ISD::SIGN_EXTEND || OpOpcode == ISD::ZERO_EXTEND)
       return getNode(OpOpcode, DL, VT, Operand.getNode()->getOperand(0));
     break;
@@ -2370,8 +2454,12 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL,
     assert(VT.isInteger() && Operand.getValueType().isInteger() &&
            "Invalid ZERO_EXTEND!");
     if (Operand.getValueType() == VT) return Operand;   // noop extension
-    assert(Operand.getValueType().bitsLT(VT)
-           && "Invalid zext node, dst < src!");
+    assert(Operand.getValueType().getScalarType().bitsLT(VT.getScalarType()) &&
+           "Invalid zext node, dst < src!");
+    assert((!VT.isVector() ||
+            VT.getVectorNumElements() ==
+            Operand.getValueType().getVectorNumElements()) &&
+           "Vector element count mismatch!");
     if (OpOpcode == ISD::ZERO_EXTEND)   // (zext (zext x)) -> (zext x)
       return getNode(ISD::ZERO_EXTEND, DL, VT,
                      Operand.getNode()->getOperand(0));
@@ -2380,8 +2468,12 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL,
     assert(VT.isInteger() && Operand.getValueType().isInteger() &&
            "Invalid ANY_EXTEND!");
     if (Operand.getValueType() == VT) return Operand;   // noop extension
-    assert(Operand.getValueType().bitsLT(VT)
-           && "Invalid anyext node, dst < src!");
+    assert(Operand.getValueType().getScalarType().bitsLT(VT.getScalarType()) &&
+           "Invalid anyext node, dst < src!");
+    assert((!VT.isVector() ||
+            VT.getVectorNumElements() ==
+            Operand.getValueType().getVectorNumElements()) &&
+           "Vector element count mismatch!");
     if (OpOpcode == ISD::ZERO_EXTEND || OpOpcode == ISD::SIGN_EXTEND)
       // (ext (zext x)) -> (zext x)  and  (ext (sext x)) -> (sext x)
       return getNode(OpOpcode, DL, VT, Operand.getNode()->getOperand(0));
@@ -2390,14 +2482,19 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL,
     assert(VT.isInteger() && Operand.getValueType().isInteger() &&
            "Invalid TRUNCATE!");
     if (Operand.getValueType() == VT) return Operand;   // noop truncate
-    assert(Operand.getValueType().bitsGT(VT)
-           && "Invalid truncate node, src < dst!");
+    assert(Operand.getValueType().getScalarType().bitsGT(VT.getScalarType()) &&
+           "Invalid truncate node, src < dst!");
+    assert((!VT.isVector() ||
+            VT.getVectorNumElements() ==
+            Operand.getValueType().getVectorNumElements()) &&
+           "Vector element count mismatch!");
     if (OpOpcode == ISD::TRUNCATE)
       return getNode(ISD::TRUNCATE, DL, VT, Operand.getNode()->getOperand(0));
     else if (OpOpcode == ISD::ZERO_EXTEND || OpOpcode == ISD::SIGN_EXTEND ||
              OpOpcode == ISD::ANY_EXTEND) {
       // If the source is smaller than the dest, we still need an extend.
-      if (Operand.getNode()->getOperand(0).getValueType().bitsLT(VT))
+      if (Operand.getNode()->getOperand(0).getValueType().getScalarType()
+            .bitsLT(VT.getScalarType()))
         return getNode(OpOpcode, DL, VT, Operand.getNode()->getOperand(0));
       else if (Operand.getNode()->getOperand(0).getValueType().bitsGT(VT))
         return getNode(ISD::TRUNCATE, DL, VT, Operand.getNode()->getOperand(0));
@@ -2452,8 +2549,10 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL,
     SDValue Ops[1] = { Operand };
     AddNodeIDNode(ID, Opcode, VTs, Ops, 1);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+      if (Ordering) Ordering->add(E);
       return SDValue(E, 0);
+    }
     N = NodeAllocator.Allocate<UnarySDNode>();
     new (N) UnarySDNode(Opcode, DL, VTs, Operand);
     CSEMap.InsertNode(N, IP);
@@ -2463,6 +2562,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL,
   }
 
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -2870,8 +2970,10 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTs, Ops, 2);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+      if (Ordering) Ordering->add(E);
       return SDValue(E, 0);
+    }
     N = NodeAllocator.Allocate<BinarySDNode>();
     new (N) BinarySDNode(Opcode, DL, VTs, N1, N2);
     CSEMap.InsertNode(N, IP);
@@ -2881,6 +2983,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
   }
 
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -2947,8 +3050,10 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTs, Ops, 3);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+      if (Ordering) Ordering->add(E);
       return SDValue(E, 0);
+    }
     N = NodeAllocator.Allocate<TernarySDNode>();
     new (N) TernarySDNode(Opcode, DL, VTs, N1, N2, N3);
     CSEMap.InsertNode(N, IP);
@@ -2956,7 +3061,9 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     N = NodeAllocator.Allocate<TernarySDNode>();
     new (N) TernarySDNode(Opcode, DL, VTs, N1, N2, N3);
   }
+
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -3552,12 +3659,14 @@ SDValue SelectionDAG::getAtomic(unsigned Opcode, DebugLoc dl, EVT MemVT,
   void* IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<AtomicSDNode>(E)->refineAlignment(MMO);
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode* N = NodeAllocator.Allocate<AtomicSDNode>();
   new (N) AtomicSDNode(Opcode, dl, VTs, MemVT, Chain, Ptr, Cmp, Swp, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3615,12 +3724,14 @@ SDValue SelectionDAG::getAtomic(unsigned Opcode, DebugLoc dl, EVT MemVT,
   void* IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<AtomicSDNode>(E)->refineAlignment(MMO);
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode* N = NodeAllocator.Allocate<AtomicSDNode>();
   new (N) AtomicSDNode(Opcode, dl, VTs, MemVT, Chain, Ptr, Val, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3693,6 +3804,7 @@ SelectionDAG::getMemIntrinsicNode(unsigned Opcode, DebugLoc dl, SDVTList VTList,
     void *IP = 0;
     if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
       cast<MemIntrinsicSDNode>(E)->refineAlignment(MMO);
+      if (Ordering) Ordering->add(E);
       return SDValue(E, 0);
     }
 
@@ -3704,6 +3816,7 @@ SelectionDAG::getMemIntrinsicNode(unsigned Opcode, DebugLoc dl, SDVTList VTList,
     new (N) MemIntrinsicSDNode(Opcode, dl, VTList, Ops, NumOps, MemVT, MMO);
   }
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3743,16 +3856,15 @@ SelectionDAG::getLoad(ISD::MemIndexedMode AM, DebugLoc dl,
     assert(VT == MemVT && "Non-extending load from different memory type!");
   } else {
     // Extending load.
-    if (VT.isVector())
-      assert(MemVT.getVectorNumElements() == VT.getVectorNumElements() &&
-             "Invalid vector extload!");
-    else
-      assert(MemVT.bitsLT(VT) &&
-             "Should only be an extending load, not truncating!");
-    assert((ExtType == ISD::EXTLOAD || VT.isInteger()) &&
-           "Cannot sign/zero extend a FP/Vector load!");
+    assert(MemVT.getScalarType().bitsLT(VT.getScalarType()) &&
+           "Should only be an extending load, not truncating!");
     assert(VT.isInteger() == MemVT.isInteger() &&
            "Cannot convert from FP to Int or Int -> FP!");
+    assert(VT.isVector() == MemVT.isVector() &&
+           "Cannot use an extending load to convert to or from a vector!");
+    assert((!VT.isVector() ||
+            VT.getVectorNumElements() == MemVT.getVectorNumElements()) &&
+           "Cannot use an extending load to change the number of vector elements!");
   }
 
   bool Indexed = AM != ISD::UNINDEXED;
@@ -3769,12 +3881,14 @@ SelectionDAG::getLoad(ISD::MemIndexedMode AM, DebugLoc dl,
   void *IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<LoadSDNode>(E)->refineAlignment(MMO);
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode *N = NodeAllocator.Allocate<LoadSDNode>();
   new (N) LoadSDNode(Ops, dl, VTs, AM, ExtType, MemVT, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3845,12 +3959,14 @@ SDValue SelectionDAG::getStore(SDValue Chain, DebugLoc dl, SDValue Val,
   void *IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<StoreSDNode>(E)->refineAlignment(MMO);
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode *N = NodeAllocator.Allocate<StoreSDNode>();
   new (N) StoreSDNode(Ops, dl, VTs, ISD::UNINDEXED, false, VT, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3885,10 +4001,15 @@ SDValue SelectionDAG::getTruncStore(SDValue Chain, DebugLoc dl, SDValue Val,
   if (VT == SVT)
     return getStore(Chain, dl, Val, Ptr, MMO);
 
-  assert(VT.bitsGT(SVT) && "Not a truncation?");
+  assert(SVT.getScalarType().bitsLT(VT.getScalarType()) &&
+         "Should only be a truncating store, not extending!");
   assert(VT.isInteger() == SVT.isInteger() &&
          "Can't do FP-INT conversion!");
-
+  assert(VT.isVector() == SVT.isVector() &&
+         "Cannot use trunc store to convert to or from a vector!");
+  assert((!VT.isVector() ||
+          VT.getVectorNumElements() == SVT.getVectorNumElements()) &&
+         "Cannot use trunc store to change the number of vector elements!");
 
   SDVTList VTs = getVTList(MVT::Other);
   SDValue Undef = getUNDEF(Ptr.getValueType());
@@ -3900,12 +4021,14 @@ SDValue SelectionDAG::getTruncStore(SDValue Chain, DebugLoc dl, SDValue Val,
   void *IP = 0;
   if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
     cast<StoreSDNode>(E)->refineAlignment(MMO);
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
   }
   SDNode *N = NodeAllocator.Allocate<StoreSDNode>();
   new (N) StoreSDNode(Ops, dl, VTs, ISD::UNINDEXED, true, SVT, MMO);
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3922,14 +4045,17 @@ SelectionDAG::getIndexedStore(SDValue OrigStore, DebugLoc dl, SDValue Base,
   ID.AddInteger(ST->getMemoryVT().getRawBits());
   ID.AddInteger(ST->getRawSubclassData());
   void *IP = 0;
-  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+  if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+    if (Ordering) Ordering->add(E);
     return SDValue(E, 0);
+  }
   SDNode *N = NodeAllocator.Allocate<StoreSDNode>();
   new (N) StoreSDNode(Ops, dl, VTs, AM,
                       ST->isTruncatingStore(), ST->getMemoryVT(),
                       ST->getMemOperand());
   CSEMap.InsertNode(N, IP);
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
   return SDValue(N, 0);
 }
 
@@ -3995,8 +4121,10 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     AddNodeIDNode(ID, Opcode, VTs, Ops, NumOps);
     void *IP = 0;
 
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+      if (Ordering) Ordering->add(E);
       return SDValue(E, 0);
+    }
 
     N = NodeAllocator.Allocate<SDNode>();
     new (N) SDNode(Opcode, DL, VTs, Ops, NumOps);
@@ -4007,6 +4135,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
   }
 
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -4062,8 +4191,10 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, SDVTList VTList,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTList, Ops, NumOps);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+      if (Ordering) Ordering->add(E);
       return SDValue(E, 0);
+    }
     if (NumOps == 1) {
       N = NodeAllocator.Allocate<UnarySDNode>();
       new (N) UnarySDNode(Opcode, DL, VTList, Ops[0]);
@@ -4094,6 +4225,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, SDVTList VTList,
     }
   }
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -4177,7 +4309,7 @@ SDVTList SelectionDAG::getVTList(EVT VT1, EVT VT2, EVT VT3, EVT VT4) {
                           I->VTs[2] == VT3 && I->VTs[3] == VT4)
       return *I;
 
-  EVT *Array = Allocator.Allocate<EVT>(3);
+  EVT *Array = Allocator.Allocate<EVT>(4);
   Array[0] = VT1;
   Array[1] = VT2;
   Array[2] = VT3;
@@ -4556,8 +4688,10 @@ SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc,
   if (VTs.VTs[VTs.NumVTs-1] != MVT::Flag) {
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opc, VTs, Ops, NumOps);
-    if (SDNode *ON = CSEMap.FindNodeOrInsertPos(ID, IP))
+    if (SDNode *ON = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+      if (Ordering) Ordering->add(ON);
       return ON;
+    }
   }
 
   if (!RemoveNodeFromCSEMaps(N))
@@ -4621,6 +4755,7 @@ SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc,
 
   if (IP)
     CSEMap.InsertNode(N, IP);   // Memoize the new node.
+  if (Ordering) Ordering->add(N);
   return N;
 }
 
@@ -4759,8 +4894,10 @@ SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc DL, SDVTList VTs,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, ~Opcode, VTs, Ops, NumOps);
     IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+      if (Ordering) Ordering->add(E);
       return cast<MachineSDNode>(E);
+    }
   }
 
   // Allocate a new MachineSDNode.
@@ -4782,6 +4919,7 @@ SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc DL, SDVTList VTs,
     CSEMap.InsertNode(N, IP);
 
   AllNodes.push_back(N);
+  if (Ordering) Ordering->add(N);
 #ifndef NDEBUG
   VerifyNode(N);
 #endif
@@ -4818,8 +4956,10 @@ SDNode *SelectionDAG::getNodeIfExists(unsigned Opcode, SDVTList VTList,
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTList, Ops, NumOps);
     void *IP = 0;
-    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP))
+    if (SDNode *E = CSEMap.FindNodeOrInsertPos(ID, IP)) {
+      if (Ordering) Ordering->add(E);
       return E;
+    }
   }
   return NULL;
 }
@@ -5986,6 +6126,9 @@ void SelectionDAG::dump() const {
   errs() << "\n\n";
 }
 
+void SelectionDAG::NodeOrdering::dump() const {
+}
+
 void SDNode::printr(raw_ostream &OS, const SelectionDAG *G) const {
   print_types(OS, G);
   print_details(OS, G);
@@ -6126,4 +6269,3 @@ bool ShuffleVectorSDNode::isSplatMask(const int *Mask, EVT VT) {
       return false;
   return true;
 }
-
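The SelectionDAG.cpp hunks above all apply the same pattern: consult the CSE folding set, and record the node in the optional `Ordering` side table on both a hit and a miss, so every node ends up tagged with the source instruction being processed when it was last requested. A minimal sketch of that pattern — with hypothetical simplified types standing in for LLVM's `CSEMap`/`NodeOrdering`, not the real API:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-ins for SelectionDAG's CSEMap and NodeOrdering.
struct Node { std::string key; };

class Dag {
  std::map<std::string, Node*> cse;   // plays the role of CSEMap
  std::map<Node*, unsigned> order;    // plays the role of NodeOrdering
  unsigned cur = 0;                   // current source-instruction number
public:
  void newInst() { ++cur; }           // analogous to DAG.NewInst()
  Node *getNode(const std::string &key) {
    auto it = cse.find(key);
    if (it != cse.end()) {            // CSE hit: still record the ordering
      order[it->second] = cur;
      return it->second;
    }
    Node *n = new Node{key};          // miss: allocate, memoize, record
    cse[key] = n;                     // (leaked here; LLVM pools allocations)
    order[n] = cur;
    return n;
  }
  unsigned orderOf(Node *n) const { return order.at(n); }
};
```

The point of updating the table on a CSE hit as well is that a reused node should be attributed to the latest instruction that asked for it, not only the one that first created it.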
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 2a8b57c..7568384 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -583,6 +583,9 @@ void SelectionDAGBuilder::visit(Instruction &I) {
 }
 
 void SelectionDAGBuilder::visit(unsigned Opcode, User &I) {
+  // Tell the DAG that we're processing a new instruction.
+  DAG.NewInst();
+
   // Note: this doesn't use InstVisitor, because it has to work with
   // ConstantExpr's in addition to instructions.
   switch (Opcode) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
index 93b56e1..a640c7d 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
@@ -390,7 +390,7 @@ static void ResetDebugLoc(SelectionDAGBuilder *SDB,
                           FastISel *FastIS) {
   SDB->setCurDebugLoc(DebugLoc::getUnknownLoc());
   if (FastIS)
-    SDB->setCurDebugLoc(DebugLoc::getUnknownLoc());
+    FastIS->setCurDebugLoc(DebugLoc::getUnknownLoc());
 }
 
 void SelectionDAGISel::SelectBasicBlock(BasicBlock *LLVMBB,
diff --git a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
index 810fabe..ed407eb 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
@@ -2622,114 +2622,6 @@ void SimpleRegisterCoalescing::releaseMemory() {
   ReMatDefs.clear();
 }
 
-/// Returns true if the given live interval is zero length.
-static bool isZeroLengthInterval(LiveInterval *li, LiveIntervals *li_) {
-  for (LiveInterval::Ranges::const_iterator
-         i = li->ranges.begin(), e = li->ranges.end(); i != e; ++i)
-    if (i->end.getPrevIndex() > i->start)
-      return false;
-  return true;
-}
-
-
-void SimpleRegisterCoalescing::CalculateSpillWeights() {
-  SmallSet<unsigned, 4> Processed;
-  for (MachineFunction::iterator mbbi = mf_->begin(), mbbe = mf_->end();
-       mbbi != mbbe; ++mbbi) {
-    MachineBasicBlock* MBB = mbbi;
-    SlotIndex MBBEnd = li_->getMBBEndIdx(MBB);
-    MachineLoop* loop = loopInfo->getLoopFor(MBB);
-    unsigned loopDepth = loop ? loop->getLoopDepth() : 0;
-    bool isExiting = loop ? loop->isLoopExiting(MBB) : false;
-
-    for (MachineBasicBlock::const_iterator mii = MBB->begin(), mie = MBB->end();
-         mii != mie; ++mii) {
-      const MachineInstr *MI = mii;
-      if (tii_->isIdentityCopy(*MI))
-        continue;
-
-      if (MI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF)
-        continue;
-
-      for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
-        const MachineOperand &mopi = MI->getOperand(i);
-        if (!mopi.isReg() || mopi.getReg() == 0)
-          continue;
-        unsigned Reg = mopi.getReg();
-        if (!TargetRegisterInfo::isVirtualRegister(mopi.getReg()))
-          continue;
-        // Multiple uses of reg by the same instruction. It should not
-        // contribute to spill weight again.
-        if (!Processed.insert(Reg))
-          continue;
-
-        bool HasDef = mopi.isDef();
-        bool HasUse = !HasDef;
-        for (unsigned j = i+1; j != e; ++j) {
-          const MachineOperand &mopj = MI->getOperand(j);
-          if (!mopj.isReg() || mopj.getReg() != Reg)
-            continue;
-          HasDef |= mopj.isDef();
-          HasUse |= mopj.isUse();
-          if (HasDef && HasUse)
-            break;
-        }
-
-        LiveInterval &RegInt = li_->getInterval(Reg);
-        float Weight = li_->getSpillWeight(HasDef, HasUse, loopDepth);
-        if (HasDef && isExiting) {
-          // Looks like this is a loop count variable update.
-          SlotIndex DefIdx = li_->getInstructionIndex(MI).getDefIndex();
-          const LiveRange *DLR =
-            li_->getInterval(Reg).getLiveRangeContaining(DefIdx);
-          if (DLR->end > MBBEnd)
-            Weight *= 3.0F;
-        }
-        RegInt.weight += Weight;
-      }
-      Processed.clear();
-    }
-  }
-
-  for (LiveIntervals::iterator I = li_->begin(), E = li_->end(); I != E; ++I) {
-    LiveInterval &LI = *I->second;
-    if (TargetRegisterInfo::isVirtualRegister(LI.reg)) {
-      // If the live interval length is essentially zero, i.e. in every live
-      // range the use follows def immediately, it doesn't make sense to spill
-      // it and hope it will be easier to allocate for this li.
-      if (isZeroLengthInterval(&LI, li_)) {
-        LI.weight = HUGE_VALF;
-        continue;
-      }
-
-      bool isLoad = false;
-      SmallVector<LiveInterval*, 4> SpillIs;
-      if (li_->isReMaterializable(LI, SpillIs, isLoad)) {
-        // If all of the definitions of the interval are re-materializable,
-        // it is a preferred candidate for spilling. If non of the defs are
-        // loads, then it's potentially very cheap to re-materialize.
-        // FIXME: this gets much more complicated once we support non-trivial
-        // re-materialization.
-        if (isLoad)
-          LI.weight *= 0.9F;
-        else
-          LI.weight *= 0.5F;
-      }
-
-      // Slightly prefer live interval that has been assigned a preferred reg.
-      std::pair<unsigned, unsigned> Hint = mri_->getRegAllocationHint(LI.reg);
-      if (Hint.first || Hint.second)
-        LI.weight *= 1.01F;
-
-      // Divide the weight of the interval by its size.  This encourages
-      // spilling of intervals that are large and have few uses, and
-      // discourages spilling of small intervals with many uses.
-      LI.weight /= li_->getApproximateInstructionCount(LI) * InstrSlots::NUM;
-    }
-  }
-}
-
-
 bool SimpleRegisterCoalescing::runOnMachineFunction(MachineFunction &fn) {
   mf_ = &fn;
   mri_ = &fn.getRegInfo();
@@ -2860,8 +2752,6 @@ bool SimpleRegisterCoalescing::runOnMachineFunction(MachineFunction &fn) {
     }
   }
 
-  CalculateSpillWeights();
-
   DEBUG(dump());
   return true;
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h
index 78f8a9a..605a740 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h
+++ b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.h
@@ -244,10 +244,6 @@ namespace llvm {
     MachineOperand *lastRegisterUse(SlotIndex Start, SlotIndex End,
                                     unsigned Reg, SlotIndex &LastUseIdx) const;
 
-    /// CalculateSpillWeights - Compute spill weights for all virtual register
-    /// live intervals.
-    void CalculateSpillWeights();
-
     void printRegName(unsigned reg) const;
   };
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp b/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
index b53ebec..bf58902 100644
--- a/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
@@ -90,7 +90,8 @@ namespace {
                               SmallSetVector<MachineBasicBlock*, 8> &Succs);
     bool TailDuplicateBlocks(MachineFunction &MF);
     bool TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
-                       SmallVector<MachineBasicBlock*, 8> &TDBBs);
+                       SmallVector<MachineBasicBlock*, 8> &TDBBs,
+                       SmallVector<MachineInstr*, 16> &Copies);
     void RemoveDeadBlock(MachineBasicBlock *MBB);
   };
 
@@ -194,7 +195,8 @@ bool TailDuplicatePass::TailDuplicateBlocks(MachineFunction &MF) {
                                                 MBB->succ_end());
 
     SmallVector<MachineBasicBlock*, 8> TDBBs;
-    if (TailDuplicate(MBB, MF, TDBBs)) {
+    SmallVector<MachineInstr*, 16> Copies;
+    if (TailDuplicate(MBB, MF, TDBBs, Copies)) {
       ++NumTails;
 
       // TailBB's immediate successors are now successors of those predecessors
@@ -251,6 +253,21 @@ bool TailDuplicatePass::TailDuplicateBlocks(MachineFunction &MF) {
         SSAUpdateVals.clear();
       }
 
+      // Eliminate some of the copies inserted by tail duplication to maintain
+      // SSA form.
+      for (unsigned i = 0, e = Copies.size(); i != e; ++i) {
+        MachineInstr *Copy = Copies[i];
+        unsigned Src, Dst, SrcSR, DstSR;
+        if (TII->isMoveInstr(*Copy, Src, Dst, SrcSR, DstSR)) {
+          MachineRegisterInfo::use_iterator UI = MRI->use_begin(Src);
+          if (++UI == MRI->use_end()) {
+            // Copy is the only use. Do trivial copy propagation here.
+            MRI->replaceRegWith(Dst, Src);
+            Copy->eraseFromParent();
+          }
+        }
+      }
+
       if (PreRegAlloc && TailDupVerify)
         VerifyPHIs(MF, false);
       MadeChange = true;
@@ -418,7 +435,8 @@ TailDuplicatePass::UpdateSuccessorsPHIs(MachineBasicBlock *FromBB, bool isDead,
 /// of its predecessors.
 bool
 TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
-                                 SmallVector<MachineBasicBlock*, 8> &TDBBs) {
+                                 SmallVector<MachineBasicBlock*, 8> &TDBBs,
+                                 SmallVector<MachineInstr*, 16> &Copies) {
   // Don't try to tail-duplicate single-block loops.
   if (TailBB->isSuccessor(TailBB))
     return false;
@@ -502,7 +520,7 @@ TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
 
     // Clone the contents of TailBB into PredBB.
     DenseMap<unsigned, unsigned> LocalVRMap;
-    SmallVector<std::pair<unsigned,unsigned>, 4> Copies;
+    SmallVector<std::pair<unsigned,unsigned>, 4> CopyInfos;
     MachineBasicBlock::iterator I = TailBB->begin();
     while (I != TailBB->end()) {
       MachineInstr *MI = &*I;
@@ -510,7 +528,7 @@ TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
       if (MI->getOpcode() == TargetInstrInfo::PHI) {
         // Replace the uses of the def of the PHI with the register coming
         // from PredBB.
-        ProcessPHI(MI, TailBB, PredBB, LocalVRMap, Copies);
+        ProcessPHI(MI, TailBB, PredBB, LocalVRMap, CopyInfos);
       } else {
         // Replace def of virtual registers with new registers, and update
         // uses with PHI source register or the new registers.
@@ -518,9 +536,12 @@ TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
       }
     }
     MachineBasicBlock::iterator Loc = PredBB->getFirstTerminator();
-    for (unsigned i = 0, e = Copies.size(); i != e; ++i) {
-      const TargetRegisterClass *RC = MRI->getRegClass(Copies[i].first);
-      TII->copyRegToReg(*PredBB, Loc, Copies[i].first, Copies[i].second, RC, RC);
+    for (unsigned i = 0, e = CopyInfos.size(); i != e; ++i) {
+      const TargetRegisterClass *RC = MRI->getRegClass(CopyInfos[i].first);
+      TII->copyRegToReg(*PredBB, Loc, CopyInfos[i].first,
+                        CopyInfos[i].second, RC,RC);
+      MachineInstr *CopyMI = prior(Loc);
+      Copies.push_back(CopyMI);
     }
     NumInstrDups += TailBB->size() - 1; // subtract one for removed branch
 
@@ -553,14 +574,14 @@ TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
           << "From MBB: " << *TailBB);
     if (PreRegAlloc) {
       DenseMap<unsigned, unsigned> LocalVRMap;
-      SmallVector<std::pair<unsigned,unsigned>, 4> Copies;
+      SmallVector<std::pair<unsigned,unsigned>, 4> CopyInfos;
       MachineBasicBlock::iterator I = TailBB->begin();
       // Process PHI instructions first.
       while (I != TailBB->end() && I->getOpcode() == TargetInstrInfo::PHI) {
         // Replace the uses of the def of the PHI with the register coming
         // from PredBB.
         MachineInstr *MI = &*I++;
-        ProcessPHI(MI, TailBB, PrevBB, LocalVRMap, Copies);
+        ProcessPHI(MI, TailBB, PrevBB, LocalVRMap, CopyInfos);
         if (MI->getParent())
           MI->eraseFromParent();
       }
@@ -574,9 +595,12 @@ TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
         MI->eraseFromParent();
       }
       MachineBasicBlock::iterator Loc = PrevBB->getFirstTerminator();
-      for (unsigned i = 0, e = Copies.size(); i != e; ++i) {
-        const TargetRegisterClass *RC = MRI->getRegClass(Copies[i].first);
-        TII->copyRegToReg(*PrevBB, Loc, Copies[i].first, Copies[i].second, RC, RC);
+      for (unsigned i = 0, e = CopyInfos.size(); i != e; ++i) {
+        const TargetRegisterClass *RC = MRI->getRegClass(CopyInfos[i].first);
+        TII->copyRegToReg(*PrevBB, Loc, CopyInfos[i].first,
+                          CopyInfos[i].second, RC, RC);
+        MachineInstr *CopyMI = prior(Loc);
+        Copies.push_back(CopyMI);
       }
     } else {
       // No PHIs to worry about, just splice the instructions over.
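The copy-elimination loop added to `TailDuplicateBlocks` above deletes a copy when its source register has exactly one use (the copy itself), rewriting the destination to the source. A self-contained sketch of that trivial copy propagation on a made-up mini-IR (the real code uses `MachineRegisterInfo::use_begin`/`replaceRegWith`; the types here are hypothetical stand-ins):

```cpp
#include <cassert>
#include <vector>

// Hypothetical mini-IR: an instruction is a list of register operands;
// index 0 is the def. A "copy" has exactly two operands: dst, src.
struct Inst { std::vector<unsigned> ops; bool isCopy = false; };

// Count uses (non-def occurrences) of reg across the function.
static unsigned countUses(const std::vector<Inst> &f, unsigned reg) {
  unsigned n = 0;
  for (const Inst &i : f)
    for (std::size_t k = 1; k < i.ops.size(); ++k)
      if (i.ops[k] == reg) ++n;
  return n;
}

// Delete copies whose source is used only by the copy itself,
// rewriting the destination to the source (trivial copy propagation).
static void propagateCopies(std::vector<Inst> &f) {
  for (std::size_t i = 0; i < f.size(); ) {
    if (f[i].isCopy && countUses(f, f[i].ops[1]) == 1) {
      unsigned dst = f[i].ops[0], src = f[i].ops[1];
      f.erase(f.begin() + i);
      for (Inst &j : f)               // like MRI->replaceRegWith(dst, src)
        for (unsigned &r : j.ops)
          if (r == dst) r = src;
    } else {
      ++i;
    }
  }
}
```

As in the patch, the single-use test is what makes the rewrite safe: if the source had other uses, folding the copy away could extend its live range past another definition.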
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
index 6d781c7..26afa54 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
@@ -208,7 +208,7 @@ ExecutionEngine *JIT::createJIT(ModuleProvider *MP,
                                 JITMemoryManager *JMM,
                                 CodeGenOpt::Level OptLevel,
                                 bool GVsWithCode,
-				CodeModel::Model CMM) {
+                                CodeModel::Model CMM) {
   // Make sure we can resolve symbols in the program as well. The zero arg
   // to the function tells DynamicLibrary to load the program, not a library.
   if (sys::DynamicLibrary::LoadLibraryPermanently(0, ErrorStr))
@@ -681,7 +681,7 @@ void *JIT::getOrEmitGlobalVariable(const GlobalVariable *GV) {
   if (Ptr) return Ptr;
 
   // If the global is external, just remember the address.
-  if (GV->isDeclaration()) {
+  if (GV->isDeclaration() || GV->hasAvailableExternallyLinkage()) {
 #if HAVE___DSO_HANDLE
     if (GV->getName() == "__dso_handle")
       return (void*)&__dso_handle;
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDwarfEmitter.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDwarfEmitter.cpp
index f2b28ad..0193486 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDwarfEmitter.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITDwarfEmitter.cpp
@@ -175,7 +175,6 @@ struct KeyInfo {
   static inline unsigned getTombstoneKey() { return -2U; }
   static unsigned getHashValue(const unsigned &Key) { return Key; }
   static bool isEqual(unsigned LHS, unsigned RHS) { return LHS == RHS; }
-  static bool isPod() { return true; }
 };
 
 /// ActionEntry - Structure describing an entry in the actions table.
diff --git a/libclamav/c++/llvm/lib/Support/raw_ostream.cpp b/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
index 31451cc..0c90e77 100644
--- a/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
+++ b/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
@@ -209,8 +209,7 @@ raw_ostream &raw_ostream::operator<<(const void *P) {
 }
 
 raw_ostream &raw_ostream::operator<<(double N) {
-  this->operator<<(ftostr(N));
-  return *this;
+  return this->operator<<(ftostr(N));
 }
 
 
diff --git a/libclamav/c++/llvm/lib/System/Host.cpp b/libclamav/c++/llvm/lib/System/Host.cpp
index 5b78be3..79897e4 100644
--- a/libclamav/c++/llvm/lib/System/Host.cpp
+++ b/libclamav/c++/llvm/lib/System/Host.cpp
@@ -103,11 +103,8 @@ static void DetectX86FamilyModel(unsigned EAX, unsigned &Family, unsigned &Model
     Model += ((EAX >> 16) & 0xf) << 4; // Bits 16 - 19
   }
 }
-#endif
-
 
 std::string sys::getHostCPUName() {
-#if defined(__x86_64__) || defined(__i386__) || defined (_MSC_VER)
   unsigned EAX = 0, EBX = 0, ECX = 0, EDX = 0;
   if (GetX86CpuIDAndInfo(0x1, &EAX, &EBX, &ECX, &EDX))
     return "generic";
@@ -295,7 +292,10 @@ std::string sys::getHostCPUName() {
       return "generic";
     }
   }
-#endif
-
   return "generic";
 }
+#else
+std::string sys::getHostCPUName() {
+  return "generic";
+}
+#endif
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
index ac6b203..655c762 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -1474,17 +1474,24 @@ ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) {
   }
 }
 
-static SDValue LowerMEMBARRIER(SDValue Op, SelectionDAG &DAG) {
+static SDValue LowerMEMBARRIER(SDValue Op, SelectionDAG &DAG,
+                               const ARMSubtarget *Subtarget) {
   DebugLoc dl = Op.getDebugLoc();
   SDValue Op5 = Op.getOperand(5);
   SDValue Res;
   unsigned isDeviceBarrier = cast<ConstantSDNode>(Op5)->getZExtValue();
   if (isDeviceBarrier) {
-    Res = DAG.getNode(ARMISD::SYNCBARRIER, dl, MVT::Other,
-                              Op.getOperand(0));
+    if (Subtarget->hasV7Ops())
+      Res = DAG.getNode(ARMISD::SYNCBARRIER, dl, MVT::Other, Op.getOperand(0));
+    else
+      Res = DAG.getNode(ARMISD::SYNCBARRIER, dl, MVT::Other, Op.getOperand(0),
+                        DAG.getConstant(0, MVT::i32));
   } else {
-    Res = DAG.getNode(ARMISD::MEMBARRIER, dl, MVT::Other,
-                              Op.getOperand(0));
+    if (Subtarget->hasV7Ops())
+      Res = DAG.getNode(ARMISD::MEMBARRIER, dl, MVT::Other, Op.getOperand(0));
+    else
+      Res = DAG.getNode(ARMISD::MEMBARRIER, dl, MVT::Other, Op.getOperand(0),
+                        DAG.getConstant(0, MVT::i32));
   }
   return Res;
 }
@@ -2991,7 +2998,7 @@ SDValue ARMTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
   case ISD::BR_JT:         return LowerBR_JT(Op, DAG);
   case ISD::DYNAMIC_STACKALLOC: return LowerDYNAMIC_STACKALLOC(Op, DAG);
   case ISD::VASTART:       return LowerVASTART(Op, DAG, VarArgsFrameIndex);
-  case ISD::MEMBARRIER:    return LowerMEMBARRIER(Op, DAG);
+  case ISD::MEMBARRIER:    return LowerMEMBARRIER(Op, DAG, Subtarget);
   case ISD::SINT_TO_FP:
   case ISD::UINT_TO_FP:    return LowerINT_TO_FP(Op, DAG);
   case ISD::FP_TO_SINT:
@@ -3055,13 +3062,23 @@ ARMTargetLowering::EmitAtomicCmpSwap(MachineInstr *MI,
     .createVirtualRegister(ARM::GPRRegisterClass);
   const TargetInstrInfo *TII = getTargetMachine().getInstrInfo();
   DebugLoc dl = MI->getDebugLoc();
+  bool isThumb2 = Subtarget->isThumb2();
 
   unsigned ldrOpc, strOpc;
   switch (Size) {
   default: llvm_unreachable("unsupported size for AtomicCmpSwap!");
-  case 1: ldrOpc = ARM::LDREXB; strOpc = ARM::STREXB; break;
-  case 2: ldrOpc = ARM::LDREXH; strOpc = ARM::STREXH; break;
-  case 4: ldrOpc = ARM::LDREX;  strOpc = ARM::STREX;  break;
+  case 1:
+    ldrOpc = isThumb2 ? ARM::t2LDREXB : ARM::LDREXB;
+    strOpc = isThumb2 ? ARM::t2STREXB : ARM::STREXB;
+    break;
+  case 2:
+    ldrOpc = isThumb2 ? ARM::t2LDREXH : ARM::LDREXH;
+    strOpc = isThumb2 ? ARM::t2STREXH : ARM::STREXH;
+    break;
+  case 4:
+    ldrOpc = isThumb2 ? ARM::t2LDREX : ARM::LDREX;
+    strOpc = isThumb2 ? ARM::t2STREX : ARM::STREX;
+    break;
   }
 
   MachineFunction *MF = BB->getParent();
@@ -3088,10 +3105,10 @@ ARMTargetLowering::EmitAtomicCmpSwap(MachineInstr *MI,
   //   bne exitMBB
   BB = loop1MBB;
   AddDefaultPred(BuildMI(BB, dl, TII->get(ldrOpc), dest).addReg(ptr));
-  AddDefaultPred(BuildMI(BB, dl, TII->get(ARM::CMPrr))
+  AddDefaultPred(BuildMI(BB, dl, TII->get(isThumb2 ? ARM::t2CMPrr : ARM::CMPrr))
                  .addReg(dest).addReg(oldval));
-  BuildMI(BB, dl, TII->get(ARM::Bcc)).addMBB(exitMBB).addImm(ARMCC::NE)
-    .addReg(ARM::CPSR);
+  BuildMI(BB, dl, TII->get(isThumb2 ? ARM::t2Bcc : ARM::Bcc))
+    .addMBB(exitMBB).addImm(ARMCC::NE).addReg(ARM::CPSR);
   BB->addSuccessor(loop2MBB);
   BB->addSuccessor(exitMBB);
 
@@ -3102,10 +3119,10 @@ ARMTargetLowering::EmitAtomicCmpSwap(MachineInstr *MI,
   BB = loop2MBB;
   AddDefaultPred(BuildMI(BB, dl, TII->get(strOpc), scratch).addReg(newval)
                  .addReg(ptr));
-  AddDefaultPred(BuildMI(BB, dl, TII->get(ARM::CMPri))
+  AddDefaultPred(BuildMI(BB, dl, TII->get(isThumb2 ? ARM::t2CMPri : ARM::CMPri))
                  .addReg(scratch).addImm(0));
-  BuildMI(BB, dl, TII->get(ARM::Bcc)).addMBB(loop1MBB).addImm(ARMCC::NE)
-    .addReg(ARM::CPSR);
+  BuildMI(BB, dl, TII->get(isThumb2 ? ARM::t2Bcc : ARM::Bcc))
+    .addMBB(loop1MBB).addImm(ARMCC::NE).addReg(ARM::CPSR);
   BB->addSuccessor(loop1MBB);
   BB->addSuccessor(exitMBB);
 
@@ -3118,11 +3135,85 @@ ARMTargetLowering::EmitAtomicCmpSwap(MachineInstr *MI,
 MachineBasicBlock *
 ARMTargetLowering::EmitAtomicBinary(MachineInstr *MI, MachineBasicBlock *BB,
                                     unsigned Size, unsigned BinOpcode) const {
-  std::string msg;
-  raw_string_ostream Msg(msg);
-  Msg << "Cannot yet emit: ";
-  MI->print(Msg);
-  llvm_report_error(Msg.str());
+  // This also handles ATOMIC_SWAP, indicated by BinOpcode==0.
+  const TargetInstrInfo *TII = getTargetMachine().getInstrInfo();
+
+  const BasicBlock *LLVM_BB = BB->getBasicBlock();
+  MachineFunction *F = BB->getParent();
+  MachineFunction::iterator It = BB;
+  ++It;
+
+  unsigned dest = MI->getOperand(0).getReg();
+  unsigned ptr = MI->getOperand(1).getReg();
+  unsigned incr = MI->getOperand(2).getReg();
+  DebugLoc dl = MI->getDebugLoc();
+  bool isThumb2 = Subtarget->isThumb2();
+  unsigned ldrOpc, strOpc;
+  switch (Size) {
+  default: llvm_unreachable("unsupported size for AtomicBinary!");
+  case 1:
+    ldrOpc = isThumb2 ? ARM::t2LDREXB : ARM::LDREXB;
+    strOpc = isThumb2 ? ARM::t2STREXB : ARM::STREXB;
+    break;
+  case 2:
+    ldrOpc = isThumb2 ? ARM::t2LDREXH : ARM::LDREXH;
+    strOpc = isThumb2 ? ARM::t2STREXH : ARM::STREXH;
+    break;
+  case 4:
+    ldrOpc = isThumb2 ? ARM::t2LDREX : ARM::LDREX;
+    strOpc = isThumb2 ? ARM::t2STREX : ARM::STREX;
+    break;
+  }
+
+  MachineBasicBlock *loopMBB = F->CreateMachineBasicBlock(LLVM_BB);
+  MachineBasicBlock *exitMBB = F->CreateMachineBasicBlock(LLVM_BB);
+  F->insert(It, loopMBB);
+  F->insert(It, exitMBB);
+  exitMBB->transferSuccessors(BB);
+
+  MachineRegisterInfo &RegInfo = F->getRegInfo();
+  unsigned scratch = RegInfo.createVirtualRegister(ARM::GPRRegisterClass);
+  unsigned scratch2 = (!BinOpcode) ? incr :
+    RegInfo.createVirtualRegister(ARM::GPRRegisterClass);
+
+  //  thisMBB:
+  //   ...
+  //   fallthrough --> loopMBB
+  BB->addSuccessor(loopMBB);
+
+  //  loopMBB:
+  //   ldrex dest, ptr
+  //   <binop> scratch2, dest, incr
+  //   strex scratch, scratch2, ptr
+  //   cmp scratch, #0
+  //   bne- loopMBB
+  //   fallthrough --> exitMBB
+  BB = loopMBB;
+  AddDefaultPred(BuildMI(BB, dl, TII->get(ldrOpc), dest).addReg(ptr));
+  if (BinOpcode) {
+    // operand order needs to go the other way for NAND
+    if (BinOpcode == ARM::BICrr || BinOpcode == ARM::t2BICrr)
+      AddDefaultPred(BuildMI(BB, dl, TII->get(BinOpcode), scratch2).
+                     addReg(incr).addReg(dest)).addReg(0);
+    else
+      AddDefaultPred(BuildMI(BB, dl, TII->get(BinOpcode), scratch2).
+                     addReg(dest).addReg(incr)).addReg(0);
+  }
+
+  AddDefaultPred(BuildMI(BB, dl, TII->get(strOpc), scratch).addReg(scratch2)
+                 .addReg(ptr));
+  AddDefaultPred(BuildMI(BB, dl, TII->get(isThumb2 ? ARM::t2CMPri : ARM::CMPri))
+                 .addReg(scratch).addImm(0));
+  BuildMI(BB, dl, TII->get(isThumb2 ? ARM::t2Bcc : ARM::Bcc))
+    .addMBB(loopMBB).addImm(ARMCC::NE).addReg(ARM::CPSR);
+
+  BB->addSuccessor(loopMBB);
+  BB->addSuccessor(exitMBB);
+
+  //  exitMBB:
+  //   ...
+  BB = exitMBB;
+  return BB;
 }
 
 MachineBasicBlock *
@@ -3131,38 +3222,57 @@ ARMTargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
                    DenseMap<MachineBasicBlock*, MachineBasicBlock*> *EM) const {
   const TargetInstrInfo *TII = getTargetMachine().getInstrInfo();
   DebugLoc dl = MI->getDebugLoc();
+  bool isThumb2 = Subtarget->isThumb2();
   switch (MI->getOpcode()) {
   default:
     MI->dump();
     llvm_unreachable("Unexpected instr type to insert");
 
-  case ARM::ATOMIC_LOAD_ADD_I8:  return EmitAtomicBinary(MI, BB, 1, ARM::ADDrr);
-  case ARM::ATOMIC_LOAD_ADD_I16: return EmitAtomicBinary(MI, BB, 2, ARM::ADDrr);
-  case ARM::ATOMIC_LOAD_ADD_I32: return EmitAtomicBinary(MI, BB, 4, ARM::ADDrr);
-
-  case ARM::ATOMIC_LOAD_AND_I8:  return EmitAtomicBinary(MI, BB, 1, ARM::ANDrr);
-  case ARM::ATOMIC_LOAD_AND_I16: return EmitAtomicBinary(MI, BB, 2, ARM::ANDrr);
-  case ARM::ATOMIC_LOAD_AND_I32: return EmitAtomicBinary(MI, BB, 4, ARM::ANDrr);
-
-  case ARM::ATOMIC_LOAD_OR_I8:   return EmitAtomicBinary(MI, BB, 1, ARM::ORRrr);
-  case ARM::ATOMIC_LOAD_OR_I16:  return EmitAtomicBinary(MI, BB, 2, ARM::ORRrr);
-  case ARM::ATOMIC_LOAD_OR_I32:  return EmitAtomicBinary(MI, BB, 4, ARM::ORRrr);
-
-  case ARM::ATOMIC_LOAD_XOR_I8:  return EmitAtomicBinary(MI, BB, 1, ARM::EORrr);
-  case ARM::ATOMIC_LOAD_XOR_I16: return EmitAtomicBinary(MI, BB, 2, ARM::EORrr);
-  case ARM::ATOMIC_LOAD_XOR_I32: return EmitAtomicBinary(MI, BB, 4, ARM::EORrr);
-
-  case ARM::ATOMIC_LOAD_NAND_I8: return EmitAtomicBinary(MI, BB, 1, ARM::BICrr);
-  case ARM::ATOMIC_LOAD_NAND_I16:return EmitAtomicBinary(MI, BB, 2, ARM::BICrr);
-  case ARM::ATOMIC_LOAD_NAND_I32:return EmitAtomicBinary(MI, BB, 4, ARM::BICrr);
-
-  case ARM::ATOMIC_LOAD_SUB_I8:  return EmitAtomicBinary(MI, BB, 1, ARM::SUBrr);
-  case ARM::ATOMIC_LOAD_SUB_I16: return EmitAtomicBinary(MI, BB, 2, ARM::SUBrr);
-  case ARM::ATOMIC_LOAD_SUB_I32: return EmitAtomicBinary(MI, BB, 4, ARM::SUBrr);
-
-  case ARM::ATOMIC_SWAP_I8:      return EmitAtomicBinary(MI, BB, 1, 0);
-  case ARM::ATOMIC_SWAP_I16:     return EmitAtomicBinary(MI, BB, 2, 0);
-  case ARM::ATOMIC_SWAP_I32:     return EmitAtomicBinary(MI, BB, 4, 0);
+  case ARM::ATOMIC_LOAD_ADD_I8:
+     return EmitAtomicBinary(MI, BB, 1, isThumb2 ? ARM::t2ADDrr : ARM::ADDrr);
+  case ARM::ATOMIC_LOAD_ADD_I16:
+     return EmitAtomicBinary(MI, BB, 2, isThumb2 ? ARM::t2ADDrr : ARM::ADDrr);
+  case ARM::ATOMIC_LOAD_ADD_I32:
+     return EmitAtomicBinary(MI, BB, 4, isThumb2 ? ARM::t2ADDrr : ARM::ADDrr);
+
+  case ARM::ATOMIC_LOAD_AND_I8:
+     return EmitAtomicBinary(MI, BB, 1, isThumb2 ? ARM::t2ANDrr : ARM::ANDrr);
+  case ARM::ATOMIC_LOAD_AND_I16:
+     return EmitAtomicBinary(MI, BB, 2, isThumb2 ? ARM::t2ANDrr : ARM::ANDrr);
+  case ARM::ATOMIC_LOAD_AND_I32:
+     return EmitAtomicBinary(MI, BB, 4, isThumb2 ? ARM::t2ANDrr : ARM::ANDrr);
+
+  case ARM::ATOMIC_LOAD_OR_I8:
+     return EmitAtomicBinary(MI, BB, 1, isThumb2 ? ARM::t2ORRrr : ARM::ORRrr);
+  case ARM::ATOMIC_LOAD_OR_I16:
+     return EmitAtomicBinary(MI, BB, 2, isThumb2 ? ARM::t2ORRrr : ARM::ORRrr);
+  case ARM::ATOMIC_LOAD_OR_I32:
+     return EmitAtomicBinary(MI, BB, 4, isThumb2 ? ARM::t2ORRrr : ARM::ORRrr);
+
+  case ARM::ATOMIC_LOAD_XOR_I8:
+     return EmitAtomicBinary(MI, BB, 1, isThumb2 ? ARM::t2EORrr : ARM::EORrr);
+  case ARM::ATOMIC_LOAD_XOR_I16:
+     return EmitAtomicBinary(MI, BB, 2, isThumb2 ? ARM::t2EORrr : ARM::EORrr);
+  case ARM::ATOMIC_LOAD_XOR_I32:
+     return EmitAtomicBinary(MI, BB, 4, isThumb2 ? ARM::t2EORrr : ARM::EORrr);
+
+  case ARM::ATOMIC_LOAD_NAND_I8:
+     return EmitAtomicBinary(MI, BB, 1, isThumb2 ? ARM::t2BICrr : ARM::BICrr);
+  case ARM::ATOMIC_LOAD_NAND_I16:
+     return EmitAtomicBinary(MI, BB, 2, isThumb2 ? ARM::t2BICrr : ARM::BICrr);
+  case ARM::ATOMIC_LOAD_NAND_I32:
+     return EmitAtomicBinary(MI, BB, 4, isThumb2 ? ARM::t2BICrr : ARM::BICrr);
+
+  case ARM::ATOMIC_LOAD_SUB_I8:
+     return EmitAtomicBinary(MI, BB, 1, isThumb2 ? ARM::t2SUBrr : ARM::SUBrr);
+  case ARM::ATOMIC_LOAD_SUB_I16:
+     return EmitAtomicBinary(MI, BB, 2, isThumb2 ? ARM::t2SUBrr : ARM::SUBrr);
+  case ARM::ATOMIC_LOAD_SUB_I32:
+     return EmitAtomicBinary(MI, BB, 4, isThumb2 ? ARM::t2SUBrr : ARM::SUBrr);
+
+  case ARM::ATOMIC_SWAP_I8:  return EmitAtomicBinary(MI, BB, 1, 0);
+  case ARM::ATOMIC_SWAP_I16: return EmitAtomicBinary(MI, BB, 2, 0);
+  case ARM::ATOMIC_SWAP_I32: return EmitAtomicBinary(MI, BB, 4, 0);
 
   case ARM::ATOMIC_CMP_SWAP_I8:  return EmitAtomicCmpSwap(MI, BB, 1);
   case ARM::ATOMIC_CMP_SWAP_I16: return EmitAtomicCmpSwap(MI, BB, 2);
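The block structure `EmitAtomicBinary` emits above (ldrex; binop; strex; cmp; bne) is the classic load-exclusive/store-exclusive retry loop. A sketch of its control flow in plain C++, with a toy `Cell` simulating the exclusive monitor (forced failures are injected just to exercise the retry path — the names and types here are illustrative, not ARM's semantics in full):

```cpp
#include <cassert>
#include <functional>

// Toy load-exclusive/store-exclusive pair: strex returns a status like
// ARM's (0 = stored, 1 = retry). failuresLeft injects spurious failures
// so the demo actually takes the bne-back-to-loop path.
struct Cell {
  int value = 0;
  int failuresLeft = 0;
  int ldrex() const { return value; }
  int strex(int v) {
    if (failuresLeft > 0) { --failuresLeft; return 1; }
    value = v;
    return 0;
  }
};

// Shape of the block built by EmitAtomicBinary:
//   loop: dest = ldrex ptr; scratch2 = binop(dest, incr);
//         scratch = strex scratch2, ptr; cmp scratch, #0; bne loop
int atomicBinary(Cell &c, int incr, const std::function<int(int,int)> &op) {
  int dest, scratch;
  do {
    dest = c.ldrex();
    int scratch2 = op ? op(dest, incr) : incr;  // BinOpcode==0: ATOMIC_SWAP
    scratch = c.strex(scratch2);
  } while (scratch != 0);                       // bne loopMBB
  return dest;                                  // old value, per ATOMIC_LOAD_*
}
```

Note how the `BinOpcode == 0` swap case falls out naturally: with no binop, `scratch2` is just `incr`, matching the patch's `(!BinOpcode) ? incr : ...` scratch2 selection.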
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
index 9ce93d1..cf0edff 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
@@ -201,6 +201,19 @@ class I<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
   let Pattern = pattern;
   list<Predicate> Predicates = [IsARM];
 }
+// A few instructions are not predicable.
+class InoP<dag oops, dag iops, AddrMode am, SizeFlagVal sz,
+        IndexMode im, Format f, InstrItinClass itin, 
+        string opc, string asm, string cstr,
+        list<dag> pattern>
+  : InstARM<am, sz, im, f, GenericDomain, cstr, itin> {
+  let OutOperandList = oops;
+  let InOperandList = iops;
+  let AsmString   = !strconcat(opc, asm);
+  let Pattern = pattern;
+  let isPredicable = 0;
+  list<Predicate> Predicates = [IsARM];
+}
 
 // Same as I except it can optionally modify CPSR. Note it's modeled as
 // an input operand since by default it's a zero register. It will
@@ -241,6 +254,10 @@ class AXI<dag oops, dag iops, Format f, InstrItinClass itin,
           string asm, list<dag> pattern>
   : XI<oops, iops, AddrModeNone, Size4Bytes, IndexModeNone, f, itin,
        asm, "", pattern>;
+class AInoP<dag oops, dag iops, Format f, InstrItinClass itin,
+         string opc, string asm, list<dag> pattern>
+  : InoP<oops, iops, AddrModeNone, Size4Bytes, IndexModeNone, f, itin,
+      opc, asm, "", pattern>;
 
 // Ctrl flow instructions
 class ABI<bits<4> opcod, dag oops, dag iops, InstrItinClass itin,
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
index a0798a6..e14696a 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
@@ -46,8 +46,10 @@ def SDT_ARMPICAdd  : SDTypeProfile<1, 2, [SDTCisSameAs<0, 1>,
 def SDT_ARMThreadPointer : SDTypeProfile<1, 0, [SDTCisPtrTy<0>]>;
 def SDT_ARMEH_SJLJ_Setjmp : SDTypeProfile<1, 1, [SDTCisInt<0>, SDTCisPtrTy<1>]>;
 
-def SDT_ARMMEMBARRIER  : SDTypeProfile<0, 0, []>;
-def SDT_ARMSYNCBARRIER : SDTypeProfile<0, 0, []>;
+def SDT_ARMMEMBARRIERV7  : SDTypeProfile<0, 0, []>;
+def SDT_ARMSYNCBARRIERV7 : SDTypeProfile<0, 0, []>;
+def SDT_ARMMEMBARRIERV6  : SDTypeProfile<0, 1, [SDTCisInt<0>]>;
+def SDT_ARMSYNCBARRIERV6 : SDTypeProfile<0, 1, [SDTCisInt<0>]>;
 
 // Node definitions.
 def ARMWrapper       : SDNode<"ARMISD::Wrapper",     SDTIntUnaryOp>;
@@ -96,9 +98,13 @@ def ARMrrx           : SDNode<"ARMISD::RRX"     , SDTIntUnaryOp, [SDNPInFlag ]>;
 def ARMthread_pointer: SDNode<"ARMISD::THREAD_POINTER", SDT_ARMThreadPointer>;
 def ARMeh_sjlj_setjmp: SDNode<"ARMISD::EH_SJLJ_SETJMP", SDT_ARMEH_SJLJ_Setjmp>;
 
-def ARMMemBarrier    : SDNode<"ARMISD::MEMBARRIER", SDT_ARMMEMBARRIER,
+def ARMMemBarrierV7  : SDNode<"ARMISD::MEMBARRIER", SDT_ARMMEMBARRIERV7,
                               [SDNPHasChain]>;
-def ARMSyncBarrier   : SDNode<"ARMISD::SYNCBARRIER", SDT_ARMMEMBARRIER,
+def ARMSyncBarrierV7 : SDNode<"ARMISD::SYNCBARRIER", SDT_ARMMEMBARRIERV7,
+                              [SDNPHasChain]>;
+def ARMMemBarrierV6  : SDNode<"ARMISD::MEMBARRIER", SDT_ARMMEMBARRIERV6,
+                              [SDNPHasChain]>;
+def ARMSyncBarrierV6 : SDNode<"ARMISD::SYNCBARRIER", SDT_ARMMEMBARRIERV6,
                               [SDNPHasChain]>;
 
 //===----------------------------------------------------------------------===//
@@ -780,6 +786,7 @@ let isBranch = 1, isTerminator = 1 in {
   def BR_JTr : JTI<(outs), (ins GPR:$target, jtblock_operand:$jt, i32imm:$id),
                     IIC_Br, "mov\tpc, $target \n$jt",
                     [(ARMbrjt GPR:$target, tjumptable:$jt, imm:$id)]> {
+    let Inst{11-4}  = 0b00000000;
     let Inst{15-12} = 0b1111;
     let Inst{20}    = 0; // S Bit
     let Inst{24-21} = 0b1101;
@@ -1574,26 +1581,44 @@ def MOVCCi : AI1<0b1101, (outs GPR:$dst),
 //
 
 // memory barriers protect the atomic sequences
-let isPredicable = 0, hasSideEffects = 1 in {
-def Int_MemBarrierV7 : AI<(outs), (ins),
+let hasSideEffects = 1 in {
+def Int_MemBarrierV7 : AInoP<(outs), (ins),
                         Pseudo, NoItinerary,
                         "dmb", "",
-                        [(ARMMemBarrier)]>,
-                        Requires<[HasV7]> {
+                        [(ARMMemBarrierV7)]>,
+                        Requires<[IsARM, HasV7]> {
   let Inst{31-4} = 0xf57ff05;
   // FIXME: add support for options other than a full system DMB
   let Inst{3-0} = 0b1111;
 }
 
-def Int_SyncBarrierV7 : AI<(outs), (ins),
+def Int_SyncBarrierV7 : AInoP<(outs), (ins),
                         Pseudo, NoItinerary,
                         "dsb", "",
-                        [(ARMSyncBarrier)]>,
-                        Requires<[HasV7]> {
+                        [(ARMSyncBarrierV7)]>,
+                        Requires<[IsARM, HasV7]> {
   let Inst{31-4} = 0xf57ff04;
   // FIXME: add support for options other than a full system DSB
   let Inst{3-0} = 0b1111;
 }
+
+def Int_MemBarrierV6 : AInoP<(outs), (ins GPR:$zero),
+                       Pseudo, NoItinerary,
+                       "mcr", "\tp15, 0, $zero, c7, c10, 5",
+                       [(ARMMemBarrierV6 GPR:$zero)]>,
+                       Requires<[IsARM, HasV6]> {
+  // FIXME: add support for options other than a full system DMB
+  // FIXME: add encoding
+}
+
+def Int_SyncBarrierV6 : AInoP<(outs), (ins GPR:$zero),
+                        Pseudo, NoItinerary,
+                        "mcr", "\tp15, 0, $zero, c7, c10, 4",
+                        [(ARMSyncBarrierV6 GPR:$zero)]>,
+                        Requires<[IsARM, HasV6]> {
+  // FIXME: add support for options other than a full system DSB
+  // FIXME: add encoding
+}
 }
 
 let usesCustomInserter = 1 in {
@@ -1684,7 +1709,6 @@ let usesCustomInserter = 1 in {
       "${:comment} ATOMIC_SWAP_I32 PSEUDO!",
       [(set GPR:$dst, (atomic_swap_32 GPR:$ptr, GPR:$new))]>;
 
-
     def ATOMIC_CMP_SWAP_I8 : PseudoInst<
       (outs GPR:$dst), (ins GPR:$ptr, GPR:$old, GPR:$new), NoItinerary,
       "${:comment} ATOMIC_CMP_SWAP_I8 PSEUDO!",
@@ -1710,11 +1734,15 @@ def LDREXH : AIldrex<0b11, (outs GPR:$dest), (ins GPR:$ptr), NoItinerary,
 def LDREX  : AIldrex<0b00, (outs GPR:$dest), (ins GPR:$ptr), NoItinerary,
                     "ldrex", "\t$dest, [$ptr]",
                     []>;
+def LDREXD : AIldrex<0b01, (outs GPR:$dest, GPR:$dest2), (ins GPR:$ptr),
+                    NoItinerary,
+                    "ldrexd", "\t$dest, $dest2, [$ptr]",
+                    []>;
 }
 
 let mayStore = 1 in {
 def STREXB : AIstrex<0b10, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
-                     NoItinerary,
+                    NoItinerary,
                     "strexb", "\t$success, $src, [$ptr]",
                     []>;
 def STREXH : AIstrex<0b11, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
@@ -1722,9 +1750,14 @@ def STREXH : AIstrex<0b11, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
                     "strexh", "\t$success, $src, [$ptr]",
                     []>;
 def STREX  : AIstrex<0b00, (outs GPR:$success), (ins GPR:$src, GPR:$ptr),
-                     NoItinerary,
+                    NoItinerary,
                     "strex", "\t$success, $src, [$ptr]",
                     []>;
+def STREXD : AIstrex<0b01, (outs GPR:$success),
+                    (ins GPR:$src, GPR:$src2, GPR:$ptr),
+                    NoItinerary,
+                    "strexd", "\t$success, $src, $src2, [$ptr]",
+                    []>;
 }
 
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
index 9489815..949ce73 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
@@ -1065,6 +1065,68 @@ def t2MOVCCror : T2I<(outs GPR:$dst), (ins GPR:$false, GPR:$true, i32imm:$rhs),
                    RegConstraint<"$false = $dst">;
 
 //===----------------------------------------------------------------------===//
+// Atomic operations intrinsics
+//
+
+// memory barriers protect the atomic sequences
+let hasSideEffects = 1 in {
+def t2Int_MemBarrierV7 : AInoP<(outs), (ins),
+                        Pseudo, NoItinerary,
+                        "dmb", "",
+                        [(ARMMemBarrierV7)]>,
+                        Requires<[IsThumb2]> {
+  // FIXME: add support for options other than a full system DMB
+}
+
+def t2Int_SyncBarrierV7 : AInoP<(outs), (ins),
+                        Pseudo, NoItinerary,
+                        "dsb", "",
+                        [(ARMSyncBarrierV7)]>,
+                        Requires<[IsThumb2]> {
+  // FIXME: add support for options other than a full system DSB
+}
+}
+
+let mayLoad = 1 in {
+def t2LDREXB : Thumb2I<(outs GPR:$dest), (ins GPR:$ptr), AddrModeNone,
+                      Size4Bytes, NoItinerary,
+                      "ldrexb", "\t$dest, [$ptr]", "",
+                      []>;
+def t2LDREXH : Thumb2I<(outs GPR:$dest), (ins GPR:$ptr), AddrModeNone,
+                      Size4Bytes, NoItinerary,
+                      "ldrexh", "\t$dest, [$ptr]", "",
+                      []>;
+def t2LDREX  : Thumb2I<(outs GPR:$dest), (ins GPR:$ptr), AddrModeNone,
+                      Size4Bytes, NoItinerary,
+                      "ldrex", "\t$dest, [$ptr]", "",
+                      []>;
+def t2LDREXD : Thumb2I<(outs GPR:$dest, GPR:$dest2), (ins GPR:$ptr),
+                      AddrModeNone, Size4Bytes, NoItinerary,
+                      "ldrexd", "\t$dest, $dest2, [$ptr]", "",
+                      []>;
+}
+
+let mayStore = 1 in {
+def t2STREXB : Thumb2I<(outs GPR:$success), (ins GPR:$src, GPR:$ptr),
+                      AddrModeNone, Size4Bytes, NoItinerary,
+                      "strexb", "\t$success, $src, [$ptr]", "",
+                      []>;
+def t2STREXH : Thumb2I<(outs GPR:$success), (ins GPR:$src, GPR:$ptr),
+                      AddrModeNone, Size4Bytes, NoItinerary,
+                      "strexh", "\t$success, $src, [$ptr]", "",
+                      []>;
+def t2STREX  : Thumb2I<(outs GPR:$success), (ins GPR:$src, GPR:$ptr),
+                      AddrModeNone, Size4Bytes, NoItinerary,
+                      "strex", "\t$success, $src, [$ptr]", "",
+                      []>;
+def t2STREXD : Thumb2I<(outs GPR:$success),
+                      (ins GPR:$src, GPR:$src2, GPR:$ptr),
+                      AddrModeNone, Size4Bytes, NoItinerary,
+                      "strexd", "\t$success, $src, $src2, [$ptr]", "",
+                      []>;
+}
+
+//===----------------------------------------------------------------------===//
 // TLS Instructions
 //
 
diff --git a/libclamav/c++/llvm/lib/Target/README.txt b/libclamav/c++/llvm/lib/Target/README.txt
index c788360..e1772c2 100644
--- a/libclamav/c++/llvm/lib/Target/README.txt
+++ b/libclamav/c++/llvm/lib/Target/README.txt
@@ -801,8 +801,21 @@ void bar(unsigned n) {
     true();
 }
 
-I think this basically amounts to a dag combine to simplify comparisons against
-multiply hi's into a comparison against the mullo.
+This is equivalent to the following, where 2863311531 is the multiplicative
+inverse of 3 modulo 2^32, and 1431655766 is ((2^32)-1)/3+1:
+void bar(unsigned n) {
+  if (n * 2863311531U < 1431655766U)
+    true();
+}
+
+The same transformation can work with an even modulus with the addition of a
+rotate: rotate the result of the multiply to the right by the number of bits
+which need to be zero for the condition to be true, and shrink the compare RHS
+by the same amount.  Unless the target supports rotates, though, that
+transformation probably isn't worthwhile.
+
+The transformation can also easily be made to work with non-zero equality
+comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
 
 //===---------------------------------------------------------------------===//
 
@@ -823,20 +836,6 @@ int main() {
 
 //===---------------------------------------------------------------------===//
 
-Instcombine will merge comparisons like (x >= 10) && (x < 20) by producing (x -
-10) u< 10, but only when the comparisons have matching sign.
-
-This could be converted with a similiar technique. (PR1941)
-
-define i1 @test(i8 %x) {
-  %A = icmp uge i8 %x, 5
-  %B = icmp slt i8 %x, 20
-  %C = and i1 %A, %B
-  ret i1 %C
-}
-
-//===---------------------------------------------------------------------===//
-
 These functions perform the same computation, but produce different assembly.
 
 define i8 @select(i8 %x) readnone nounwind {
@@ -884,18 +883,6 @@ The expression should optimize to something like
 
 //===---------------------------------------------------------------------===//
 
-From GCC Bug 15241:
-unsigned int
-foo (unsigned int a, unsigned int b)
-{
- if (a <= 7 && b <= 7)
-   baz ();
-}
-Should combine to "(a|b) <= 7".  Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
-
-//===---------------------------------------------------------------------===//
-
 From GCC Bug 3756:
 int
 pn (int n)
@@ -907,19 +894,6 @@ Should combine to (n >> 31) | 1.  Currently not optimized with "clang
 
 //===---------------------------------------------------------------------===//
 
-From GCC Bug 28685:
-int test(int a, int b)
-{
- int lt = a < b;
- int eq = a == b;
-
- return (lt || eq);
-}
-Should combine to "a <= b".  Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts | llc".
-
-//===---------------------------------------------------------------------===//
-
 void a(int variable)
 {
  if (variable == 4 || variable == 6)
@@ -993,12 +967,6 @@ Should combine to 0.  Currently not optimized with "clang
 
 //===---------------------------------------------------------------------===//
 
-int a(unsigned char* b) {return *b > 99;}
-There's an unnecessary zext in the generated code with "clang
--emit-llvm-bc | opt -std-compile-opts".
-
-//===---------------------------------------------------------------------===//
-
 int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
 Should be combined to  "((b >> 1) | b) & 1".  Currently not optimized
 with "clang -emit-llvm-bc | opt -std-compile-opts".
@@ -1011,12 +979,6 @@ Should combine to "x | (y & 3)".  Currently not optimized with "clang
 
 //===---------------------------------------------------------------------===//
 
-unsigned a(unsigned a) {return ((a | 1) & 3) | (a & -4);}
-Should combine to "a | 1".  Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
-
-//===---------------------------------------------------------------------===//
-
 int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
 Should fold to "(~a & c) | (a & b)".  Currently not optimized with
 "clang -emit-llvm-bc | opt -std-compile-opts".
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86CallingConv.td b/libclamav/c++/llvm/lib/Target/X86/X86CallingConv.td
index d77f039..12d3d04 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86CallingConv.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86CallingConv.td
@@ -64,11 +64,18 @@ def RetCC_X86_32_C : CallingConv<[
 // X86-32 FastCC return-value convention.
 def RetCC_X86_32_Fast : CallingConv<[
   // The X86-32 fastcc returns 1, 2, or 3 FP values in XMM0-2 if the target has
-  // SSE2, otherwise it is the the C calling conventions.
+  // SSE2.
   // This can happen when a float, 2 x float, or 3 x float vector is split by
   // target lowering, and is returned in 1-3 sse regs.
   CCIfType<[f32], CCIfSubtarget<"hasSSE2()", CCAssignToReg<[XMM0,XMM1,XMM2]>>>,
   CCIfType<[f64], CCIfSubtarget<"hasSSE2()", CCAssignToReg<[XMM0,XMM1,XMM2]>>>,
+
+  // For integers, ECX can be used as an extra return register
+  CCIfType<[i8],  CCAssignToReg<[AL, DL, CL]>>,
+  CCIfType<[i16], CCAssignToReg<[AX, DX, CX]>>,
+  CCIfType<[i32], CCAssignToReg<[EAX, EDX, ECX]>>,
+
+  // Otherwise, it is the same as the common X86 calling convention.
   CCDelegateTo<RetCC_X86Common>
 ]>;
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
index 8c3b707..0517b56 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -596,6 +596,17 @@ X86TargetLowering::X86TargetLowering(X86TargetMachine &TM)
     setOperationAction(ISD::UINT_TO_FP, (MVT::SimpleValueType)VT, Expand);
     setOperationAction(ISD::SINT_TO_FP, (MVT::SimpleValueType)VT, Expand);
     setOperationAction(ISD::SIGN_EXTEND_INREG, (MVT::SimpleValueType)VT,Expand);
+    setOperationAction(ISD::TRUNCATE,  (MVT::SimpleValueType)VT, Expand);
+    setOperationAction(ISD::SIGN_EXTEND,  (MVT::SimpleValueType)VT, Expand);
+    setOperationAction(ISD::ZERO_EXTEND,  (MVT::SimpleValueType)VT, Expand);
+    setOperationAction(ISD::ANY_EXTEND,  (MVT::SimpleValueType)VT, Expand);
+    for (unsigned InnerVT = (unsigned)MVT::FIRST_VECTOR_VALUETYPE;
+         InnerVT <= (unsigned)MVT::LAST_VECTOR_VALUETYPE; ++InnerVT)
+      setTruncStoreAction((MVT::SimpleValueType)VT,
+                          (MVT::SimpleValueType)InnerVT, Expand);
+    setLoadExtAction(ISD::SEXTLOAD, (MVT::SimpleValueType)VT, Expand);
+    setLoadExtAction(ISD::ZEXTLOAD, (MVT::SimpleValueType)VT, Expand);
+    setLoadExtAction(ISD::EXTLOAD, (MVT::SimpleValueType)VT, Expand);
   }
 
   // FIXME: In order to prevent SSE instructions being expanded to MMX ones
@@ -672,8 +683,6 @@ X86TargetLowering::X86TargetLowering(X86TargetMachine &TM)
 
     setOperationAction(ISD::INSERT_VECTOR_ELT,  MVT::v4i16, Custom);
 
-    setTruncStoreAction(MVT::v8i16,             MVT::v8i8, Expand);
-    setOperationAction(ISD::TRUNCATE,           MVT::v8i8, Expand);
     setOperationAction(ISD::SELECT,             MVT::v8i8, Promote);
     setOperationAction(ISD::SELECT,             MVT::v4i16, Promote);
     setOperationAction(ISD::SELECT,             MVT::v2i32, Promote);
@@ -5741,6 +5750,17 @@ SDValue X86TargetLowering::LowerSETCC(SDValue Op, SelectionDAG &DAG) {
     return SDValue();
 
   SDValue Cond = EmitCmp(Op0, Op1, X86CC, DAG);
+
+  // Use sbb x, x to materialize carry bit into a GPR.
+  // FIXME: Temporarily disabled since it breaks self-hosting. It's apparently
+  // miscompiling ARMISelDAGToDAG.cpp.
+  if (0 && !isFP && X86CC == X86::COND_B) {
+    return DAG.getNode(ISD::AND, dl, MVT::i8,
+                       DAG.getNode(X86ISD::SETCC_CARRY, dl, MVT::i8,
+                                   DAG.getConstant(X86CC, MVT::i8), Cond),
+                       DAG.getConstant(1, MVT::i8));
+  }
+
   return DAG.getNode(X86ISD::SETCC, dl, MVT::i8,
                      DAG.getConstant(X86CC, MVT::i8), Cond);
 }
@@ -5893,9 +5913,18 @@ SDValue X86TargetLowering::LowerSELECT(SDValue Op, SelectionDAG &DAG) {
       Cond = NewCond;
   }
 
+  // Look past (and (setcc_carry (cmp ...)), 1).
+  if (Cond.getOpcode() == ISD::AND &&
+      Cond.getOperand(0).getOpcode() == X86ISD::SETCC_CARRY) {
+    ConstantSDNode *C = dyn_cast<ConstantSDNode>(Cond.getOperand(1));
+    if (C && C->getAPIntValue() == 1) 
+      Cond = Cond.getOperand(0);
+  }
+
   // If condition flag is set by a X86ISD::CMP, then use it as the condition
   // setting operand in place of the X86ISD::SETCC.
-  if (Cond.getOpcode() == X86ISD::SETCC) {
+  if (Cond.getOpcode() == X86ISD::SETCC ||
+      Cond.getOpcode() == X86ISD::SETCC_CARRY) {
     CC = Cond.getOperand(0);
 
     SDValue Cmp = Cond.getOperand(1);
@@ -5978,9 +6007,18 @@ SDValue X86TargetLowering::LowerBRCOND(SDValue Op, SelectionDAG &DAG) {
     Cond = LowerXALUO(Cond, DAG);
 #endif
 
+  // Look past (and (setcc_carry (cmp ...)), 1).
+  if (Cond.getOpcode() == ISD::AND &&
+      Cond.getOperand(0).getOpcode() == X86ISD::SETCC_CARRY) {
+    ConstantSDNode *C = dyn_cast<ConstantSDNode>(Cond.getOperand(1));
+    if (C && C->getAPIntValue() == 1) 
+      Cond = Cond.getOperand(0);
+  }
+
   // If condition flag is set by a X86ISD::CMP, then use it as the condition
   // setting operand in place of the X86ISD::SETCC.
-  if (Cond.getOpcode() == X86ISD::SETCC) {
+  if (Cond.getOpcode() == X86ISD::SETCC ||
+      Cond.getOpcode() == X86ISD::SETCC_CARRY) {
     CC = Cond.getOperand(0);
 
     SDValue Cmp = Cond.getOperand(1);
@@ -7367,6 +7405,7 @@ const char *X86TargetLowering::getTargetNodeName(unsigned Opcode) const {
   case X86ISD::COMI:               return "X86ISD::COMI";
   case X86ISD::UCOMI:              return "X86ISD::UCOMI";
   case X86ISD::SETCC:              return "X86ISD::SETCC";
+  case X86ISD::SETCC_CARRY:        return "X86ISD::SETCC_CARRY";
   case X86ISD::CMOV:               return "X86ISD::CMOV";
   case X86ISD::BRCOND:             return "X86ISD::BRCOND";
   case X86ISD::RET_FLAG:           return "X86ISD::RET_FLAG";
@@ -8941,11 +8980,42 @@ static SDValue PerformMulCombine(SDNode *N, SelectionDAG &DAG,
   return SDValue();
 }
 
+static SDValue PerformSHLCombine(SDNode *N, SelectionDAG &DAG) {
+  SDValue N0 = N->getOperand(0);
+  SDValue N1 = N->getOperand(1);
+  ConstantSDNode *N1C = dyn_cast<ConstantSDNode>(N1);
+  EVT VT = N0.getValueType();
+
+  // fold (shl (and (setcc_c), c1), c2) -> (and setcc_c, (c1 << c2))
+  // since the result of setcc_c is all zeros or all ones.
+  if (N1C && N0.getOpcode() == ISD::AND &&
+      N0.getOperand(1).getOpcode() == ISD::Constant) {
+    SDValue N00 = N0.getOperand(0);
+    if (N00.getOpcode() == X86ISD::SETCC_CARRY ||
+        ((N00.getOpcode() == ISD::ANY_EXTEND ||
+          N00.getOpcode() == ISD::ZERO_EXTEND) &&
+         N00.getOperand(0).getOpcode() == X86ISD::SETCC_CARRY)) {
+      APInt Mask = cast<ConstantSDNode>(N0.getOperand(1))->getAPIntValue();
+      APInt ShAmt = N1C->getAPIntValue();
+      Mask = Mask.shl(ShAmt);
+      if (Mask != 0)
+        return DAG.getNode(ISD::AND, N->getDebugLoc(), VT,
+                           N00, DAG.getConstant(Mask, VT));
+    }
+  }
+
+  return SDValue();
+}
 
 /// PerformShiftCombine - Transforms vector shift nodes to use vector shifts
 ///                       when possible.
 static SDValue PerformShiftCombine(SDNode* N, SelectionDAG &DAG,
                                    const X86Subtarget *Subtarget) {
+  EVT VT = N->getValueType(0);
+  if (!VT.isVector() && VT.isInteger() &&
+      N->getOpcode() == ISD::SHL)
+    return PerformSHLCombine(N, DAG);
+
   // On X86 with SSE2 support, we can transform this to a vector shift if
   // all elements are shifted by the same amount.  We can't do this in legalize
   // because the a constant vector is typically transformed to a constant pool
@@ -8953,7 +9023,6 @@ static SDValue PerformShiftCombine(SDNode* N, SelectionDAG &DAG,
   if (!Subtarget->hasSSE2())
     return SDValue();
 
-  EVT VT = N->getValueType(0);
   if (VT != MVT::v2i64 && VT != MVT::v4i32 && VT != MVT::v8i16)
     return SDValue();
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
index 89b773d..64bc70c 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
@@ -118,6 +118,10 @@ namespace llvm {
       /// operand produced by a CMP instruction.
       SETCC,
 
+      // Same as SETCC except it's materialized with an sbb and the value is
+      // all ones or all zeros.
+      SETCC_CARRY,
+
       /// X86 conditional moves. Operand 0 and operand 1 are the two values
       /// to select from. Operand 2 is the condition code, and operand 3 is the
       /// flag operand produced by a CMP or TEST instruction. It also writes a
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
index b5fa862..b6a2c05 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
@@ -1333,6 +1333,15 @@ def CMOVNO64rm : RI<0x41, MRMSrcMem,       // if !overflow, GR64 = [mem64]
                                      X86_COND_NO, EFLAGS))]>, TB;
 } // isTwoAddress
 
+// Use sbb to materialize carry flag into a GPR.
+let Defs = [EFLAGS], Uses = [EFLAGS], isCodeGenOnly = 1 in
+def SETB_C64r : RI<0x19, MRMInitReg, (outs GR64:$dst), (ins),
+                  "sbb{q}\t$dst, $dst",
+                 [(set GR64:$dst, (zext (X86setcc_c X86_COND_B, EFLAGS)))]>;
+
+def : Pat<(i64 (anyext (X86setcc_c X86_COND_B, EFLAGS))),
+          (SETB_C64r)>;
+
 //===----------------------------------------------------------------------===//
 //  Conversion Instructions...
 //
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
index d45dcce..1947d35 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
@@ -1058,7 +1058,7 @@ static bool hasLiveCondCodeDef(MachineInstr *MI) {
   return false;
 }
 
-/// convertToThreeAddressWithLEA - Helper for convertToThreeAddress when 16-bit
+/// convertToThreeAddressWithLEA - Helper for convertToThreeAddress when
 /// 16-bit LEA is disabled, use 32-bit LEA to form 3-address code by promoting
 /// to a 32-bit superregister and then truncating back down to a 16-bit
 /// subregister.
@@ -1081,6 +1081,11 @@ X86InstrInfo::convertToThreeAddressWithLEA(unsigned MIOpc,
             
   // Build and insert into an implicit UNDEF value. This is OK because
  // we'll be shifting and then extracting the lower 16-bits.
+  // This has the potential to cause a partial register stall, e.g.:
+  //   movw    (%rbp,%rcx,2), %dx
+  //   leal    -65(%rdx), %esi
+  // But testing has shown this *does* help performance in 64-bit mode (at
+  // least on modern x86 machines).
   BuildMI(*MFI, MBBI, MI->getDebugLoc(), get(X86::IMPLICIT_DEF), leaInReg);
   MachineInstr *InsMI =
     BuildMI(*MFI, MBBI, MI->getDebugLoc(), get(X86::INSERT_SUBREG),leaInReg)
@@ -1184,7 +1189,9 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
   MachineInstr *NewMI = NULL;
   // FIXME: 16-bit LEA's are really slow on Athlons, but not bad on P4's.  When
   // we have better subtarget support, enable the 16-bit LEA generation here.
+  // 16-bit LEA is also slow on Core2.
   bool DisableLEA16 = true;
+  bool is64Bit = TM.getSubtarget<X86Subtarget>().is64Bit();
 
   unsigned MIOpc = MI->getOpcode();
   switch (MIOpc) {
@@ -1223,8 +1230,7 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     unsigned ShAmt = MI->getOperand(2).getImm();
     if (ShAmt == 0 || ShAmt >= 4) return 0;
 
-    unsigned Opc = TM.getSubtarget<X86Subtarget>().is64Bit() ?
-      X86::LEA64_32r : X86::LEA32r;
+    unsigned Opc = is64Bit ? X86::LEA64_32r : X86::LEA32r;
     NewMI = BuildMI(MF, MI->getDebugLoc(), get(Opc))
       .addReg(Dest, RegState::Define | getDeadRegState(isDead))
       .addReg(0).addImm(1 << ShAmt)
@@ -1239,7 +1245,7 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     if (ShAmt == 0 || ShAmt >= 4) return 0;
 
     if (DisableLEA16)
-      return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
+      return is64Bit ? convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV) : 0;
     NewMI = BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
       .addReg(Dest, RegState::Define | getDeadRegState(isDead))
       .addReg(0).addImm(1 << ShAmt)
@@ -1254,7 +1260,6 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     if (hasLiveCondCodeDef(MI))
       return 0;
 
-    bool is64Bit = TM.getSubtarget<X86Subtarget>().is64Bit();
     switch (MIOpc) {
     default: return 0;
     case X86::INC64r:
@@ -1272,7 +1277,7 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     case X86::INC16r:
     case X86::INC64_16r:
       if (DisableLEA16)
-        return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
+        return is64Bit ? convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV) : 0;
       assert(MI->getNumOperands() >= 2 && "Unknown inc instruction!");
       NewMI = addRegOffset(BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
                            .addReg(Dest, RegState::Define |
@@ -1294,7 +1299,7 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     case X86::DEC16r:
     case X86::DEC64_16r:
       if (DisableLEA16)
-        return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
+        return is64Bit ? convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV) : 0;
       assert(MI->getNumOperands() >= 2 && "Unknown dec instruction!");
       NewMI = addRegOffset(BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
                            .addReg(Dest, RegState::Define |
@@ -1318,7 +1323,7 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     }
     case X86::ADD16rr: {
       if (DisableLEA16)
-        return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
+        return is64Bit ? convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV) : 0;
       assert(MI->getNumOperands() >= 3 && "Unknown add instruction!");
       unsigned Src2 = MI->getOperand(2).getReg();
       bool isKill2 = MI->getOperand(2).isKill();
@@ -1351,7 +1356,7 @@ X86InstrInfo::convertToThreeAddress(MachineFunction::iterator &MFI,
     case X86::ADD16ri:
     case X86::ADD16ri8:
       if (DisableLEA16)
-        return convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV);
+        return is64Bit ? convertToThreeAddressWithLEA(MIOpc, MFI, MBBI, LV) : 0;
       assert(MI->getNumOperands() >= 3 && "Unknown add instruction!");
       NewMI = addLeaRegOffset(BuildMI(MF, MI->getDebugLoc(), get(X86::LEA16r))
                               .addReg(Dest, RegState::Define |
@@ -1619,14 +1624,17 @@ bool X86InstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,
   MachineBasicBlock::iterator I = MBB.end();
   while (I != MBB.begin()) {
     --I;
-    // Working from the bottom, when we see a non-terminator
-    // instruction, we're done.
+
+    // Working from the bottom, when we see a non-terminator instruction, we're
+    // done.
     if (!isBrAnalysisUnpredicatedTerminator(I, *this))
       break;
-    // A terminator that isn't a branch can't easily be handled
-    // by this analysis.
+
+    // A terminator that isn't a branch can't easily be handled by this
+    // analysis.
     if (!I->getDesc().isBranch())
       return true;
+
     // Handle unconditional branches.
     if (I->getOpcode() == X86::JMP) {
       if (!AllowModify) {
@@ -1637,8 +1645,10 @@ bool X86InstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,
       // If the block has any instructions after a JMP, delete them.
       while (llvm::next(I) != MBB.end())
         llvm::next(I)->eraseFromParent();
+
       Cond.clear();
       FBB = 0;
+
       // Delete the JMP if it's equivalent to a fall-through.
       if (MBB.isLayoutSuccessor(I->getOperand(0).getMBB())) {
         TBB = 0;
@@ -1646,14 +1656,17 @@ bool X86InstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,
         I = MBB.end();
         continue;
       }
+
      // TBB is used to indicate the unconditional destination.
       TBB = I->getOperand(0).getMBB();
       continue;
     }
+
     // Handle conditional branches.
     X86::CondCode BranchCode = GetCondFromBranchOpc(I->getOpcode());
     if (BranchCode == X86::COND_INVALID)
       return true;  // Can't handle indirect branch.
+
     // Working from the bottom, handle the first conditional branch.
     if (Cond.empty()) {
       FBB = TBB;
@@ -1661,24 +1674,26 @@ bool X86InstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,
       Cond.push_back(MachineOperand::CreateImm(BranchCode));
       continue;
     }
-    // Handle subsequent conditional branches. Only handle the case
-    // where all conditional branches branch to the same destination
-    // and their condition opcodes fit one of the special
-    // multi-branch idioms.
+
+    // Handle subsequent conditional branches. Only handle the case where all
+    // conditional branches branch to the same destination and their condition
+    // opcodes fit one of the special multi-branch idioms.
     assert(Cond.size() == 1);
     assert(TBB);
-    // Only handle the case where all conditional branches branch to
-    // the same destination.
+
+    // Only handle the case where all conditional branches branch to the same
+    // destination.
     if (TBB != I->getOperand(0).getMBB())
       return true;
-    X86::CondCode OldBranchCode = (X86::CondCode)Cond[0].getImm();
+
     // If the conditions are the same, we can leave them alone.
+    X86::CondCode OldBranchCode = (X86::CondCode)Cond[0].getImm();
     if (OldBranchCode == BranchCode)
       continue;
-    // If they differ, see if they fit one of the known patterns.
-    // Theoretically we could handle more patterns here, but
-    // we shouldn't expect to see them if instruction selection
-    // has done a reasonable job.
+
+    // If they differ, see if they fit one of the known patterns. Theoretically,
+    // we could handle more patterns here, but we shouldn't expect to see them
+    // if instruction selection has done a reasonable job.
     if ((OldBranchCode == X86::COND_NP &&
          BranchCode == X86::COND_E) ||
         (OldBranchCode == X86::COND_E &&
@@ -1691,6 +1706,7 @@ bool X86InstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,
       BranchCode = X86::COND_NE_OR_P;
     else
       return true;
+
     // Update the MachineOperand.
     Cond[0].setImm(BranchCode);
   }
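
As a side note for readers of this hunk: the multi-branch idiom matching in AnalyzeBranch can be sketched in plain C++. The enum values and function name below are illustrative stand-ins for the X86 backend's real `X86::CondCode` machinery, not its actual API:

```cpp
#include <optional>

// Hypothetical stand-ins for the X86::CondCode values referenced in the hunk.
enum CondCode { COND_E, COND_NE, COND_P, COND_NP, COND_NP_OR_E, COND_NE_OR_P };

// Mirrors the pattern check above: two conditional branches to the same
// target merge only for the known parity/equality idioms; any other pair is
// treated as unanalyzable (the caller returns true).
std::optional<CondCode> mergeBranchCodes(CondCode Old, CondCode New) {
  if ((Old == COND_NP && New == COND_E) || (Old == COND_E && New == COND_NP))
    return COND_NP_OR_E;
  if ((Old == COND_NE && New == COND_P) || (Old == COND_P && New == COND_NE))
    return COND_NE_OR_P;
  return std::nullopt;
}
```

Either ordering of the pair merges to the same combined code, which is why the hunk checks both operand orders explicitly.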
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
index 90ef1f4..3cc1853 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
@@ -87,6 +87,7 @@ def X86cmov    : SDNode<"X86ISD::CMOV",     SDTX86Cmov>;
 def X86brcond  : SDNode<"X86ISD::BRCOND",   SDTX86BrCond,
                         [SDNPHasChain]>;
 def X86setcc   : SDNode<"X86ISD::SETCC",    SDTX86SetCC>;
+def X86setcc_c : SDNode<"X86ISD::SETCC_CARRY", SDTX86SetCC>;
 
 def X86cas : SDNode<"X86ISD::LCMPXCHG_DAG", SDTX86cas,
                         [SDNPHasChain, SDNPInFlag, SDNPOutFlag, SDNPMayStore,
@@ -816,7 +817,7 @@ def BSR32rm  : I<0xBD, MRMSrcMem, (outs GR32:$dst), (ins i32mem:$src),
 
 let neverHasSideEffects = 1 in
 def LEA16r   : I<0x8D, MRMSrcMem,
-                 (outs GR16:$dst), (ins i32mem:$src),
+                 (outs GR16:$dst), (ins lea32mem:$src),
                  "lea{w}\t{$src|$dst}, {$dst|$src}", []>, OpSize;
 let isReMaterializable = 1 in
 def LEA32r   : I<0x8D, MRMSrcMem,
@@ -3059,6 +3060,21 @@ let Defs = [AH], Uses = [EFLAGS], neverHasSideEffects = 1 in
 def LAHF     : I<0x9F, RawFrm, (outs),  (ins), "lahf", []>;  // AH = flags
 
 let Uses = [EFLAGS] in {
+// Use sbb to materialize carry bit.
+
+let Defs = [EFLAGS], isCodeGenOnly = 1 in {
+def SETB_C8r : I<0x18, MRMInitReg, (outs GR8:$dst), (ins),
+                 "sbb{b}\t$dst, $dst",
+                 [(set GR8:$dst, (X86setcc_c X86_COND_B, EFLAGS))]>;
+def SETB_C16r : I<0x19, MRMInitReg, (outs GR16:$dst), (ins),
+                  "sbb{w}\t$dst, $dst",
+                 [(set GR16:$dst, (zext (X86setcc_c X86_COND_B, EFLAGS)))]>,
+                OpSize;
+def SETB_C32r : I<0x19, MRMInitReg, (outs GR32:$dst), (ins),
+                  "sbb{l}\t$dst, $dst",
+                 [(set GR32:$dst, (zext (X86setcc_c X86_COND_B, EFLAGS)))]>;
+} // isCodeGenOnly
+
 def SETEr    : I<0x94, MRM0r, 
                  (outs GR8   :$dst), (ins),
                  "sete\t$dst",
@@ -4169,6 +4185,12 @@ def : Pat<(store (shld (loadi16 addr:$dst), (i8 imm:$amt1),
                        GR16:$src2, (i8 imm:$amt2)), addr:$dst),
           (SHLD16mri8 addr:$dst, GR16:$src2, (i8 imm:$amt1))>;
 
+// (anyext (setcc_carry)) -> (zext (setcc_carry))
+def : Pat<(i16 (anyext (X86setcc_c X86_COND_B, EFLAGS))),
+          (SETB_C16r)>;
+def : Pat<(i32 (anyext (X86setcc_c X86_COND_B, EFLAGS))),
+          (SETB_C32r)>;
+
 //===----------------------------------------------------------------------===//
 // EFLAGS-defining Patterns
 //===----------------------------------------------------------------------===//
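
For context on the SETB_C definitions above: `sbb reg, reg` computes `reg - reg - CF`, which is 0 when the carry flag is clear and all-ones when it is set. A minimal C++ emulation of that arithmetic (illustrative only, not backend code):

```cpp
#include <cstdint>

// Emulates what `sbb %reg, %reg` leaves in the register: dst - dst - CF,
// i.e. 0x00000000 when the carry flag is clear and 0xFFFFFFFF when set.
// This is the single-instruction trick the SETB_C{8,16,32}r patterns use to
// materialize the carry bit as a 0/-1 mask.
uint32_t materializeCarryMask(bool carry) {
  return 0u - static_cast<uint32_t>(carry);
}
```

This also explains the `(anyext (setcc_carry)) -> (zext (setcc_carry))` patterns later in the diff: since `sbb` already produces a full-width 0 or -1, widening the result is free.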
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
index dcc9dd4..222792b 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
@@ -190,8 +190,11 @@ template <> struct DenseMapInfo<Expression> {
   static bool isEqual(const Expression &LHS, const Expression &RHS) {
     return LHS == RHS;
   }
-  static bool isPod() { return true; }
 };
+  
+template <>
+struct isPodLike<Expression> { static const bool value = true; };
+
 }
 
 //===----------------------------------------------------------------------===//
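
The `isPodLike` migration in this hunk replaces a per-trait `isPod()` member with a standalone trait template that types opt into by explicit specialization. A rough sketch of that shape (the default shown here is a guess; the struct layout of `Expression` is a stand-in, not LLVM's definition):

```cpp
#include <type_traits>

// Sketch of the isPodLike idea: a trait that callers consult to decide
// between memcpy-style and element-wise copying. The default shown here
// (std::is_trivial) is an illustrative assumption, not LLVM's exact rule.
template <typename T> struct isPodLike {
  static const bool value = std::is_trivial<T>::value;
};

// Stand-in for the GVN Expression struct; real fields differ.
struct Expression { unsigned opcode; unsigned type; };

// Opt-in by explicit specialization, as the hunk does for Expression.
template <> struct isPodLike<Expression> { static const bool value = true; };
```

Because the trait lives outside `DenseMapInfo`, containers such as SmallVector can consult it too, which is what enables the memcpy/uninitialized_copy choice mentioned in the commit message.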
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
index 2b4b66b..b9c536f 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/InstructionCombining.cpp
@@ -11200,8 +11200,9 @@ namespace llvm {
       return LHS.PN == RHS.PN && LHS.Shift == RHS.Shift &&
              LHS.Width == RHS.Width;
     }
-    static bool isPod() { return true; }
   };
+  template <>
+  struct isPodLike<LoweredPHIRecord> { static const bool value = true; };
 }
 
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
index 564c7ac..85cc712 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
@@ -24,18 +24,14 @@
 #include "llvm/Constants.h"
 #include "llvm/Instructions.h"
 #include "llvm/IntrinsicInst.h"
-#include "llvm/Type.h"
 #include "llvm/DerivedTypes.h"
-#include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/IVUsers.h"
-#include "llvm/Analysis/LoopInfo.h"
 #include "llvm/Analysis/LoopPass.h"
 #include "llvm/Analysis/ScalarEvolutionExpander.h"
 #include "llvm/Transforms/Utils/AddrModeMatcher.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Transforms/Utils/Local.h"
 #include "llvm/ADT/Statistic.h"
-#include "llvm/Support/CFG.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ValueHandle.h"
@@ -85,8 +81,6 @@ namespace {
 
   class LoopStrengthReduce : public LoopPass {
     IVUsers *IU;
-    LoopInfo *LI;
-    DominatorTree *DT;
     ScalarEvolution *SE;
     bool Changed;
 
@@ -94,10 +88,6 @@ namespace {
     /// particular stride.
     std::map<const SCEV *, IVsOfOneStride> IVsByStride;
 
-    /// StrideNoReuse - Keep track of all the strides whose ivs cannot be
-    /// reused (nor should they be rewritten to reuse other strides).
-    SmallSet<const SCEV *, 4> StrideNoReuse;
-
     /// DeadInsts - Keep track of instructions we may have made dead, so that
     /// we can remove them after we are done working.
     SmallVector<WeakVH, 16> DeadInsts;
@@ -109,8 +99,7 @@ namespace {
   public:
     static char ID; // Pass ID, replacement for typeid
     explicit LoopStrengthReduce(const TargetLowering *tli = NULL) :
-      LoopPass(&ID), TLI(tli) {
-    }
+      LoopPass(&ID), TLI(tli) {}
 
     bool runOnLoop(Loop *L, LPPassManager &LPM);
 
@@ -118,13 +107,11 @@ namespace {
       // We split critical edges, so we change the CFG.  However, we do update
       // many analyses if they are around.
       AU.addPreservedID(LoopSimplifyID);
-      AU.addPreserved<LoopInfo>();
-      AU.addPreserved<DominanceFrontier>();
-      AU.addPreserved<DominatorTree>();
+      AU.addPreserved("loops");
+      AU.addPreserved("domfrontier");
+      AU.addPreserved("domtree");
 
       AU.addRequiredID(LoopSimplifyID);
-      AU.addRequired<LoopInfo>();
-      AU.addRequired<DominatorTree>();
       AU.addRequired<ScalarEvolution>();
       AU.addPreserved<ScalarEvolution>();
       AU.addRequired<IVUsers>();
@@ -228,19 +215,17 @@ void LoopStrengthReduce::DeleteTriviallyDeadInstructions() {
   if (DeadInsts.empty()) return;
 
   while (!DeadInsts.empty()) {
-    Instruction *I = dyn_cast_or_null<Instruction>(DeadInsts.back());
-    DeadInsts.pop_back();
+    Instruction *I = dyn_cast_or_null<Instruction>(DeadInsts.pop_back_val());
 
     if (I == 0 || !isInstructionTriviallyDead(I))
       continue;
 
-    for (User::op_iterator OI = I->op_begin(), E = I->op_end(); OI != E; ++OI) {
+    for (User::op_iterator OI = I->op_begin(), E = I->op_end(); OI != E; ++OI)
       if (Instruction *U = dyn_cast<Instruction>(*OI)) {
         *OI = 0;
         if (U->use_empty())
           DeadInsts.push_back(U);
       }
-    }
 
     I->eraseFromParent();
     Changed = true;
@@ -265,7 +250,7 @@ static bool containsAddRecFromDifferentLoop(const SCEV *S, Loop *L) {
       if (newLoop == L)
         return false;
       // if newLoop is an outer loop of L, this is OK.
-      if (!LoopInfo::isNotAlreadyContainedIn(L, newLoop))
+      if (newLoop->contains(L->getHeader()))
         return false;
     }
     return true;
@@ -338,9 +323,6 @@ namespace {
   /// BasedUser - For a particular base value, keep information about how we've
   /// partitioned the expression so far.
   struct BasedUser {
-    /// SE - The current ScalarEvolution object.
-    ScalarEvolution *SE;
-
     /// Base - The Base value for the PHI node that needs to be inserted for
     /// this use.  As the use is processed, information gets moved from this
     /// field to the Imm field (below).  BasedUser values are sorted by this
@@ -372,9 +354,9 @@ namespace {
     bool isUseOfPostIncrementedValue;
 
     BasedUser(IVStrideUse &IVSU, ScalarEvolution *se)
-      : SE(se), Base(IVSU.getOffset()), Inst(IVSU.getUser()),
+      : Base(IVSU.getOffset()), Inst(IVSU.getUser()),
         OperandValToReplace(IVSU.getOperandValToReplace()),
-        Imm(SE->getIntegerSCEV(0, Base->getType())),
+        Imm(se->getIntegerSCEV(0, Base->getType())),
         isUseOfPostIncrementedValue(IVSU.isUseOfPostIncrementedValue()) {}
 
     // Once we rewrite the code to insert the new IVs we want, update the
@@ -383,14 +365,14 @@ namespace {
     void RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
                                         Instruction *InsertPt,
                                        SCEVExpander &Rewriter, Loop *L, Pass *P,
-                                        LoopInfo &LI,
-                                        SmallVectorImpl<WeakVH> &DeadInsts);
+                                        SmallVectorImpl<WeakVH> &DeadInsts,
+                                        ScalarEvolution *SE);
 
     Value *InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
                                        const Type *Ty,
                                        SCEVExpander &Rewriter,
-                                       Instruction *IP, Loop *L,
-                                       LoopInfo &LI);
+                                       Instruction *IP,
+                                       ScalarEvolution *SE);
     void dump() const;
   };
 }
@@ -404,27 +386,12 @@ void BasedUser::dump() const {
 Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
                                               const Type *Ty,
                                               SCEVExpander &Rewriter,
-                                              Instruction *IP, Loop *L,
-                                              LoopInfo &LI) {
-  // Figure out where we *really* want to insert this code.  In particular, if
-  // the user is inside of a loop that is nested inside of L, we really don't
-  // want to insert this expression before the user, we'd rather pull it out as
-  // many loops as possible.
-  Instruction *BaseInsertPt = IP;
-
-  // Figure out the most-nested loop that IP is in.
-  Loop *InsertLoop = LI.getLoopFor(IP->getParent());
-
-  // If InsertLoop is not L, and InsertLoop is nested inside of L, figure out
-  // the preheader of the outer-most loop where NewBase is not loop invariant.
-  if (L->contains(IP->getParent()))
-    while (InsertLoop && NewBase->isLoopInvariant(InsertLoop)) {
-      BaseInsertPt = InsertLoop->getLoopPreheader()->getTerminator();
-      InsertLoop = InsertLoop->getParentLoop();
-    }
-
-  Value *Base = Rewriter.expandCodeFor(NewBase, 0, BaseInsertPt);
+                                              Instruction *IP,
+                                              ScalarEvolution *SE) {
+  Value *Base = Rewriter.expandCodeFor(NewBase, 0, IP);
 
+  // Wrap the base in a SCEVUnknown so that ScalarEvolution doesn't try to
+  // re-analyze it.
   const SCEV *NewValSCEV = SE->getUnknown(Base);
 
   // Always emit the immediate into the same block as the user.
@@ -443,8 +410,8 @@ Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *const &NewBase,
 void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
                                                Instruction *NewBasePt,
                                       SCEVExpander &Rewriter, Loop *L, Pass *P,
-                                      LoopInfo &LI,
-                                      SmallVectorImpl<WeakVH> &DeadInsts) {
+                                      SmallVectorImpl<WeakVH> &DeadInsts,
+                                      ScalarEvolution *SE) {
   if (!isa<PHINode>(Inst)) {
     // By default, insert code at the user instruction.
     BasicBlock::iterator InsertPt = Inst;
@@ -473,7 +440,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
     }
     Value *NewVal = InsertCodeForBaseAtPosition(NewBase,
                                                 OperandValToReplace->getType(),
-                                                Rewriter, InsertPt, L, LI);
+                                                Rewriter, InsertPt, SE);
     // Replace the use of the operand Value with the new Phi we just created.
     Inst->replaceUsesOfWith(OperandValToReplace, NewVal);
 
@@ -535,7 +502,7 @@ void BasedUser::RewriteInstructionToUseNewBase(const SCEV *const &NewBase,
                                 PHIPred->getTerminator() :
                                 OldLoc->getParent()->getTerminator();
         Code = InsertCodeForBaseAtPosition(NewBase, PN->getType(),
-                                           Rewriter, InsertPt, L, LI);
+                                           Rewriter, InsertPt, SE);
 
         DEBUG(errs() << "      Changing PHI use to ");
         DEBUG(WriteAsOperand(errs(), Code, /*PrintType=*/false));
@@ -1011,17 +978,13 @@ const SCEV *LoopStrengthReduce::CheckForIVReuse(bool HasBaseReg,
                                 const SCEV *const &Stride,
                                 IVExpr &IV, const Type *Ty,
                                 const std::vector<BasedUser>& UsersToProcess) {
-  if (StrideNoReuse.count(Stride))
-    return SE->getIntegerSCEV(0, Stride->getType());
-
   if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(Stride)) {
     int64_t SInt = SC->getValue()->getSExtValue();
     for (unsigned NewStride = 0, e = IU->StrideOrder.size();
          NewStride != e; ++NewStride) {
       std::map<const SCEV *, IVsOfOneStride>::iterator SI =
                 IVsByStride.find(IU->StrideOrder[NewStride]);
-      if (SI == IVsByStride.end() || !isa<SCEVConstant>(SI->first) ||
-          StrideNoReuse.count(SI->first))
+      if (SI == IVsByStride.end() || !isa<SCEVConstant>(SI->first))
         continue;
       // The other stride has no uses, don't reuse it.
       std::map<const SCEV *, IVUsersOfOneStride *>::iterator UI =
@@ -1780,8 +1743,8 @@ LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *const &Stride,
         RewriteExpr = SE->getAddExpr(RewriteExpr, SE->getUnknown(BaseV));
 
       User.RewriteInstructionToUseNewBase(RewriteExpr, NewBasePt,
-                                          Rewriter, L, this, *LI,
-                                          DeadInsts);
+                                          Rewriter, L, this,
+                                          DeadInsts, SE);
 
       // Mark old value we replaced as possibly dead, so that it is eliminated
       // if we just replaced the last use of that value.
@@ -2745,8 +2708,6 @@ bool LoopStrengthReduce::OptimizeLoopCountIV(Loop *L) {
 
 bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager &LPM) {
   IU = &getAnalysis<IVUsers>();
-  LI = &getAnalysis<LoopInfo>();
-  DT = &getAnalysis<DominatorTree>();
   SE = &getAnalysis<ScalarEvolution>();
   Changed = false;
 
@@ -2792,15 +2753,14 @@ bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager &LPM) {
     // After all sharing is done, see if we can adjust the loop to test against
     // zero instead of counting up to a maximum.  This is usually faster.
     OptimizeLoopCountIV(L);
-  }
 
-  // We're done analyzing this loop; release all the state we built up for it.
-  IVsByStride.clear();
-  StrideNoReuse.clear();
+    // We're done analyzing this loop; release all the state we built up for it.
+    IVsByStride.clear();
 
-  // Clean up after ourselves
-  if (!DeadInsts.empty())
-    DeleteTriviallyDeadInstructions();
+    // Clean up after ourselves
+    if (!DeadInsts.empty())
+      DeleteTriviallyDeadInstructions();
+  }
 
   // At this point, it is worth checking to see if any recurrence PHIs are also
   // dead, so that we can remove them as well.
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
index db87874..dbc82e1 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SCCVN.cpp
@@ -154,8 +154,10 @@ template <> struct DenseMapInfo<Expression> {
   static bool isEqual(const Expression &LHS, const Expression &RHS) {
     return LHS == RHS;
   }
-  static bool isPod() { return true; }
 };
+template <>
+struct isPodLike<Expression> { static const bool value = true; };
+
 }
 
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
index 4b686cc..b040a27 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
@@ -102,27 +102,25 @@ namespace {
 
     int isSafeAllocaToScalarRepl(AllocaInst *AI);
 
-    void isSafeForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
-                             uint64_t ArrayOffset, AllocaInfo &Info);
-    void isSafeGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t &Offset,
-                   uint64_t &ArrayOffset, AllocaInfo &Info);
-    void isSafeMemAccess(AllocaInst *AI, uint64_t Offset, uint64_t ArrayOffset,
-                         uint64_t MemSize, const Type *MemOpType, bool isStore,
-                         AllocaInfo &Info);
-    bool TypeHasComponent(const Type *T, uint64_t Offset, uint64_t Size);
-    unsigned FindElementAndOffset(const Type *&T, uint64_t &Offset);
+    void isSafeUseOfAllocation(Instruction *User, AllocaInst *AI,
+                               AllocaInfo &Info);
+    void isSafeElementUse(Value *Ptr, bool isFirstElt, AllocaInst *AI,
+                          AllocaInfo &Info);
+    void isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocaInst *AI,
+                                        unsigned OpNo, AllocaInfo &Info);
+    void isSafeUseOfBitCastedAllocation(BitCastInst *User, AllocaInst *AI,
+                                        AllocaInfo &Info);
     
     void DoScalarReplacement(AllocaInst *AI, 
                              std::vector<AllocaInst*> &WorkList);
     void CleanupGEP(GetElementPtrInst *GEP);
-    void CleanupAllocaUsers(Value *V);
+    void CleanupAllocaUsers(AllocaInst *AI);
     AllocaInst *AddNewAlloca(Function &F, const Type *Ty, AllocaInst *Base);
     
-    void RewriteForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
-                              SmallVector<AllocaInst*, 32> &NewElts);
-    void RewriteGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t Offset,
-                    SmallVector<AllocaInst*, 32> &NewElts);
-    void RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
+    void RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocaInst *AI,
+                                    SmallVector<AllocaInst*, 32> &NewElts);
+    
+    void RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
                                       AllocaInst *AI,
                                       SmallVector<AllocaInst*, 32> &NewElts);
     void RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
@@ -362,12 +360,176 @@ void SROA::DoScalarReplacement(AllocaInst *AI,
     }
   }
 
-  // Now that we have created the new alloca instructions, rewrite all the
-  // uses of the old alloca.
-  RewriteForScalarRepl(AI, AI, 0, ElementAllocas);
+  // Now that we have created the alloca instructions that we want to use,
+  // expand the getelementptr instructions to use them.
+  while (!AI->use_empty()) {
+    Instruction *User = cast<Instruction>(AI->use_back());
+    if (BitCastInst *BCInst = dyn_cast<BitCastInst>(User)) {
+      RewriteBitCastUserOfAlloca(BCInst, AI, ElementAllocas);
+      BCInst->eraseFromParent();
+      continue;
+    }
+    
+    // Replace:
+    //   %res = load { i32, i32 }* %alloc
+    // with:
+    //   %load.0 = load i32* %alloc.0
+    //   %insert.0 insertvalue { i32, i32 } zeroinitializer, i32 %load.0, 0 
+    //   %load.1 = load i32* %alloc.1
+    //   %insert = insertvalue { i32, i32 } %insert.0, i32 %load.1, 1 
+    // (Also works for arrays instead of structs)
+    if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
+      Value *Insert = UndefValue::get(LI->getType());
+      for (unsigned i = 0, e = ElementAllocas.size(); i != e; ++i) {
+        Value *Load = new LoadInst(ElementAllocas[i], "load", LI);
+        Insert = InsertValueInst::Create(Insert, Load, i, "insert", LI);
+      }
+      LI->replaceAllUsesWith(Insert);
+      LI->eraseFromParent();
+      continue;
+    }
+
+    // Replace:
+    //   store { i32, i32 } %val, { i32, i32 }* %alloc
+    // with:
+    //   %val.0 = extractvalue { i32, i32 } %val, 0 
+    //   store i32 %val.0, i32* %alloc.0
+    //   %val.1 = extractvalue { i32, i32 } %val, 1 
+    //   store i32 %val.1, i32* %alloc.1
+    // (Also works for arrays instead of structs)
+    if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
+      Value *Val = SI->getOperand(0);
+      for (unsigned i = 0, e = ElementAllocas.size(); i != e; ++i) {
+        Value *Extract = ExtractValueInst::Create(Val, i, Val->getName(), SI);
+        new StoreInst(Extract, ElementAllocas[i], SI);
+      }
+      SI->eraseFromParent();
+      continue;
+    }
+    
+    GetElementPtrInst *GEPI = cast<GetElementPtrInst>(User);
+    // We now know that the GEP is of the form: GEP <ptr>, 0, <cst>
+    unsigned Idx =
+       (unsigned)cast<ConstantInt>(GEPI->getOperand(2))->getZExtValue();
+
+    assert(Idx < ElementAllocas.size() && "Index out of range?");
+    AllocaInst *AllocaToUse = ElementAllocas[Idx];
+
+    Value *RepValue;
+    if (GEPI->getNumOperands() == 3) {
+      // Do not insert a new getelementptr instruction with zero indices, only
+      // to have it optimized out later.
+      RepValue = AllocaToUse;
+    } else {
+      // We are indexing deeply into the structure, so we still need a
+    // getelementptr instruction to finish the indexing.  This may be
+      // expanded itself once the worklist is rerun.
+      //
+      SmallVector<Value*, 8> NewArgs;
+      NewArgs.push_back(Constant::getNullValue(
+                                           Type::getInt32Ty(AI->getContext())));
+      NewArgs.append(GEPI->op_begin()+3, GEPI->op_end());
+      RepValue = GetElementPtrInst::Create(AllocaToUse, NewArgs.begin(),
+                                           NewArgs.end(), "", GEPI);
+      RepValue->takeName(GEPI);
+    }
+    
+    // If this GEP is to the start of the aggregate, check for memcpys.
+    if (Idx == 0 && GEPI->hasAllZeroIndices())
+      RewriteBitCastUserOfAlloca(GEPI, AI, ElementAllocas);
+
+    // Move all of the users over to the new GEP.
+    GEPI->replaceAllUsesWith(RepValue);
+    // Delete the old GEP
+    GEPI->eraseFromParent();
+  }
+
+  // Finally, delete the Alloca instruction
+  AI->eraseFromParent();
   NumReplaced++;
 }
-    
+
+/// isSafeElementUse - Check to see if this use is an allowed use for a
+/// getelementptr instruction of an array aggregate allocation.  isFirstElt
+/// indicates whether Ptr is known to point to the start of the aggregate.
+void SROA::isSafeElementUse(Value *Ptr, bool isFirstElt, AllocaInst *AI,
+                            AllocaInfo &Info) {
+  for (Value::use_iterator I = Ptr->use_begin(), E = Ptr->use_end();
+       I != E; ++I) {
+    Instruction *User = cast<Instruction>(*I);
+    switch (User->getOpcode()) {
+    case Instruction::Load:  break;
+    case Instruction::Store:
+      // Store is ok if storing INTO the pointer, not storing the pointer
+      if (User->getOperand(0) == Ptr) return MarkUnsafe(Info);
+      break;
+    case Instruction::GetElementPtr: {
+      GetElementPtrInst *GEP = cast<GetElementPtrInst>(User);
+      bool AreAllZeroIndices = isFirstElt;
+      if (GEP->getNumOperands() > 1 &&
+          (!isa<ConstantInt>(GEP->getOperand(1)) ||
+           !cast<ConstantInt>(GEP->getOperand(1))->isZero()))
+        // Using pointer arithmetic to navigate the array.
+        return MarkUnsafe(Info);
+      
+      // Verify that any array subscripts are in range.
+      for (gep_type_iterator GEPIt = gep_type_begin(GEP),
+           E = gep_type_end(GEP); GEPIt != E; ++GEPIt) {
+        // Ignore struct elements, no extra checking needed for these.
+        if (isa<StructType>(*GEPIt))
+          continue;
+
+        // This GEP indexes an array.  Verify that this is an in-range
+        // constant integer. Specifically, consider A[0][i]. We cannot know that
+        // the user isn't doing invalid things like allowing i to index an
+        // out-of-range subscript that accesses A[1].  Because of this, we have
+        // to reject SROA of any accesses into structs where any of the
+        // components are variables. 
+        ConstantInt *IdxVal = dyn_cast<ConstantInt>(GEPIt.getOperand());
+        if (!IdxVal) return MarkUnsafe(Info);
+        
+        // Are all indices still zero?
+        AreAllZeroIndices &= IdxVal->isZero();
+        
+        if (const ArrayType *AT = dyn_cast<ArrayType>(*GEPIt)) {
+          if (IdxVal->getZExtValue() >= AT->getNumElements())
+            return MarkUnsafe(Info);
+        } else if (const VectorType *VT = dyn_cast<VectorType>(*GEPIt)) {
+          if (IdxVal->getZExtValue() >= VT->getNumElements())
+            return MarkUnsafe(Info);
+        }
+      }
+      
+      isSafeElementUse(GEP, AreAllZeroIndices, AI, Info);
+      if (Info.isUnsafe) return;
+      break;
+    }
+    case Instruction::BitCast:
+      if (isFirstElt) {
+        isSafeUseOfBitCastedAllocation(cast<BitCastInst>(User), AI, Info);
+        if (Info.isUnsafe) return;
+        break;
+      }
+      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
+      return MarkUnsafe(Info);
+    case Instruction::Call:
+      if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(User)) {
+        if (isFirstElt) {
+          isSafeMemIntrinsicOnAllocation(MI, AI, I.getOperandNo(), Info);
+          if (Info.isUnsafe) return;
+          break;
+        }
+      }
+      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
+      return MarkUnsafe(Info);
+    default:
+      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
+      return MarkUnsafe(Info);
+    }
+  }
+  return;  // All users look ok :)
+}
+
 /// AllUsersAreLoads - Return true if all users of this value are loads.
 static bool AllUsersAreLoads(Value *Ptr) {
   for (Value::use_iterator I = Ptr->use_begin(), E = Ptr->use_end();
@@ -377,116 +539,72 @@ static bool AllUsersAreLoads(Value *Ptr) {
   return true;
 }
 
-/// isSafeForScalarRepl - Check if instruction I is a safe use with regard to
-/// performing scalar replacement of alloca AI.  The results are flagged in
-/// the Info parameter.  Offset and ArrayOffset indicate the position within
-/// AI that is referenced by this instruction.
-void SROA::isSafeForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
-                               uint64_t ArrayOffset, AllocaInfo &Info) {
-  for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI!=E; ++UI) {
-    Instruction *User = cast<Instruction>(*UI);
-
-    if (BitCastInst *BC = dyn_cast<BitCastInst>(User)) {
-      isSafeForScalarRepl(BC, AI, Offset, ArrayOffset, Info);
-    } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(User)) {
-      uint64_t GEPArrayOffset = ArrayOffset;
-      uint64_t GEPOffset = Offset;
-      isSafeGEP(GEPI, AI, GEPOffset, GEPArrayOffset, Info);
-      if (!Info.isUnsafe)
-        isSafeForScalarRepl(GEPI, AI, GEPOffset, GEPArrayOffset, Info);
-    } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(UI)) {
-      ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
-      if (Length)
-        isSafeMemAccess(AI, Offset, ArrayOffset, Length->getZExtValue(), 0,
-                        UI.getOperandNo() == 1, Info);
-      else
-        MarkUnsafe(Info);
-    } else if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
-      if (!LI->isVolatile()) {
-        const Type *LIType = LI->getType();
-        isSafeMemAccess(AI, Offset, ArrayOffset, TD->getTypeAllocSize(LIType),
-                        LIType, false, Info);
-      } else
-        MarkUnsafe(Info);
-    } else if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
-      // Store is ok if storing INTO the pointer, not storing the pointer
-      if (!SI->isVolatile() && SI->getOperand(0) != I) {
-        const Type *SIType = SI->getOperand(0)->getType();
-        isSafeMemAccess(AI, Offset, ArrayOffset, TD->getTypeAllocSize(SIType),
-                        SIType, true, Info);
-      } else
-        MarkUnsafe(Info);
-    } else if (isa<DbgInfoIntrinsic>(UI)) {
-      // If one user is DbgInfoIntrinsic then check if all users are
-      // DbgInfoIntrinsics.
-      if (OnlyUsedByDbgInfoIntrinsics(I)) {
-        Info.needsCleanup = true;
-        return;
-      }
-      MarkUnsafe(Info);
-    } else {
-      DEBUG(errs() << "  Transformation preventing inst: " << *User << '\n');
-      MarkUnsafe(Info);
-    }
-    if (Info.isUnsafe) return;
-  }
-}
+/// isSafeUseOfAllocation - Check if this user is an allowed use for an
+/// aggregate allocation.
+void SROA::isSafeUseOfAllocation(Instruction *User, AllocaInst *AI,
+                                 AllocaInfo &Info) {
+  if (BitCastInst *C = dyn_cast<BitCastInst>(User))
+    return isSafeUseOfBitCastedAllocation(C, AI, Info);
+
+  if (LoadInst *LI = dyn_cast<LoadInst>(User))
+    if (!LI->isVolatile())
+      return;// Loads (returning a first class aggregate) are always rewritable
+
+  if (StoreInst *SI = dyn_cast<StoreInst>(User))
+    if (!SI->isVolatile() && SI->getOperand(0) != AI)
+      return;// Store is ok if storing INTO the pointer, not storing the pointer
+ 
+  GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(User);
+  if (GEPI == 0)
+    return MarkUnsafe(Info);
 
-/// isSafeGEP - Check if a GEP instruction can be handled for scalar
-/// replacement.  It is safe when all the indices are constant, in-bounds
-/// references, and when the resulting offset corresponds to an element within
-/// the alloca type.  The results are flagged in the Info parameter.  Upon
-/// return, Offset is adjusted as specified by the GEP indices.  For the
-/// special case of a variable index to a 2-element array, ArrayOffset is set
-/// to the array element size.
-void SROA::isSafeGEP(GetElementPtrInst *GEPI, AllocaInst *AI,
-                     uint64_t &Offset, uint64_t &ArrayOffset,
-                     AllocaInfo &Info) {
-  gep_type_iterator GEPIt = gep_type_begin(GEPI), E = gep_type_end(GEPI);
-  if (GEPIt == E)
-    return;
+  gep_type_iterator I = gep_type_begin(GEPI), E = gep_type_end(GEPI);
 
-  // The first GEP index must be zero.
-  if (!isa<ConstantInt>(GEPIt.getOperand()) ||
-      !cast<ConstantInt>(GEPIt.getOperand())->isZero())
+  // The GEP is not safe to transform if not of the form "GEP <ptr>, 0, <cst>".
+  if (I == E ||
+      I.getOperand() != Constant::getNullValue(I.getOperand()->getType())) {
     return MarkUnsafe(Info);
-  if (++GEPIt == E)
-    return;
+  }
+
+  ++I;
+  if (I == E) return MarkUnsafe(Info);  // ran out of GEP indices??
 
+  bool IsAllZeroIndices = true;
+  
   // If the first index is a non-constant index into an array, see if we can
   // handle it as a special case.
-  const Type *ArrayEltTy = 0;
-  if (ArrayOffset == 0 && Offset == 0) {
-    if (const ArrayType *AT = dyn_cast<ArrayType>(*GEPIt)) {
-      if (!isa<ConstantInt>(GEPIt.getOperand())) {
-        uint64_t NumElements = AT->getNumElements();
-
-        // If this is an array index and the index is not constant, we cannot
-        // promote... that is unless the array has exactly one or two elements
-        // in it, in which case we CAN promote it, but we have to canonicalize
-        // this out if this is the only problem.
-        if ((NumElements != 1 && NumElements != 2) || !AllUsersAreLoads(GEPI))
-          return MarkUnsafe(Info);
+  if (const ArrayType *AT = dyn_cast<ArrayType>(*I)) {
+    if (!isa<ConstantInt>(I.getOperand())) {
+      IsAllZeroIndices = false;
+      uint64_t NumElements = AT->getNumElements();
+      
+      // If this is an array index and the index is not constant, we cannot
+      // promote... that is unless the array has exactly one or two elements in
+      // it, in which case we CAN promote it, but we have to canonicalize this
+      // out if this is the only problem.
+      if ((NumElements == 1 || NumElements == 2) &&
+          AllUsersAreLoads(GEPI)) {
         Info.needsCleanup = true;
-        ArrayOffset = TD->getTypeAllocSizeInBits(AT->getElementType());
-        ArrayEltTy = AT->getElementType();
-        ++GEPIt;
+        return;  // Canonicalization required!
       }
+      return MarkUnsafe(Info);
     }
   }
-
+ 
   // Walk through the GEP type indices, checking the types that this indexes
   // into.
-  for (; GEPIt != E; ++GEPIt) {
+  for (; I != E; ++I) {
     // Ignore struct elements, no extra checking needed for these.
-    if (isa<StructType>(*GEPIt))
+    if (isa<StructType>(*I))
       continue;
+    
+    ConstantInt *IdxVal = dyn_cast<ConstantInt>(I.getOperand());
+    if (!IdxVal) return MarkUnsafe(Info);
 
-    ConstantInt *IdxVal = dyn_cast<ConstantInt>(GEPIt.getOperand());
-    if (!IdxVal)
-      return MarkUnsafe(Info);
-
-    if (const ArrayType *AT = dyn_cast<ArrayType>(*GEPIt)) {
+    // Are all indices still zero?
+    IsAllZeroIndices &= IdxVal->isZero();
+    
+    if (const ArrayType *AT = dyn_cast<ArrayType>(*I)) {
       // This GEP indexes an array.  Verify that this is an in-range constant
       // integer. Specifically, consider A[0][i]. We cannot know that the user
       // isn't doing invalid things like allowing i to index an out-of-range
@@ -494,254 +612,144 @@ void SROA::isSafeGEP(GetElementPtrInst *GEPI, AllocaInst *AI,
       // of any accesses into structs where any of the components are variables.
       if (IdxVal->getZExtValue() >= AT->getNumElements())
         return MarkUnsafe(Info);
-    } else {
-      const VectorType *VT = dyn_cast<VectorType>(*GEPIt);
-      assert(VT && "unexpected type in GEP type iterator");
+    } else if (const VectorType *VT = dyn_cast<VectorType>(*I)) {
       if (IdxVal->getZExtValue() >= VT->getNumElements())
         return MarkUnsafe(Info);
     }
   }
-
-  // All the indices are safe.  Now compute the offset due to this GEP and
-  // check if the alloca has a component element at that offset.
-  if (ArrayOffset == 0) {
-    SmallVector<Value*, 8> Indices(GEPI->op_begin() + 1, GEPI->op_end());
-    Offset += TD->getIndexedOffset(GEPI->getPointerOperandType(),
-                                   &Indices[0], Indices.size());
-  } else {
-    // Both array elements have the same type, so it suffices to check one of
-    // them.  Copy the GEP indices starting from the array index, but replace
-    // that variable index with a constant zero.
-    SmallVector<Value*, 8> Indices(GEPI->op_begin() + 2, GEPI->op_end());
-    Indices[0] = Constant::getNullValue(Type::getInt32Ty(GEPI->getContext()));
-    const Type *ArrayEltPtr = PointerType::getUnqual(ArrayEltTy);
-    Offset += TD->getIndexedOffset(ArrayEltPtr, &Indices[0], Indices.size());
-  }
-  if (!TypeHasComponent(AI->getAllocatedType(), Offset, 0))
-    MarkUnsafe(Info);
-}
-
-/// isSafeMemAccess - Check if a load/store/memcpy operates on the entire AI
-/// alloca or has an offset and size that corresponds to a component element
-/// within it.  The offset checked here may have been formed from a GEP with a
-/// pointer bitcasted to a different type.
-void SROA::isSafeMemAccess(AllocaInst *AI, uint64_t Offset,
-                           uint64_t ArrayOffset, uint64_t MemSize,
-                           const Type *MemOpType, bool isStore,
-                           AllocaInfo &Info) {
-  // Check if this is a load/store of the entire alloca.
-  if (Offset == 0 && ArrayOffset == 0 &&
-      MemSize == TD->getTypeAllocSize(AI->getAllocatedType())) {
-    bool UsesAggregateType = (MemOpType == AI->getAllocatedType());
-    // This is safe for MemIntrinsics (where MemOpType is 0), integer types
-    // (which are essentially the same as the MemIntrinsics, especially with
-    // regard to copying padding between elements), or references using the
-    // aggregate type of the alloca.
-    if (!MemOpType || isa<IntegerType>(MemOpType) || UsesAggregateType) {
-      if (!UsesAggregateType) {
-        if (isStore)
-          Info.isMemCpyDst = true;
-        else
-          Info.isMemCpySrc = true;
-      }
-      return;
-    }
-  }
-  // Check if the offset/size correspond to a component within the alloca type.
-  const Type *T = AI->getAllocatedType();
-  if (TypeHasComponent(T, Offset, MemSize) &&
-      (ArrayOffset == 0 || TypeHasComponent(T, Offset + ArrayOffset, MemSize)))
-    return;
-
-  return MarkUnsafe(Info);
+  
+  // If there are any non-simple uses of this getelementptr, make sure to reject
+  // them.
+  return isSafeElementUse(GEPI, IsAllZeroIndices, AI, Info);
 }
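A hedged Python sketch of the index walk in `isSafeUseOfAllocation` above: struct indices pass through, while array/vector indices must be constant and in range. The type model (`("struct", n)`, `("array", n)`, `("vector", n)`) and the use of `None` for a non-constant index are illustrative stand-ins, not LLVM's API.

```python
def indices_are_safe(levels, indices):
    """levels[i] is the aggregate type indexed by indices[i]; an index of
    None models a non-constant GEP operand."""
    for (kind, num_elems), idx in zip(levels, indices):
        if kind == "struct":
            continue  # struct field indices are always constant, verified by the IR
        # array/vector: the index must be a known constant within bounds
        if idx is None or idx >= num_elems:
            return False
    return True
```

For example, `indices_are_safe([("struct", 2), ("array", 4)], [1, 3])` is safe, while a non-constant or out-of-range array index marks the alloca unsafe.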
 
-/// TypeHasComponent - Return true if T has a component type with the
-/// specified offset and size.  If Size is zero, do not check the size.
-bool SROA::TypeHasComponent(const Type *T, uint64_t Offset, uint64_t Size) {
-  const Type *EltTy;
-  uint64_t EltSize;
-  if (const StructType *ST = dyn_cast<StructType>(T)) {
-    const StructLayout *Layout = TD->getStructLayout(ST);
-    unsigned EltIdx = Layout->getElementContainingOffset(Offset);
-    EltTy = ST->getContainedType(EltIdx);
-    EltSize = TD->getTypeAllocSize(EltTy);
-    Offset -= Layout->getElementOffset(EltIdx);
-  } else if (const ArrayType *AT = dyn_cast<ArrayType>(T)) {
-    EltTy = AT->getElementType();
-    EltSize = TD->getTypeAllocSize(EltTy);
-    Offset %= EltSize;
-  } else {
-    return false;
+/// isSafeMemIntrinsicOnAllocation - Check if the specified memory
+/// intrinsic can be promoted by SROA.  At this point, we know that the operand
+/// of the memintrinsic is a pointer to the beginning of the allocation.
+void SROA::isSafeMemIntrinsicOnAllocation(MemIntrinsic *MI, AllocaInst *AI,
+                                          unsigned OpNo, AllocaInfo &Info) {
+  // If not constant length, give up.
+  ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
+  if (!Length) return MarkUnsafe(Info);
+  
+  // If not the whole aggregate, give up.
+  if (Length->getZExtValue() !=
+      TD->getTypeAllocSize(AI->getType()->getElementType()))
+    return MarkUnsafe(Info);
+  
+  // We only know about memcpy/memset/memmove.
+  if (!isa<MemIntrinsic>(MI))
+    return MarkUnsafe(Info);
+  
+  // Otherwise, we can transform it.  Determine whether this is a memcpy/set
+  // into or out of the aggregate.
+  if (OpNo == 1)
+    Info.isMemCpyDst = true;
+  else {
+    assert(OpNo == 2);
+    Info.isMemCpySrc = true;
   }
-  if (Offset == 0 && (Size == 0 || EltSize == Size))
-    return true;
-  // Check if the component spans multiple elements.
-  if (Offset + Size > EltSize)
-    return false;
-  return TypeHasComponent(EltTy, Offset, Size);
 }
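The `TypeHasComponent` check being removed above can be modeled in Python with a toy type layout. Everything here — the tuple encoding, `alloc_size`, the no-tail-padding assumption — is illustrative, not LLVM's `TargetData` API.

```python
def alloc_size(ty):
    """Allocation size in bytes for the toy type model:
    ("int", n) | ("array", elem, count) | ("struct", [(offset, field_ty), ...])."""
    if ty[0] == "int":
        return ty[1]
    if ty[0] == "array":
        return alloc_size(ty[1]) * ty[2]
    last_off, last_ty = ty[1][-1]          # struct: no tail padding assumed
    return last_off + alloc_size(last_ty)

def type_has_component(ty, offset, size):
    """Does ty contain a component element at `offset` of `size` bytes?
    (size == 0 skips the size check, as in the C++ above.)"""
    if ty[0] == "struct":
        # step into the field containing `offset` (getElementContainingOffset)
        elt_off, elt_ty = max((f for f in ty[1] if f[0] <= offset),
                              key=lambda f: f[0])
        offset -= elt_off
    elif ty[0] == "array":
        elt_ty = ty[1]
        offset %= alloc_size(elt_ty)
    else:
        return False                       # scalar leaves have no components
    elt_size = alloc_size(elt_ty)
    if offset == 0 and (size == 0 or elt_size == size):
        return True
    if offset + size > elt_size:
        return False                       # the access spans multiple elements
    return type_has_component(elt_ty, offset, size)
```

For a `{i32, i32}` struct, an aligned 4-byte access at offset 4 hits a component, while an 8-byte access at offset 0 spans both fields and is rejected.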
 
-/// RewriteForScalarRepl - Alloca AI is being split into NewElts, so rewrite
-/// the instruction I, which references it, to use the separate elements.
-/// Offset indicates the position within AI that is referenced by this
-/// instruction.
-void SROA::RewriteForScalarRepl(Instruction *I, AllocaInst *AI, uint64_t Offset,
-                                SmallVector<AllocaInst*, 32> &NewElts) {
-  for (Value::use_iterator UI = I->use_begin(), E = I->use_end(); UI != E; ) {
-    Instruction *User = cast<Instruction>(*UI++);
+/// isSafeUseOfBitCastedAllocation - Check if all users of this bitcast
+/// from an alloca are safe for SROA of that alloca.
+void SROA::isSafeUseOfBitCastedAllocation(BitCastInst *BC, AllocaInst *AI,
+                                          AllocaInfo &Info) {
+  for (Value::use_iterator UI = BC->use_begin(), E = BC->use_end();
+       UI != E; ++UI) {
+    if (BitCastInst *BCU = dyn_cast<BitCastInst>(UI)) {
+      isSafeUseOfBitCastedAllocation(BCU, AI, Info);
+    } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(UI)) {
+      isSafeMemIntrinsicOnAllocation(MI, AI, UI.getOperandNo(), Info);
+    } else if (StoreInst *SI = dyn_cast<StoreInst>(UI)) {
+      if (SI->isVolatile())
+        return MarkUnsafe(Info);
+      
+      // If storing the entire alloca in one chunk through a bitcasted pointer
+      // to integer, we can transform it.  This happens (for example) when you
+      // cast a {i32,i32}* to i64* and store through it.  This is similar to the
+      // memcpy case and occurs in various "byval" cases and emulated memcpys.
+      if (isa<IntegerType>(SI->getOperand(0)->getType()) &&
+          TD->getTypeAllocSize(SI->getOperand(0)->getType()) ==
+          TD->getTypeAllocSize(AI->getType()->getElementType())) {
+        Info.isMemCpyDst = true;
+        continue;
+      }
+      return MarkUnsafe(Info);
+    } else if (LoadInst *LI = dyn_cast<LoadInst>(UI)) {
+      if (LI->isVolatile())
+        return MarkUnsafe(Info);
 
-    if (BitCastInst *BC = dyn_cast<BitCastInst>(User)) {
-      if (BC->getOperand(0) == AI)
-        BC->setOperand(0, NewElts[0]);
-      // If the bitcast type now matches the operand type, it will be removed
-      // after processing its uses.
-      RewriteForScalarRepl(BC, AI, Offset, NewElts);
-    } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(User)) {
-      RewriteGEP(GEPI, AI, Offset, NewElts);
-    } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(User)) {
-      ConstantInt *Length = dyn_cast<ConstantInt>(MI->getLength());
-      uint64_t MemSize = Length->getZExtValue();
-      if (Offset == 0 &&
-          MemSize == TD->getTypeAllocSize(AI->getAllocatedType()))
-        RewriteMemIntrinUserOfAlloca(MI, I, AI, NewElts);
-    } else if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
-      const Type *LIType = LI->getType();
-      if (LIType == AI->getAllocatedType()) {
-        // Replace:
-        //   %res = load { i32, i32 }* %alloc
-        // with:
-        //   %load.0 = load i32* %alloc.0
-        //   %insert.0 insertvalue { i32, i32 } zeroinitializer, i32 %load.0, 0
-        //   %load.1 = load i32* %alloc.1
-        //   %insert = insertvalue { i32, i32 } %insert.0, i32 %load.1, 1
-        // (Also works for arrays instead of structs)
-        Value *Insert = UndefValue::get(LIType);
-        for (unsigned i = 0, e = NewElts.size(); i != e; ++i) {
-          Value *Load = new LoadInst(NewElts[i], "load", LI);
-          Insert = InsertValueInst::Create(Insert, Load, i, "insert", LI);
-        }
-        LI->replaceAllUsesWith(Insert);
-        LI->eraseFromParent();
-      } else if (isa<IntegerType>(LIType) &&
-                 TD->getTypeAllocSize(LIType) ==
-                 TD->getTypeAllocSize(AI->getAllocatedType())) {
-        // If this is a load of the entire alloca to an integer, rewrite it.
-        RewriteLoadUserOfWholeAlloca(LI, AI, NewElts);
+      // If loading the entire alloca in one chunk through a bitcasted pointer
+      // to integer, we can transform it.  This happens (for example) when you
+      // cast a {i32,i32}* to i64* and load through it.  This is similar to the
+      // memcpy case and occurs in various "byval" cases and emulated memcpys.
+      if (isa<IntegerType>(LI->getType()) &&
+          TD->getTypeAllocSize(LI->getType()) ==
+          TD->getTypeAllocSize(AI->getType()->getElementType())) {
+        Info.isMemCpySrc = true;
+        continue;
       }
-    } else if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
-      Value *Val = SI->getOperand(0);
-      const Type *SIType = Val->getType();
-      if (SIType == AI->getAllocatedType()) {
-        // Replace:
-        //   store { i32, i32 } %val, { i32, i32 }* %alloc
-        // with:
-        //   %val.0 = extractvalue { i32, i32 } %val, 0
-        //   store i32 %val.0, i32* %alloc.0
-        //   %val.1 = extractvalue { i32, i32 } %val, 1
-        //   store i32 %val.1, i32* %alloc.1
-        // (Also works for arrays instead of structs)
-        for (unsigned i = 0, e = NewElts.size(); i != e; ++i) {
-          Value *Extract = ExtractValueInst::Create(Val, i, Val->getName(), SI);
-          new StoreInst(Extract, NewElts[i], SI);
-        }
-        SI->eraseFromParent();
-      } else if (isa<IntegerType>(SIType) &&
-                 TD->getTypeAllocSize(SIType) ==
-                 TD->getTypeAllocSize(AI->getAllocatedType())) {
-        // If this is a store of the entire alloca from an integer, rewrite it.
-        RewriteStoreUserOfWholeAlloca(SI, AI, NewElts);
+      return MarkUnsafe(Info);
+    } else if (isa<DbgInfoIntrinsic>(UI)) {
+      // If one user is DbgInfoIntrinsic then check if all users are
+      // DbgInfoIntrinsics.
+      if (OnlyUsedByDbgInfoIntrinsics(BC)) {
+        Info.needsCleanup = true;
+        return;
       }
+      else
+        MarkUnsafe(Info);
     }
-  }
-  // Delete unused instructions and identity bitcasts.
-  if (I->use_empty())
-    I->eraseFromParent();
-  else if (BitCastInst *BC = dyn_cast<BitCastInst>(I)) {
-    if (BC->getDestTy() == BC->getSrcTy()) {
-      BC->replaceAllUsesWith(BC->getOperand(0));
-      BC->eraseFromParent();
+    else {
+      return MarkUnsafe(Info);
     }
+    if (Info.isUnsafe) return;
   }
 }
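The whole-aggregate integer store the code above treats like a memcpy — casting `{i32,i32}*` to `i64*` and storing through it — can be modeled with Python's `struct` module; the `<` prefix pins little-endian layout so the field values below are deterministic:

```python
import struct

v = (42 << 32) | 7                # the i64 value stored through the bitcast
raw = struct.pack("<Q", v)        # the whole-aggregate store, byte for byte
a, b = struct.unpack("<ii", raw)  # the two i32 elements SROA would split out
assert (a, b) == (7, 42)          # low word lands first under little-endian layout
```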
 
-/// FindElementAndOffset - Return the index of the element containing Offset
-/// within the specified type, which must be either a struct or an array.
-/// Sets T to the type of the element and Offset to the offset within that
-/// element.
-unsigned SROA::FindElementAndOffset(const Type *&T, uint64_t &Offset) {
-  unsigned Idx = 0;
-  if (const StructType *ST = dyn_cast<StructType>(T)) {
-    const StructLayout *Layout = TD->getStructLayout(ST);
-    Idx = Layout->getElementContainingOffset(Offset);
-    T = ST->getContainedType(Idx);
-    Offset -= Layout->getElementOffset(Idx);
-  } else {
-    const ArrayType *AT = dyn_cast<ArrayType>(T);
-    assert(AT && "unexpected type for scalar replacement");
-    T = AT->getElementType();
-    uint64_t EltSize = TD->getTypeAllocSize(T);
-    Idx = (unsigned)(Offset / EltSize);
-    Offset -= Idx * EltSize;
-  }
-  return Idx;
-}
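A minimal sketch of the `FindElementAndOffset` helper being removed above, under an illustrative type model: a struct is a sorted list of field byte offsets, an array is its element allocation size.

```python
import bisect

def find_element_and_offset(ty, offset):
    """Return (element index, remaining offset inside that element)."""
    kind, info = ty
    if kind == "struct":
        # last field whose offset is <= offset, like getElementContainingOffset
        idx = bisect.bisect_right(info, offset) - 1
        return idx, offset - info[idx]
    # array: info is the element allocation size
    return offset // info, offset % info
```

For instance, `find_element_and_offset(("struct", [0, 4, 12]), 6)` yields `(1, 2)`: offset 6 lands 2 bytes into the field at offset 4.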
+/// RewriteBitCastUserOfAlloca - BCInst (transitively) bitcasts AI, or indexes
+/// to its first element.  Transform users of the cast to use the new values
+/// instead.
+void SROA::RewriteBitCastUserOfAlloca(Instruction *BCInst, AllocaInst *AI,
+                                      SmallVector<AllocaInst*, 32> &NewElts) {
+  Value::use_iterator UI = BCInst->use_begin(), UE = BCInst->use_end();
+  while (UI != UE) {
+    Instruction *User = cast<Instruction>(*UI++);
+    if (BitCastInst *BCU = dyn_cast<BitCastInst>(User)) {
+      RewriteBitCastUserOfAlloca(BCU, AI, NewElts);
+      if (BCU->use_empty()) BCU->eraseFromParent();
+      continue;
+    }
 
-/// RewriteGEP - Check if this GEP instruction moves the pointer across
-/// elements of the alloca that are being split apart, and if so, rewrite
-/// the GEP to be relative to the new element.
-void SROA::RewriteGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t Offset,
-                      SmallVector<AllocaInst*, 32> &NewElts) {
-  Instruction *Val = GEPI;
-
-  uint64_t OldOffset = Offset;
-  SmallVector<Value*, 8> Indices(GEPI->op_begin() + 1, GEPI->op_end());
-  Offset += TD->getIndexedOffset(GEPI->getPointerOperandType(),
-                                 &Indices[0], Indices.size());
-
-  const Type *T = AI->getAllocatedType();
-  unsigned OldIdx = FindElementAndOffset(T, OldOffset);
-  if (GEPI->getOperand(0) == AI)
-    OldIdx = ~0U; // Force the GEP to be rewritten.
-
-  T = AI->getAllocatedType();
-  uint64_t EltOffset = Offset;
-  unsigned Idx = FindElementAndOffset(T, EltOffset);
-
-  // If this GEP moves the pointer across elements of the alloca that are
-  // being split, then it needs to be rewritten.
-  if (Idx != OldIdx) {
-    const Type *i32Ty = Type::getInt32Ty(AI->getContext());
-    SmallVector<Value*, 8> NewArgs;
-    NewArgs.push_back(Constant::getNullValue(i32Ty));
-    while (EltOffset != 0) {
-      unsigned EltIdx = FindElementAndOffset(T, EltOffset);
-      NewArgs.push_back(ConstantInt::get(i32Ty, EltIdx));
+    if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(User)) {
+      // This must be memcpy/memmove/memset of the entire aggregate.
+      // Split into one per element.
+      RewriteMemIntrinUserOfAlloca(MI, BCInst, AI, NewElts);
+      continue;
     }
-    if (NewArgs.size() > 1) {
-      Val = GetElementPtrInst::CreateInBounds(NewElts[Idx], NewArgs.begin(),
-                                              NewArgs.end(), "", GEPI);
-      Val->takeName(GEPI);
-      if (Val->getType() != GEPI->getType())
-        Val = new BitCastInst(Val, GEPI->getType(), Val->getNameStr(), GEPI);
-    } else {
-      Val = NewElts[Idx];
-      // Insert a new bitcast.  If the types match, it will be removed after
-      // handling all of its uses.
-      Val = new BitCastInst(Val, GEPI->getType(), Val->getNameStr(), GEPI);
-      Val->takeName(GEPI);
+      
+    if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
+      // If this is a store of the entire alloca from an integer, rewrite it.
+      RewriteStoreUserOfWholeAlloca(SI, AI, NewElts);
+      continue;
     }
 
-    GEPI->replaceAllUsesWith(Val);
-    GEPI->eraseFromParent();
+    if (LoadInst *LI = dyn_cast<LoadInst>(User)) {
+      // If this is a load of the entire alloca to an integer, rewrite it.
+      RewriteLoadUserOfWholeAlloca(LI, AI, NewElts);
+      continue;
+    }
+    
+    // Otherwise it must be some other user of a gep of the first pointer.  Just
+    // leave these alone.
+    continue;
   }
-
-  RewriteForScalarRepl(Val, AI, Offset, NewElts);
 }
 
 /// RewriteMemIntrinUserOfAlloca - MI is a memcpy/memset/memmove from or to AI.
 /// Rewrite it to copy or set the elements of the scalarized memory.
-void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
+void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *BCInst,
                                         AllocaInst *AI,
                                         SmallVector<AllocaInst*, 32> &NewElts) {
   
@@ -753,10 +761,10 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
   LLVMContext &Context = MI->getContext();
   unsigned MemAlignment = MI->getAlignment();
   if (MemTransferInst *MTI = dyn_cast<MemTransferInst>(MI)) { // memmove/memcopy
-    if (Inst == MTI->getRawDest())
+    if (BCInst == MTI->getRawDest())
       OtherPtr = MTI->getRawSource();
     else {
-      assert(Inst == MTI->getRawSource());
+      assert(BCInst == MTI->getRawSource());
       OtherPtr = MTI->getRawDest();
     }
   }
@@ -790,7 +798,7 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
   // Process each element of the aggregate.
   Value *TheFn = MI->getOperand(0);
   const Type *BytePtrTy = MI->getRawDest()->getType();
-  bool SROADest = MI->getRawDest() == Inst;
+  bool SROADest = MI->getRawDest() == BCInst;
   
   Constant *Zero = Constant::getNullValue(Type::getInt32Ty(MI->getContext()));
 
@@ -802,9 +810,9 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
     if (OtherPtr) {
       Value *Idx[2] = { Zero,
                       ConstantInt::get(Type::getInt32Ty(MI->getContext()), i) };
-      OtherElt = GetElementPtrInst::CreateInBounds(OtherPtr, Idx, Idx + 2,
+      OtherElt = GetElementPtrInst::Create(OtherPtr, Idx, Idx + 2,
                                            OtherPtr->getNameStr()+"."+Twine(i),
-                                                   MI);
+                                           MI);
       uint64_t EltOffset;
       const PointerType *OtherPtrTy = cast<PointerType>(OtherPtr->getType());
       if (const StructType *ST =
@@ -929,9 +937,15 @@ void SROA::RewriteStoreUserOfWholeAlloca(StoreInst *SI, AllocaInst *AI,
   // Extract each element out of the integer according to its structure offset
   // and store the element value to the individual alloca.
   Value *SrcVal = SI->getOperand(0);
-  const Type *AllocaEltTy = AI->getAllocatedType();
+  const Type *AllocaEltTy = AI->getType()->getElementType();
   uint64_t AllocaSizeBits = TD->getTypeAllocSizeInBits(AllocaEltTy);
   
+  // If this isn't a store of an integer to the whole alloca, it may be a store
+  // to the first element.  Just ignore the store in this case and normal SROA
+  // will handle it.
+  if (!isa<IntegerType>(SrcVal->getType()) ||
+      TD->getTypeAllocSizeInBits(SrcVal->getType()) != AllocaSizeBits)
+    return;
   // Handle tail padding by extending the operand
   if (TD->getTypeSizeInBits(SrcVal->getType()) != AllocaSizeBits)
     SrcVal = new ZExtInst(SrcVal,
@@ -1045,9 +1059,16 @@ void SROA::RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocaInst *AI,
                                         SmallVector<AllocaInst*, 32> &NewElts) {
   // Extract each element out of the NewElts according to its structure offset
   // and form the result value.
-  const Type *AllocaEltTy = AI->getAllocatedType();
+  const Type *AllocaEltTy = AI->getType()->getElementType();
   uint64_t AllocaSizeBits = TD->getTypeAllocSizeInBits(AllocaEltTy);
   
+  // If this isn't a load of the whole alloca to an integer, it may be a load
+  // of the first element.  Just ignore the load in this case and normal SROA
+  // will handle it.
+  if (!isa<IntegerType>(LI->getType()) ||
+      TD->getTypeAllocSizeInBits(LI->getType()) != AllocaSizeBits)
+    return;
+  
   DEBUG(errs() << "PROMOTING LOAD OF WHOLE ALLOCA: " << *AI << '\n' << *LI
                << '\n');
   
@@ -1121,6 +1142,7 @@ void SROA::RewriteLoadUserOfWholeAlloca(LoadInst *LI, AllocaInst *AI,
   LI->eraseFromParent();
 }
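What `RewriteLoadUserOfWholeAlloca` builds can be sketched as reassembling the wide integer from the per-element allocas by shifting each element to its bit offset; the low-element-first ordering here is an assumption for illustration.

```python
def rebuild_integer(elements, bit_widths):
    """Combine per-element values into one integer, lowest element first."""
    result, shift = 0, 0
    for value, width in zip(elements, bit_widths):
        result |= (value & ((1 << width) - 1)) << shift
        shift += width
    return result
```

So two i32 elements holding 7 and 42 recombine into the same i64 the original whole-alloca load would have produced.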
 
+
 /// HasPadding - Return true if the specified type has any structure or
 /// alignment padding, false otherwise.
 static bool HasPadding(const Type *Ty, const TargetData &TD) {
@@ -1170,10 +1192,14 @@ int SROA::isSafeAllocaToScalarRepl(AllocaInst *AI) {
   // the users are safe to transform.
   AllocaInfo Info;
   
-  isSafeForScalarRepl(AI, AI, 0, 0, Info);
-  if (Info.isUnsafe) {
-    DEBUG(errs() << "Cannot transform: " << *AI << '\n');
-    return 0;
+  for (Value::use_iterator I = AI->use_begin(), E = AI->use_end();
+       I != E; ++I) {
+    isSafeUseOfAllocation(cast<Instruction>(*I), AI, Info);
+    if (Info.isUnsafe) {
+      DEBUG(errs() << "Cannot transform: " << *AI << "\n  due to user: "
+                   << **I << '\n');
+      return 0;
+    }
   }
   
   // Okay, we know all the users are promotable.  If the aggregate is a memcpy
@@ -1182,7 +1208,7 @@ int SROA::isSafeAllocaToScalarRepl(AllocaInst *AI) {
   // types, but may actually be used.  In these cases, we refuse to promote the
   // struct.
   if (Info.isMemCpySrc && Info.isMemCpyDst &&
-      HasPadding(AI->getAllocatedType(), *TD))
+      HasPadding(AI->getType()->getElementType(), *TD))
     return 0;
 
   // If we require cleanup, return 1, otherwise return 3.
@@ -1219,15 +1245,15 @@ void SROA::CleanupGEP(GetElementPtrInst *GEPI) {
   // Insert the new GEP instructions, which are properly indexed.
   SmallVector<Value*, 8> Indices(GEPI->op_begin()+1, GEPI->op_end());
   Indices[1] = Constant::getNullValue(Type::getInt32Ty(GEPI->getContext()));
-  Value *ZeroIdx = GetElementPtrInst::CreateInBounds(GEPI->getOperand(0),
-                                                     Indices.begin(),
-                                                     Indices.end(),
-                                                     GEPI->getName()+".0",GEPI);
+  Value *ZeroIdx = GetElementPtrInst::Create(GEPI->getOperand(0),
+                                             Indices.begin(),
+                                             Indices.end(),
+                                             GEPI->getName()+".0", GEPI);
   Indices[1] = ConstantInt::get(Type::getInt32Ty(GEPI->getContext()), 1);
-  Value *OneIdx = GetElementPtrInst::CreateInBounds(GEPI->getOperand(0),
-                                                    Indices.begin(),
-                                                    Indices.end(),
-                                                    GEPI->getName()+".1", GEPI);
+  Value *OneIdx = GetElementPtrInst::Create(GEPI->getOperand(0),
+                                            Indices.begin(),
+                                            Indices.end(),
+                                            GEPI->getName()+".1", GEPI);
   // Replace all loads of the variable index GEP with loads from both
   // indexes and a select.
   while (!GEPI->use_empty()) {
@@ -1238,24 +1264,22 @@ void SROA::CleanupGEP(GetElementPtrInst *GEPI) {
     LI->replaceAllUsesWith(R);
     LI->eraseFromParent();
   }
+  GEPI->eraseFromParent();
 }
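The `CleanupGEP` rewrite above turns a load at a variable index into a two-element array into two fixed loads plus a select, which sroa+mem2reg can then promote. In Python terms (names are illustrative):

```python
def load_variable_index(array, i):
    return array[i]                # before: one load at a non-constant index

def load_via_select(array, i):
    v0, v1 = array[0], array[1]    # after: both elements loaded unconditionally
    return v1 if i == 1 else v0    # ...and a select picks the result
```

Both forms agree for every in-range index, i.e. i in {0, 1}.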
 
+
 /// CleanupAllocaUsers - If SROA reported that it can promote the specified
 /// allocation, but only if cleaned up, perform the cleanups required.
-void SROA::CleanupAllocaUsers(Value *V) {
+void SROA::CleanupAllocaUsers(AllocaInst *AI) {
   // At this point, we know that the end result will be SROA'd and promoted, so
   // we can insert ugly code if required so long as sroa+mem2reg will clean it
   // up.
-  for (Value::use_iterator UI = V->use_begin(), E = V->use_end();
+  for (Value::use_iterator UI = AI->use_begin(), E = AI->use_end();
        UI != E; ) {
     User *U = *UI++;
-    if (isa<BitCastInst>(U)) {
-      CleanupAllocaUsers(U);
-    } else if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(U)) {
+    if (GetElementPtrInst *GEPI = dyn_cast<GetElementPtrInst>(U))
       CleanupGEP(GEPI);
-      CleanupAllocaUsers(GEPI);
-      if (GEPI->use_empty()) GEPI->eraseFromParent();
-    } else {
+    else {
       Instruction *I = cast<Instruction>(U);
       SmallVector<DbgInfoIntrinsic *, 2> DbgInUses;
       if (!isa<StoreInst>(I) && OnlyUsedByDbgInfoIntrinsics(I, &DbgInUses)) {
@@ -1371,7 +1395,7 @@ bool SROA::CanConvertToScalar(Value *V, bool &IsNotTrivial, const Type *&VecTy,
       
       // Compute the offset that this GEP adds to the pointer.
       SmallVector<Value*, 8> Indices(GEP->op_begin()+1, GEP->op_end());
-      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getPointerOperandType(),
+      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getOperand(0)->getType(),
                                                 &Indices[0], Indices.size());
       // See if all uses can be converted.
       if (!CanConvertToScalar(GEP, IsNotTrivial, VecTy, SawVec,Offset+GEPOffset,
@@ -1433,7 +1457,7 @@ void SROA::ConvertUsesToScalar(Value *Ptr, AllocaInst *NewAI, uint64_t Offset) {
     if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(User)) {
       // Compute the offset that this GEP adds to the pointer.
       SmallVector<Value*, 8> Indices(GEP->op_begin()+1, GEP->op_end());
-      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getPointerOperandType(),
+      uint64_t GEPOffset = TD->getIndexedOffset(GEP->getOperand(0)->getType(),
                                                 &Indices[0], Indices.size());
       ConvertUsesToScalar(GEP, NewAI, Offset+GEPOffset*8);
       GEP->eraseFromParent();
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
index 0d03e55..6fd884b 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
@@ -2644,10 +2644,11 @@ bool SimplifyLibCalls::doInitialization(Module &M) {
 //   * strcspn("",a) -> 0
 //   * strcspn(s,"") -> strlen(a)
 //
-// strstr:
+// strstr: (PR5783)
 //   * strstr(x,x)  -> x
-//   * strstr(s1,s2) -> offset_of_s2_in(s1)
-//       (if s1 and s2 are constant strings)
+//   * strstr(x, "") -> x
+//   * strstr(x, "a") -> strchr(x, 'a')
+//   * strstr(s1,s2) -> result   (if s1 and s2 are constant strings)
 //
 // tan, tanf, tanl:
 //   * tan(atan(x)) -> x
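The strstr rewrites listed above (PR5783) can be sanity-checked with Python's `str.find` standing in for a strstr that returns an offset:

```python
def strstr_offset(haystack, needle):
    return haystack.find(needle)   # -1 when absent, like a NULL strstr result

s = "hello world"
assert strstr_offset(s, s) == 0               # strstr(x, x)  -> x
assert strstr_offset(s, "") == 0              # strstr(x, "") -> x
assert strstr_offset(s, "w") == s.index("w")  # strstr(x, "a") -> strchr(x, 'a')
```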
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp
index e25f9e2..846e432 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp
@@ -55,7 +55,6 @@ struct DenseMapInfo<std::pair<BasicBlock*, unsigned> > {
   static bool isEqual(const EltTy &LHS, const EltTy &RHS) {
     return LHS == RHS;
   }
-  static bool isPod() { return true; }
 };
 }
 
@@ -102,7 +101,7 @@ namespace {
   public:
     typedef std::vector<Value *> ValVector;
     
-    RenamePassData() {}
+    RenamePassData() : BB(NULL), Pred(NULL), Values() {}
     RenamePassData(BasicBlock *B, BasicBlock *P,
                    const ValVector &V) : BB(B), Pred(P), Values(V) {}
     BasicBlock *BB;
diff --git a/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h b/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
index 1c3244b..8a2378e 100644
--- a/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
+++ b/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
@@ -62,7 +62,6 @@ struct DenseMapAPIntKeyInfo {
   static bool isEqual(const KeyTy &LHS, const KeyTy &RHS) {
     return LHS == RHS;
   }
-  static bool isPod() { return false; }
 };
 
 struct DenseMapAPFloatKeyInfo {
@@ -89,7 +88,6 @@ struct DenseMapAPFloatKeyInfo {
   static bool isEqual(const KeyTy &LHS, const KeyTy &RHS) {
     return LHS == RHS;
   }
-  static bool isPod() { return false; }
 };
 
 class LLVMContextImpl {
diff --git a/libclamav/c++/llvm/lib/VMCore/Pass.cpp b/libclamav/c++/llvm/lib/VMCore/Pass.cpp
index 1232fe2..6bea7a8 100644
--- a/libclamav/c++/llvm/lib/VMCore/Pass.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Pass.cpp
@@ -41,6 +41,10 @@ Pass::~Pass() {
 // Force out-of-line virtual method.
 ModulePass::~ModulePass() { }
 
+PassManagerType ModulePass::getPotentialPassManagerType() const {
+  return PMT_ModulePassManager;
+}
+
 bool Pass::mustPreserveAnalysisID(const PassInfo *AnalysisID) const {
   return Resolver->getAnalysisIfAvailable(AnalysisID, true) != 0;
 }
@@ -60,6 +64,27 @@ const char *Pass::getPassName() const {
   return "Unnamed pass: implement Pass::getPassName()";
 }
 
+void Pass::preparePassManager(PMStack &) {
+  // By default, don't do anything.
+}
+
+PassManagerType Pass::getPotentialPassManagerType() const {
+  // Default implementation.
+  return PMT_Unknown; 
+}
+
+void Pass::getAnalysisUsage(AnalysisUsage &) const {
+  // By default, no analysis results are used, all are invalidated.
+}
+
+void Pass::releaseMemory() {
+  // By default, don't do anything.
+}
+
+void Pass::verifyAnalysis() const {
+  // By default, don't do anything.
+}
+
 // print - Print out the internal state of the pass.  This is called by Analyze
 // to print out the contents of an analysis.  Otherwise it is not necessary to
 // implement this method.
@@ -79,6 +104,10 @@ void Pass::dump() const {
 // Force out-of-line virtual method.
 ImmutablePass::~ImmutablePass() { }
 
+void ImmutablePass::initializePass() {
+  // By default, don't do anything.
+}
+
 //===----------------------------------------------------------------------===//
 // FunctionPass Implementation
 //
@@ -107,6 +136,20 @@ bool FunctionPass::run(Function &F) {
   return Changed | doFinalization(*F.getParent());
 }
 
+bool FunctionPass::doInitialization(Module &) {
+  // By default, don't do anything.
+  return false;
+}
+
+bool FunctionPass::doFinalization(Module &) {
+  // By default, don't do anything.
+  return false;
+}
+
+PassManagerType FunctionPass::getPotentialPassManagerType() const {
+  return PMT_FunctionPassManager;
+}
+
 //===----------------------------------------------------------------------===//
 // BasicBlockPass Implementation
 //
@@ -121,6 +164,30 @@ bool BasicBlockPass::runOnFunction(Function &F) {
   return Changed | doFinalization(F);
 }
 
+bool BasicBlockPass::doInitialization(Module &) {
+  // By default, don't do anything.
+  return false;
+}
+
+bool BasicBlockPass::doInitialization(Function &) {
+  // By default, don't do anything.
+  return false;
+}
+
+bool BasicBlockPass::doFinalization(Function &) {
+  // By default, don't do anything.
+  return false;
+}
+
+bool BasicBlockPass::doFinalization(Module &) {
+  // By default, don't do anything.
+  return false;
+}
+
+PassManagerType BasicBlockPass::getPotentialPassManagerType() const {
+  return PMT_BasicBlockPassManager; 
+}
+
 //===----------------------------------------------------------------------===//
 // Pass Registration mechanism
 //
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/3addr-16bit.ll b/libclamav/c++/llvm/test/CodeGen/X86/3addr-16bit.ll
index bf1e0ea..c51247a 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/3addr-16bit.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/3addr-16bit.ll
@@ -1,5 +1,7 @@
-; RUN: llc < %s -mtriple=i386-apple-darwin -asm-verbose=false   | FileCheck %s -check-prefix=32BIT
 ; RUN: llc < %s -mtriple=x86_64-apple-darwin -asm-verbose=false | FileCheck %s -check-prefix=64BIT
+; rdar://7329206
+
+; In 32-bit the partial register stall would degrade performance.
 
 define zeroext i16 @t1(i16 zeroext %c, i16 zeroext %k) nounwind ssp {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/bigstructret.ll b/libclamav/c++/llvm/test/CodeGen/X86/fastcc3struct.ll
similarity index 50%
copy from libclamav/c++/llvm/test/CodeGen/X86/bigstructret.ll
copy to libclamav/c++/llvm/test/CodeGen/X86/fastcc3struct.ll
index 633995d..84f8ef6 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/bigstructret.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/fastcc3struct.ll
@@ -1,17 +1,15 @@
 ; RUN: llc < %s -march=x86 -o %t
-; RUN: grep "movl	.24601, 12(%ecx)" %t
-; RUN: grep "movl	.48, 8(%ecx)" %t
-; RUN: grep "movl	.24, 4(%ecx)" %t
-; RUN: grep "movl	.12, (%ecx)" %t
+; RUN: grep "movl	.48, %ecx" %t
+; RUN: grep "movl	.24, %edx" %t
+; RUN: grep "movl	.12, %eax" %t
 
-%0 = type { i32, i32, i32, i32 }
+%0 = type { i32, i32, i32 }
 
 define internal fastcc %0 @ReturnBigStruct() nounwind readnone {
 entry:
   %0 = insertvalue %0 zeroinitializer, i32 12, 0
   %1 = insertvalue %0 %0, i32 24, 1
   %2 = insertvalue %0 %1, i32 48, 2
-  %3 = insertvalue %0 %2, i32 24601, 3
-  ret %0 %3
+  ret %0 %2
 }
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/setcc.ll b/libclamav/c++/llvm/test/CodeGen/X86/setcc.ll
new file mode 100644
index 0000000..42ce4c1
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/setcc.ll
@@ -0,0 +1,37 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin | FileCheck %s
+; XFAIL: *
+; rdar://7329206
+
+; Use sbb x, x to materialize carry bit in a GPR. The value is either
+; all 1's or all 0's.
+
+define zeroext i16 @t1(i16 zeroext %x) nounwind readnone ssp {
+entry:
+; CHECK: t1:
+; CHECK: seta %al
+; CHECK: movzbl %al, %eax
+; CHECK: shll $5, %eax
+  %0 = icmp ugt i16 %x, 26                        ; <i1> [#uses=1]
+  %iftmp.1.0 = select i1 %0, i16 32, i16 0        ; <i16> [#uses=1]
+  ret i16 %iftmp.1.0
+}
+
+define zeroext i16 @t2(i16 zeroext %x) nounwind readnone ssp {
+entry:
+; CHECK: t2:
+; CHECK: sbbl %eax, %eax
+; CHECK: andl $32, %eax
+  %0 = icmp ult i16 %x, 26                        ; <i1> [#uses=1]
+  %iftmp.0.0 = select i1 %0, i16 32, i16 0        ; <i16> [#uses=1]
+  ret i16 %iftmp.0.0
+}
+
+define i64 @t3(i64 %x) nounwind readnone ssp {
+entry:
+; CHECK: t3:
+; CHECK: sbbq %rax, %rax
+; CHECK: andq $64, %rax
+  %0 = icmp ult i64 %x, 18                        ; <i1> [#uses=1]
+  %iftmp.2.0 = select i1 %0, i64 64, i64 0        ; <i64> [#uses=1]
+  ret i64 %iftmp.2.0
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/vec-trunc-store.ll b/libclamav/c++/llvm/test/CodeGen/X86/vec-trunc-store.ll
new file mode 100644
index 0000000..ea1a151
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/vec-trunc-store.ll
@@ -0,0 +1,13 @@
+; RUN: llc < %s -march=x86-64 -disable-mmx | grep punpcklwd | count 2
+
+define void @foo() nounwind {
+  %cti69 = trunc <8 x i32> undef to <8 x i16>     ; <<8 x i16>> [#uses=1]
+  store <8 x i16> %cti69, <8 x i16>* undef
+  ret void
+}
+
+define void @bar() nounwind {
+  %cti44 = trunc <4 x i32> undef to <4 x i16>     ; <<4 x i16>> [#uses=1]
+  store <4 x i16> %cti44, <4 x i16>* undef
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/zext-shl.ll b/libclamav/c++/llvm/test/CodeGen/X86/zext-shl.ll
new file mode 100644
index 0000000..928848e
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/zext-shl.ll
@@ -0,0 +1,25 @@
+; RUN: llc < %s -march=x86 | FileCheck %s
+
+define i32 @t1(i8 zeroext %x) nounwind readnone ssp {
+entry:
+; CHECK: t1:
+; CHECK: shll
+; CHECK-NOT: movzwl
+; CHECK: ret
+  %0 = zext i8 %x to i16
+  %1 = shl i16 %0, 5
+  %2 = zext i16 %1 to i32
+  ret i32 %2
+}
+
+define i32 @t2(i8 zeroext %x) nounwind readnone ssp {
+entry:
+; CHECK: t2:
+; CHECK: shrl
+; CHECK-NOT: movzwl
+; CHECK: ret
+  %0 = zext i8 %x to i16
+  %1 = lshr i16 %0, 3
+  %2 = zext i16 %1 to i32
+  ret i32 %2
+}
diff --git a/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst b/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
index 4cf2a5a..4d80a2a 100644
--- a/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
+++ b/libclamav/c++/llvm/tools/llvmc/doc/LLVMC-Reference.rst
@@ -336,8 +336,8 @@ separate option groups syntactically.
      it is synonymous with ``required``. Incompatible with ``required`` and
      ``zero_or_one``.
 
-   - ``zero_or_one`` - the option can be specified zero or one times. Useful
-     only for list options in conjunction with ``multi_val``. Incompatible with
+   - ``optional`` - the option can be specified zero or one times. Useful only
+     for list options in conjunction with ``multi_val``. Incompatible with
      ``required`` and ``one_or_more``.
 
    - ``hidden`` - the description of this option will not appear in
@@ -356,14 +356,15 @@ separate option groups syntactically.
    - ``multi_val n`` - this option takes *n* arguments (can be useful in some
      special cases). Usage example: ``(parameter_list_option "foo", (multi_val
      3))``; the command-line syntax is '-foo a b c'. Only list options can have
-     this attribute; you can, however, use the ``one_or_more``, ``zero_or_one``
+     this attribute; you can, however, use the ``one_or_more``, ``optional``
      and ``required`` properties.
 
    - ``init`` - this option has a default value, either a string (if it is a
-     parameter), or a boolean (if it is a switch; boolean constants are called
-     ``true`` and ``false``). List options can't have this attribute. Usage
-     examples: ``(switch_option "foo", (init true))``; ``(prefix_option "bar",
-     (init "baz"))``.
+     parameter), or a boolean (if it is a switch; as in C++, boolean constants
+     are called ``true`` and ``false``). List options can't have ``init``
+     attribute.
+     Usage examples: ``(switch_option "foo", (init true))``; ``(prefix_option
+     "bar", (init "baz"))``.
 
    - ``extern`` - this option is defined in some other plugin, see `below`__.
 
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
index 12c6b67..bbf3460 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
@@ -534,6 +534,31 @@ TEST_F(JITTest, FunctionPointersOutliveTheirCreator) {
 #endif
 }
 
+}  // anonymous namespace
+// This variable is intentionally defined differently in the statically-compiled
+// program from the IR input to the JIT to assert that the JIT doesn't use its
+// definition.
+extern "C" int32_t JITTest_AvailableExternallyGlobal;
+int32_t JITTest_AvailableExternallyGlobal = 42;
+namespace {
+
+TEST_F(JITTest, AvailableExternallyGlobalIsntEmitted) {
+  TheJIT->DisableLazyCompilation(true);
+  LoadAssembly("@JITTest_AvailableExternallyGlobal = "
+               "  available_externally global i32 7 "
+               " "
+               "define i32 @loader() { "
+               "  %result = load i32* @JITTest_AvailableExternallyGlobal "
+               "  ret i32 %result "
+               "} ");
+  Function *loaderIR = M->getFunction("loader");
+
+  int32_t (*loader)() = reinterpret_cast<int32_t(*)()>(
+    (intptr_t)TheJIT->getPointerToFunction(loaderIR));
+  EXPECT_EQ(42, loader()) << "func should return 42 from the external global,"
+                          << " not 7 from the IR version.";
+}
+
 // This code is copied from JITEventListenerTest, but it only runs once for all
 // the tests in this directory.  Everything seems fine, but that's strange
 // behavior.
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile
index 048924a..8de390b 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/Makefile
@@ -13,3 +13,6 @@ LINK_COMPONENTS := asmparser core support jit native
 
 include $(LEVEL)/Makefile.config
 include $(LLVM_SRC_ROOT)/unittests/Makefile.unittest
+
+# Permit these tests to use the JIT's symbolic lookup.
+LD.Flags += $(RDYNAMIC)
diff --git a/libclamav/c++/llvm/utils/NewNightlyTest.pl b/libclamav/c++/llvm/utils/NewNightlyTest.pl
index a8cf8de..a306382 100755
--- a/libclamav/c++/llvm/utils/NewNightlyTest.pl
+++ b/libclamav/c++/llvm/utils/NewNightlyTest.pl
@@ -317,9 +317,9 @@ sub RunLoggedCommand {
   } else {
       if ($VERBOSE) {
           print "$Title\n";
-          print "$Command 2>&1 > $Log\n";
+          print "$Command > $Log 2>&1\n";
       }
-      system "$Command 2>&1 > $Log";
+      system "$Command > $Log 2>&1";
   }
 }
 
@@ -336,9 +336,9 @@ sub RunAppendingLoggedCommand {
   } else {
       if ($VERBOSE) {
           print "$Title\n";
-          print "$Command 2>&1 > $Log\n";
+          print "$Command >> $Log 2>&1\n";
       }
-      system "$Command 2>&1 >> $Log";
+      system "$Command >> $Log 2>&1";
   }
 }
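The two hunks above fix a classic redirection-ordering bug: shell redirections apply left to right, so in `$Command 2>&1 > $Log` stderr is duplicated onto the *current* stdout (the terminal) before stdout is pointed at the log, and errors never reach the log file. A small demonstration (`demo`, `old.log`, and `new.log` are made up for the sketch):

```shell
#!/bin/sh
demo() { echo out; echo err >&2; }

demo 2>&1 > old.log   # buggy order: old.log gets only "out"; "err" escapes
demo > new.log 2>&1   # fixed order: new.log captures both streams

cat old.log
cat new.log
```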
 
@@ -393,10 +393,8 @@ sub CopyFile { #filename, newfile
 # to our central server via the post method
 #
 #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-sub SendData {
-    $host = $_[0];
-    $file = $_[1];
-    $variables = $_[2];
+sub WriteSentData {
+    $variables = $_[0];
 
     # Write out the "...-sentdata.txt" file.
 
@@ -406,6 +404,12 @@ sub SendData {
         $sentdata.= "$x  => $value\n";
     }
     WriteFile "$Prefix-sentdata.txt", $sentdata;
+}
+
+sub SendData {
+    $host = $_[0];
+    $file = $_[1];
+    $variables = $_[2];
 
     if (!($SUBMITAUX eq "")) {
         system "$SUBMITAUX \"$Prefix-sentdata.txt\"";
@@ -503,8 +507,8 @@ sub BuildLLVM {
   }
   RunAppendingLoggedCommand("(time -p $NICE $MAKECMD $MAKEOPTS)", $BuildLog, "BUILD");
 
-  if (`grep '^$MAKECMD\[^:]*: .*Error' $BuildLog | wc -l` + 0 ||
-      `grep '^$MAKECMD: \*\*\*.*Stop.' $BuildLog | wc -l` + 0) {
+  if (`grep -a '^$MAKECMD\[^:]*: .*Error' $BuildLog | wc -l` + 0 ||
+      `grep -a '^$MAKECMD: \*\*\*.*Stop.' $BuildLog | wc -l` + 0) {
     return 0;
   }
 
@@ -531,15 +535,15 @@ sub TestDirectory {
   $LLCBetaOpts = `$MAKECMD print-llcbeta-option`;
 
   my $ProgramsTable;
-  if (`grep '^$MAKECMD\[^:]: .*Error' $ProgramTestLog | wc -l` + 0) {
+  if (`grep -a '^$MAKECMD\[^:]: .*Error' $ProgramTestLog | wc -l` + 0) {
     $ProgramsTable="Error running test $SubDir\n";
     print "ERROR TESTING\n";
-  } elsif (`grep '^$MAKECMD\[^:]: .*No rule to make target' $ProgramTestLog | wc -l` + 0) {
+  } elsif (`grep -a '^$MAKECMD\[^:]: .*No rule to make target' $ProgramTestLog | wc -l` + 0) {
     $ProgramsTable="Makefile error running tests $SubDir!\n";
     print "ERROR TESTING\n";
   } else {
     # Create a list of the tests which were run...
-    system "egrep 'TEST-(PASS|FAIL)' < $ProgramTestLog ".
+    system "egrep -a 'TEST-(PASS|FAIL)' < $ProgramTestLog ".
            "| sort > $Prefix-$SubDir-Tests.txt";
   }
   $ProgramsTable = ReadFile "report.nightly.csv";
@@ -797,6 +801,9 @@ my %hash_of_data = (
   'a_file_sizes' => ""
 );
 
+# Write out the "...-sentdata.txt" file.
+WriteSentData \%hash_of_data;
+
 if ($SUBMIT || !($SUBMITAUX eq "")) {
   my $response = SendData $SUBMITSERVER,$SUBMITSCRIPT,\%hash_of_data;
   if( $VERBOSE) { print "============================\n$response"; }
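The `grep -a` additions above guard the log-scraping against binary data: GNU grep treats a file containing NUL bytes as binary and collapses all matching lines into a single "Binary file ... matches" notice, which skews the `grep ... | wc -l` error counting when a build log captures raw bytes. A sketch of the failure mode, assuming GNU grep's binary-file heuristic (`build.log` is a stand-in for a real log):

```shell
#!/bin/sh
# Two real error lines plus a stray NUL byte, as a corrupted log might hold.
printf 'make: *** Error 1\nmake: *** Error 2\n\0garbage\n' > build.log

grep 'Error' build.log | wc -l      # may report only the binary-file notice
grep -a 'Error' build.log | wc -l   # -a forces text mode: counts both matches
```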
diff --git a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
index 613ae03..5be9ab7 100644
--- a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
@@ -15,8 +15,6 @@
 #include "Record.h"
 
 #include "llvm/ADT/IntrusiveRefCntPtr.h"
-#include "llvm/ADT/SmallVector.h"
-#include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/StringMap.h"
 #include "llvm/ADT/StringSet.h"
 #include <algorithm>
@@ -211,7 +209,7 @@ OptionType::OptionType stringToOptionType(const std::string& T) {
 namespace OptionDescriptionFlags {
   enum OptionDescriptionFlags { Required = 0x1, Hidden = 0x2,
                                 ReallyHidden = 0x4, Extern = 0x8,
-                                OneOrMore = 0x10, ZeroOrOne = 0x20,
+                                OneOrMore = 0x10, Optional = 0x20,
                                 CommaSeparated = 0x40 };
 }
 
@@ -260,8 +258,8 @@ struct OptionDescription {
   bool isOneOrMore() const;
   void setOneOrMore();
 
-  bool isZeroOrOne() const;
-  void setZeroOrOne();
+  bool isOptional() const;
+  void setOptional();
 
   bool isHidden() const;
   void setHidden();
@@ -331,11 +329,11 @@ void OptionDescription::setOneOrMore() {
   Flags |= OptionDescriptionFlags::OneOrMore;
 }
 
-bool OptionDescription::isZeroOrOne() const {
-  return Flags & OptionDescriptionFlags::ZeroOrOne;
+bool OptionDescription::isOptional() const {
+  return Flags & OptionDescriptionFlags::Optional;
 }
-void OptionDescription::setZeroOrOne() {
-  Flags |= OptionDescriptionFlags::ZeroOrOne;
+void OptionDescription::setOptional() {
+  Flags |= OptionDescriptionFlags::Optional;
 }
 
 bool OptionDescription::isHidden() const {
@@ -548,7 +546,7 @@ public:
       AddHandler("one_or_more", &CollectOptionProperties::onOneOrMore);
       AddHandler("really_hidden", &CollectOptionProperties::onReallyHidden);
       AddHandler("required", &CollectOptionProperties::onRequired);
-      AddHandler("zero_or_one", &CollectOptionProperties::onZeroOrOne);
+      AddHandler("optional", &CollectOptionProperties::onOptional);
       AddHandler("comma_separated", &CollectOptionProperties::onCommaSeparated);
 
       staticMembersInitialized_ = true;
@@ -595,8 +593,8 @@ private:
 
   void onRequired (const DagInit* d) {
     checkNumberOfArguments(d, 0);
-    if (optDesc_.isOneOrMore() || optDesc_.isZeroOrOne())
-      throw "Only one of (required), (zero_or_one) or "
+    if (optDesc_.isOneOrMore() || optDesc_.isOptional())
+      throw "Only one of (required), (optional) or "
         "(one_or_more) properties is allowed!";
     optDesc_.setRequired();
   }
@@ -617,8 +615,8 @@ private:
 
   void onOneOrMore (const DagInit* d) {
     checkNumberOfArguments(d, 0);
-    if (optDesc_.isRequired() || optDesc_.isZeroOrOne())
-      throw "Only one of (required), (zero_or_one) or "
+    if (optDesc_.isRequired() || optDesc_.isOptional())
+      throw "Only one of (required), (optional) or "
         "(one_or_more) properties is allowed!";
     if (!OptionType::IsList(optDesc_.Type))
       llvm::errs() << "Warning: specifying the 'one_or_more' property "
@@ -626,15 +624,15 @@ private:
     optDesc_.setOneOrMore();
   }
 
-  void onZeroOrOne (const DagInit* d) {
+  void onOptional (const DagInit* d) {
     checkNumberOfArguments(d, 0);
     if (optDesc_.isRequired() || optDesc_.isOneOrMore())
-      throw "Only one of (required), (zero_or_one) or "
+      throw "Only one of (required), (optional) or "
         "(one_or_more) properties is allowed!";
     if (!OptionType::IsList(optDesc_.Type))
-      llvm::errs() << "Warning: specifying the 'zero_or_one' property"
+      llvm::errs() << "Warning: specifying the 'optional' property"
         "on a non-list option will have no effect.\n";
-    optDesc_.setZeroOrOne();
+    optDesc_.setOptional();
   }
 
   void onMultiVal (const DagInit* d) {
@@ -1454,9 +1452,9 @@ void EmitCaseConstructHandler(const Init* Case, unsigned IndentLevel,
            EmitCaseStatementCallback<F>(Callback, O), IndentLevel);
 }
 
-/// TokenizeCmdline - converts from "$CALL(HookName, 'Arg1', 'Arg2')/path" to
-/// ["$CALL(", "HookName", "Arg1", "Arg2", ")/path"] .
-/// Helper function used by EmitCmdLineVecFill and.
+/// TokenizeCmdline - converts from
+/// "$CALL(HookName, 'Arg1', 'Arg2')/path -arg1 -arg2" to
+/// ["$CALL(", "HookName", "Arg1", "Arg2", ")/path", "-arg1", "-arg2"].
 void TokenizeCmdline(const std::string& CmdLine, StrVector& Out) {
   const char* Delimiters = " \t\n\v\f\r";
   enum TokenizerState
@@ -1537,62 +1535,99 @@ void TokenizeCmdline(const std::string& CmdLine, StrVector& Out) {
   }
 }
 
-/// SubstituteSpecialCommands - Perform string substitution for $CALL
-/// and $ENV. Helper function used by EmitCmdLineVecFill().
-StrVector::const_iterator SubstituteSpecialCommands
-(StrVector::const_iterator Pos, StrVector::const_iterator End, raw_ostream& O)
+/// SubstituteCall - Given "$CALL(HookName, [Arg1 [, Arg2 [...]]])", output
+/// "hooks::HookName([Arg1 [, Arg2 [, ...]]])". Helper function used by
+/// SubstituteSpecialCommands().
+StrVector::const_iterator
+SubstituteCall (StrVector::const_iterator Pos,
+                StrVector::const_iterator End,
+                bool IsJoin, raw_ostream& O)
 {
+  const char* errorMessage = "Syntax error in $CALL invocation!";
+  checkedIncrement(Pos, End, errorMessage);
+  const std::string& CmdName = *Pos;
 
-  const std::string& cmd = *Pos;
-
-  if (cmd == "$CALL") {
-    checkedIncrement(Pos, End, "Syntax error in $CALL invocation!");
-    const std::string& CmdName = *Pos;
+  if (CmdName == ")")
+    throw "$CALL invocation: empty argument list!";
 
-    if (CmdName == ")")
-      throw "$CALL invocation: empty argument list!";
+  O << "hooks::";
+  O << CmdName << "(";
 
-    O << "hooks::";
-    O << CmdName << "(";
 
+  bool firstIteration = true;
+  while (true) {
+    checkedIncrement(Pos, End, errorMessage);
+    const std::string& Arg = *Pos;
+    assert(Arg.size() != 0);
 
-    bool firstIteration = true;
-    while (true) {
-      checkedIncrement(Pos, End, "Syntax error in $CALL invocation!");
-      const std::string& Arg = *Pos;
-      assert(Arg.size() != 0);
+    if (Arg[0] == ')')
+      break;
 
-      if (Arg[0] == ')')
-        break;
+    if (firstIteration)
+      firstIteration = false;
+    else
+      O << ", ";
 
-      if (firstIteration)
-        firstIteration = false;
+    if (Arg == "$INFILE") {
+      if (IsJoin)
+        throw "$CALL(Hook, $INFILE) can't be used with a Join tool!";
       else
-        O << ", ";
-
+        O << "inFile.c_str()";
+    }
+    else {
       O << '"' << Arg << '"';
     }
+  }
 
-    O << ')';
+  O << ')';
 
-  }
-  else if (cmd == "$ENV") {
-    checkedIncrement(Pos, End, "Syntax error in $ENV invocation!");
-    const std::string& EnvName = *Pos;
+  return Pos;
+}
+
+/// SubstituteEnv - Given '$ENV(VAR_NAME)', output 'getenv("VAR_NAME")'. Helper
+/// function used by SubstituteSpecialCommands().
+StrVector::const_iterator
+SubstituteEnv (StrVector::const_iterator Pos,
+               StrVector::const_iterator End, raw_ostream& O)
+{
+  const char* errorMessage = "Syntax error in $ENV invocation!";
+  checkedIncrement(Pos, End, errorMessage);
+  const std::string& EnvName = *Pos;
+
+  if (EnvName == ")")
+    throw "$ENV invocation: empty argument list!";
+
+  O << "checkCString(std::getenv(\"";
+  O << EnvName;
+  O << "\"))";
+
+  checkedIncrement(Pos, End, errorMessage);
+
+  return Pos;
+}
 
-    if (EnvName == ")")
-      throw "$ENV invocation: empty argument list!";
+/// SubstituteSpecialCommands - Given an invocation of $CALL or $ENV, output
+/// handler code. Helper function used by EmitCmdLineVecFill().
+StrVector::const_iterator
+SubstituteSpecialCommands (StrVector::const_iterator Pos,
+                           StrVector::const_iterator End,
+                           bool IsJoin, raw_ostream& O)
+{
 
-    O << "checkCString(std::getenv(\"";
-    O << EnvName;
-    O << "\"))";
+  const std::string& cmd = *Pos;
 
-    checkedIncrement(Pos, End, "Syntax error in $ENV invocation!");
+  // Perform substitution.
+  if (cmd == "$CALL") {
+    Pos = SubstituteCall(Pos, End, IsJoin, O);
+  }
+  else if (cmd == "$ENV") {
+    Pos = SubstituteEnv(Pos, End, O);
   }
   else {
     throw "Unknown special command: " + cmd;
   }
 
+  // Handle '$CMD(ARG)/additional/text'.
   const std::string& Leftover = *Pos;
   assert(Leftover.at(0) == ')');
   if (Leftover.size() != 1)
@@ -1652,7 +1687,7 @@ void EmitCmdLineVecFill(const Init* CmdLine, const std::string& ToolName,
       }
       else {
         O << "vec.push_back(";
-        I = SubstituteSpecialCommands(I, E, O);
+        I = SubstituteSpecialCommands(I, E, IsJoin, O);
         O << ");\n";
       }
     }
@@ -1665,7 +1700,7 @@ void EmitCmdLineVecFill(const Init* CmdLine, const std::string& ToolName,
 
   O.indent(IndentLevel) << "cmd = ";
   if (StrVec[0][0] == '$')
-    SubstituteSpecialCommands(StrVec.begin(), StrVec.end(), O);
+    SubstituteSpecialCommands(StrVec.begin(), StrVec.end(), IsJoin, O);
   else
     O << '"' << StrVec[0] << '"';
   O << ";\n";
@@ -1786,17 +1821,36 @@ class EmitActionHandlersCallback
   const OptionDescriptions& OptDescs;
   typedef EmitActionHandlersCallbackHandler Handler;
 
-  void onAppendCmd (const DagInit& Dag,
-                    unsigned IndentLevel, raw_ostream& O) const
+  /// EmitHookInvocation - Common code for hook invocation from actions. Used by
+  /// onAppendCmd and onOutputSuffix.
+  void EmitHookInvocation(const std::string& Str,
+                          const char* BlockOpen, const char* BlockClose,
+                          unsigned IndentLevel, raw_ostream& O) const
   {
-    checkNumberOfArguments(&Dag, 1);
-    const std::string& Cmd = InitPtrToString(Dag.getArg(0));
     StrVector Out;
-    llvm::SplitString(Cmd, Out);
+    TokenizeCmdline(Str, Out);
 
     for (StrVector::const_iterator B = Out.begin(), E = Out.end();
-         B != E; ++B)
-      O.indent(IndentLevel) << "vec.push_back(\"" << *B << "\");\n";
+         B != E; ++B) {
+      const std::string& cmd = *B;
+
+      O.indent(IndentLevel) << BlockOpen;
+
+      if (cmd.at(0) == '$')
+        B = SubstituteSpecialCommands(B, E,  /* IsJoin = */ true, O);
+      else
+        O << '"' << cmd << '"';
+
+      O << BlockClose;
+    }
+  }
+
+  void onAppendCmd (const DagInit& Dag,
+                    unsigned IndentLevel, raw_ostream& O) const
+  {
+    checkNumberOfArguments(&Dag, 1);
+    this->EmitHookInvocation(InitPtrToString(Dag.getArg(0)),
+                             "vec.push_back(", ");\n", IndentLevel, O);
   }
 
   void onForward (const DagInit& Dag,
@@ -1845,16 +1899,16 @@ class EmitActionHandlersCallback
     const OptionDescription& D = OptDescs.FindListOrParameter(Name);
 
     O.indent(IndentLevel) << "vec.push_back(" << "hooks::"
-                          << Hook << "(" << D.GenVariableName() << "));\n";
+                          << Hook << "(" << D.GenVariableName()
+                          << (D.isParameter() ? ".c_str()" : "") << "));\n";
   }
 
-
   void onOutputSuffix (const DagInit& Dag,
                        unsigned IndentLevel, raw_ostream& O) const
   {
     checkNumberOfArguments(&Dag, 1);
-    const std::string& OutSuf = InitPtrToString(Dag.getArg(0));
-    O.indent(IndentLevel) << "output_suffix = \"" << OutSuf << "\";\n";
+    this->EmitHookInvocation(InitPtrToString(Dag.getArg(0)),
+                             "output_suffix = ", ";\n", IndentLevel, O);
   }
 
   void onStopCompilation (const DagInit& Dag,
@@ -2115,7 +2169,7 @@ void EmitToolClassDefinition (const ToolDescription& D,
   else
     O << "Tool";
 
-  O << "{\nprivate:\n";
+  O << " {\nprivate:\n";
   O.indent(Indent1) << "static const char* InputLanguages_[];\n\n";
 
   O << "public:\n";
@@ -2174,8 +2228,8 @@ void EmitOptionDefinitions (const OptionDescriptions& descs,
     else if (val.isOneOrMore() && val.isList()) {
         O << ", cl::OneOrMore";
     }
-    else if (val.isZeroOrOne() && val.isList()) {
-        O << ", cl::ZeroOrOne";
+    else if (val.isOptional() && val.isList()) {
+        O << ", cl::Optional";
     }
 
     if (val.isReallyHidden())
@@ -2483,7 +2537,9 @@ public:
   {}
 
   void onAction (const DagInit& Dag) {
-    if (GetOperatorName(Dag) == "forward_transformed_value") {
+    const std::string& Name = GetOperatorName(Dag);
+
+    if (Name == "forward_transformed_value") {
       checkNumberOfArguments(Dag, 2);
       const std::string& OptName = InitPtrToString(Dag.getArg(0));
       const std::string& HookName = InitPtrToString(Dag.getArg(1));
@@ -2492,29 +2548,16 @@ public:
       HookNames_[HookName] = HookInfo(D.isList() ? HookInfo::ListHook
                                       : HookInfo::ArgHook);
     }
-  }
-
-  void operator()(const Init* Arg) {
-
-    // We're invoked on an action (either a dag or a dag list).
-    if (typeid(*Arg) == typeid(DagInit)) {
-      const DagInit& Dag = InitPtrToDag(Arg);
-      this->onAction(Dag);
-      return;
-    }
-    else if (typeid(*Arg) == typeid(ListInit)) {
-      const ListInit& List = InitPtrToList(Arg);
-      for (ListInit::const_iterator B = List.begin(), E = List.end(); B != E;
-           ++B) {
-        const DagInit& Dag = InitPtrToDag(*B);
-        this->onAction(Dag);
-      }
-      return;
+    else if (Name == "append_cmd" || Name == "output_suffix") {
+      checkNumberOfArguments(Dag, 1);
+      this->onCmdLine(InitPtrToString(Dag.getArg(0)));
     }
+  }
 
-    // We're invoked on a command line.
+  void onCmdLine(const std::string& Cmd) {
     StrVector cmds;
-    TokenizeCmdline(InitPtrToString(Arg), cmds);
+    TokenizeCmdline(Cmd, cmds);
+
     for (StrVector::const_iterator B = cmds.begin(), E = cmds.end();
          B != E; ++B) {
       const std::string& cmd = *B;
@@ -2524,7 +2567,6 @@ public:
         checkedIncrement(B, E, "Syntax error in $CALL invocation!");
         const std::string& HookName = *B;
 
-
         if (HookName.at(0) == ')')
           throw "$CALL invoked with no arguments!";
 
@@ -2540,9 +2582,30 @@ public:
             + HookName;
         else
           HookNames_[HookName] = HookInfo(NumArgs);
+      }
+    }
+  }
 
+  void operator()(const Init* Arg) {
+
+    // We're invoked on an action (either a dag or a dag list).
+    if (typeid(*Arg) == typeid(DagInit)) {
+      const DagInit& Dag = InitPtrToDag(Arg);
+      this->onAction(Dag);
+      return;
+    }
+    else if (typeid(*Arg) == typeid(ListInit)) {
+      const ListInit& List = InitPtrToList(Arg);
+      for (ListInit::const_iterator B = List.begin(), E = List.end(); B != E;
+           ++B) {
+        const DagInit& Dag = InitPtrToDag(*B);
+        this->onAction(Dag);
       }
+      return;
     }
+
+    // We're invoked on a command line.
+    this->onCmdLine(InitPtrToString(Arg));
   }
 
   void operator()(const DagInit* Test, unsigned, bool) {

-- 
Debian repository for ClamAV