[kernel] r22121 - in dists/squeeze-security/linux-2.6/debian: . patches/bugfix/x86 patches/series

Ben Hutchings benh at moszumanska.debian.org
Sun Dec 7 17:52:05 UTC 2014


Author: benh
Date: Sun Dec  7 17:52:05 2014
New Revision: 22121

Log:
Add fixes for kernel entry/exit security flaws (mostly x86-64)

Added:
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-32-espfix-Remove-filter-for-espfix32-due-to-race.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-Adjust-frame-type-at-paranoid_exit.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-bit-Move-K8-B-step-iret-fixup-to-fault-entry-.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-espfix-Don-t-leak-bits-31-16-of-esp-returning.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-modify_ldt-Ban-16-bit-segments-on-64-bit-kern.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Fix-broken-header-guard.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Make-espfix64-a-Kconfig-option-fix-UML.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Make-it-possible-to-disable-16-bit-suppor.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Move-espfix-definitions-into-a-separate-h.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-xen-Fix-allocation-of-pages-for-paravirt-.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-entry-xen-Do-not-invoke-espfix64-on-Xen.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-traps-Fix-the-espfix64-DF-fixup-and-rewrite-i.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-traps-Rework-bad_iret.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-traps-Stop-using-IST-for-SS.patch
Modified:
   dists/squeeze-security/linux-2.6/debian/changelog
   dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze9

Modified: dists/squeeze-security/linux-2.6/debian/changelog
==============================================================================
--- dists/squeeze-security/linux-2.6/debian/changelog	Sun Dec  7 03:58:14 2014	(r22120)
+++ dists/squeeze-security/linux-2.6/debian/changelog	Sun Dec  7 17:52:05 2014	(r22121)
@@ -312,6 +312,20 @@
   * sctp: Fix double-free introduced by bad backport in 2.6.32.62
   * net: sendmsg: Really fix NULL pointer dereference
   * md/raid6: Fix misapplied backport in 2.6.32.64
+  * [amd64] Move K8 B step iret fixup to fault entry asm
+  * [amd64] Adjust frame type at paranoid_exit:
+  * [amd64] modify_ldt: Ban 16-bit segments on 64-bit kernels
+  * [i386] espfix: Remove filter for espfix32 due to race
+  * [amd64] espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack
+  * [x86] espfix: Move espfix definitions into a separate header file
+  * [x86] espfix: Fix broken header guard
+  * [x86] espfix: Make espfix64 a Kconfig option, fix UML
+  * [x86] espfix: Make it possible to disable 16-bit support
+  * [amd64] entry/xen: Do not invoke espfix64 on Xen
+  * [x86] espfix/xen: Fix allocation of pages for paravirt page tables
+  * [amd64] traps: Stop using IST for #SS
+  * [amd64] traps: Fix the espfix64 #DF fixup and rewrite it in C
+  * [amd64] traps: Rework bad_iret
 
  -- Holger Levsen <holger at debian.org>  Sun, 30 Nov 2014 15:57:49 +0100
 

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-32-espfix-Remove-filter-for-espfix32-due-to-race.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-32-espfix-Remove-filter-for-espfix32-due-to-race.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,40 @@
+From 3dad839711808636ebc4be66d66dd744f3786eab Mon Sep 17 00:00:00 2001
+From: "H. Peter Anvin" <hpa at linux.intel.com>
+Date: Wed, 30 Apr 2014 14:03:25 -0700
+Subject: x86-32, espfix: Remove filter for espfix32 due to race
+
+commit 246f2d2ee1d715e1077fc47d61c394569c8ee692 upstream.
+
+It is not safe to use LAR to filter when to go down the espfix path,
+because the LDT is per-process (rather than per-thread) and another
+thread might change the descriptors behind our back.  Fortunately it
+is always *safe* (if a bit slow) to go down the espfix path, and a
+32-bit LDT stack segment is extremely rare.
+
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+(cherry picked from commit 6806fa8b6795aba9be8742a8f598f60eed26f875)
+---
+ arch/x86/kernel/entry_32.S | 5 -----
+ 1 file changed, 5 deletions(-)
+
+diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
+index 8b5370c..db7dbe7 100644
+--- a/arch/x86/kernel/entry_32.S
++++ b/arch/x86/kernel/entry_32.S
+@@ -571,11 +571,6 @@ ENTRY(iret_exc)
+ 
+ 	CFI_RESTORE_STATE
+ ldt_ss:
+-	larl PT_OLDSS(%esp), %eax
+-	jnz restore_nocheck
+-	testl $0x00400000, %eax		# returning to 32bit stack?
+-	jnz restore_nocheck		# allright, normal return
+-
+ #ifdef CONFIG_PARAVIRT
+ 	/*
+ 	 * The kernel can't run on a non-flat stack if paravirt mode
+-- 
+1.7.12.1
+
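
A user-space sketch of the race this patch closes (illustrative only, not
part of the patch; the slot number and descriptor parameters are arbitrary):
the LDT is shared by every thread of a process, so a second thread can
rewrite a descriptor between the moment it is checked - here, by the
kernel's LAR filter - and the moment IRET uses it.

#include <asm/ldt.h>		/* struct user_desc (Linux-specific) */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Install a 32-bit data segment with the given base in LDT slot 0;
 * every thread of the process sees the new descriptor at once. */
static long set_ldt_slot0(unsigned int base)
{
	struct user_desc d;

	memset(&d, 0, sizeof(d));
	d.entry_number = 0;
	d.base_addr = base;
	d.limit = 0xfffff;
	d.seg_32bit = 1;
	d.limit_in_pages = 1;
	return syscall(SYS_modify_ldt, 1, &d, sizeof(d));
}

static void *rewriter(void *unused)
{
	unsigned int base = 0;

	(void)unused;
	for (;;)			/* change the descriptor forever */
		set_ldt_slot0(base += 0x1000);
	return NULL;
}

int main(void)
{
	pthread_t t;

	if (set_ldt_slot0(0)) {
		perror("modify_ldt");
		return 1;
	}
	pthread_create(&t, NULL, rewriter, NULL);
	/* From here on, anything this thread (or the kernel, acting on
	 * its behalf) concludes from inspecting LDT slot 0 can be stale
	 * before the descriptor is used - the window the LAR filter
	 * could not close. */
	sleep(1);
	return 0;
}

Taking the espfix path unconditionally, as the patch does, removes the
check-to-use window entirely.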

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-Adjust-frame-type-at-paranoid_exit.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-Adjust-frame-type-at-paranoid_exit.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,33 @@
+From d0f5b090fb598446b61d786aae3a218385f76129 Mon Sep 17 00:00:00 2001
+From: Jan Beulich <JBeulich at novell.com>
+Date: Thu, 2 Sep 2010 13:54:32 +0100
+Subject: x86-64: Adjust frame type at paranoid_exit:
+
+As this isn't an exception or interrupt entry point, it doesn't
+have any of the hardware-provided frame layouts active.
+
+Signed-off-by: Jan Beulich <jbeulich at novell.com>
+Acked-by: Alexander van Heukelum <heukelum at fastmail.fm>
+LKML-Reference: <4C7FBAA80200007800013F67 at vpn.id2.novell.com>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+(cherry picked from commit 1f130a783a796f147b080c594488b566c86007d0)
+---
+ arch/x86/kernel/entry_64.S | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index 4f577eb..fb402d5 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -1400,7 +1400,7 @@ paranoidzeroentry machine_check *machine_check_vector(%rip)
+ 
+ 	/* ebx:	no swapgs flag */
+ ENTRY(paranoid_exit)
+-	INTR_FRAME
++	DEFAULT_FRAME
+ 	DISABLE_INTERRUPTS(CLBR_NONE)
+ 	TRACE_IRQS_OFF
+ 	testl %ebx,%ebx				/* swapgs needed? */
+-- 
+1.7.12.1
+

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-bit-Move-K8-B-step-iret-fixup-to-fault-entry-.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-bit-Move-K8-B-step-iret-fixup-to-fault-entry-.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,101 @@
+From 39ded7eddf8fbf0960b032536beccb3e0c920f98 Mon Sep 17 00:00:00 2001
+From: Brian Gerst <brgerst at gmail.com>
+Date: Mon, 12 Oct 2009 10:18:23 -0400
+Subject: x86, 64-bit: Move K8 B step iret fixup to fault entry asm
+
+Move the handling of truncated %rip from an iret fault to the fault
+entry path.
+
+This allows x86-64 to use the standard search_extable() function.
+
+Signed-off-by: Brian Gerst <brgerst at gmail.com>
+Cc: Linus Torvalds <torvalds at linux-foundation.org>
+Cc: Jan Beulich <jbeulich at novell.com>
+LKML-Reference: <1255357103-5418-1-git-send-email-brgerst at gmail.com>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+(cherry picked from commit ae24ffe5ecec17c956ac25371d7c2e12b4b36e53)
+---
+ arch/x86/include/asm/uaccess.h |  1 -
+ arch/x86/kernel/entry_64.S     | 11 ++++++++---
+ arch/x86/mm/extable.c          | 31 -------------------------------
+ 3 files changed, 8 insertions(+), 35 deletions(-)
+
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 61c5874..99f0ad7 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -570,7 +570,6 @@ extern struct movsl_mask {
+ #ifdef CONFIG_X86_32
+ # include "uaccess_32.h"
+ #else
+-# define ARCH_HAS_SEARCH_EXTABLE
+ # include "uaccess_64.h"
+ #endif
+ 
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index 34a56a9..4f577eb 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -1491,12 +1491,17 @@ error_kernelspace:
+ 	leaq irq_return(%rip),%rcx
+ 	cmpq %rcx,RIP+8(%rsp)
+ 	je error_swapgs
+-	movl %ecx,%ecx	/* zero extend */
+-	cmpq %rcx,RIP+8(%rsp)
+-	je error_swapgs
++	movl %ecx,%eax	/* zero extend */
++	cmpq %rax,RIP+8(%rsp)
++	je bstep_iret
+ 	cmpq $gs_change,RIP+8(%rsp)
+ 	je error_swapgs
+ 	jmp error_sti
++
++bstep_iret:
++	/* Fix truncated RIP */
++	movq %rcx,RIP+8(%rsp)
++	je error_swapgs
+ END(error_entry)
+ 
+ 
+diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
+index 61b41ca..d0474ad 100644
+--- a/arch/x86/mm/extable.c
++++ b/arch/x86/mm/extable.c
+@@ -35,34 +35,3 @@ int fixup_exception(struct pt_regs *regs)
+ 
+ 	return 0;
+ }
+-
+-#ifdef CONFIG_X86_64
+-/*
+- * Need to defined our own search_extable on X86_64 to work around
+- * a B stepping K8 bug.
+- */
+-const struct exception_table_entry *
+-search_extable(const struct exception_table_entry *first,
+-	       const struct exception_table_entry *last,
+-	       unsigned long value)
+-{
+-	/* B stepping K8 bug */
+-	if ((value >> 32) == 0)
+-		value |= 0xffffffffUL << 32;
+-
+-	while (first <= last) {
+-		const struct exception_table_entry *mid;
+-		long diff;
+-
+-		mid = (last - first) / 2 + first;
+-		diff = mid->insn - value;
+-		if (diff == 0)
+-			return mid;
+-		else if (diff < 0)
+-			first = mid+1;
+-		else
+-			last = mid-1;
+-	}
+-	return NULL;
+-}
+-#endif
+-- 
+1.7.12.1
+
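
The arithmetic behind the removed workaround, restated as a stand-alone
sketch (illustrative; the sample address is hypothetical): on a B-stepping
K8, a fault on IRET can report RIP with bits 63:32 cleared, and since
x86-64 kernel text lives at 0xffffffff80000000 and up, those bits can
simply be forced back to all-ones.

#include <assert.h>
#include <stdint.h>

static uint64_t repair_truncated_rip(uint64_t value)
{
	if ((value >> 32) == 0)			/* upper half lost by the erratum */
		value |= 0xffffffffULL << 32;	/* kernel text: bits 63:32 all set */
	return value;
}

int main(void)
{
	uint64_t rip = 0xffffffff81001234ULL;	/* hypothetical kernel text address */

	assert(repair_truncated_rip(rip & 0xffffffffULL) == rip);
	assert(repair_truncated_rip(rip) == rip);	/* untruncated values pass through */
	return 0;
}

The patch retires the special-cased search by fixing the truncated value
once, at fault entry: bstep_iret writes the known-good irq_return address
back into the saved RIP, so the generic search_extable() suffices.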

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-espfix-Don-t-leak-bits-31-16-of-esp-returning.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-espfix-Don-t-leak-bits-31-16-of-esp-returning.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,606 @@
+From aaa1c60cd155f4abd9883c9eb93ce0fd24f9b44e Mon Sep 17 00:00:00 2001
+From: "H. Peter Anvin" <hpa at linux.intel.com>
+Date: Tue, 29 Apr 2014 16:46:09 -0700
+Subject: x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit
+ stack
+
+commit 3891a04aafd668686239349ea58f3314ea2af86b upstream.
+
+The IRET instruction, when returning to a 16-bit segment, only
+restores the bottom 16 bits of the user space stack pointer.  This
+causes some 16-bit software to break, but it also leaks kernel state
+to user space.  We have a software workaround for that ("espfix") for
+the 32-bit kernel, but it relies on a nonzero stack segment base which
+is not available in 64-bit mode.
+
+In checkin:
+
+    b3b42ac2cbae x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels
+
+we "solved" this by forbidding 16-bit segments on 64-bit kernels, with
+the logic that 16-bit support is crippled on 64-bit kernels anyway (no
+V86 support), but it turns out that people are doing stuff like
+running old Win16 binaries under Wine and expect it to work.
+
+This works around this by creating percpu "ministacks", each of which
+is mapped 2^16 times 64K apart.  When we detect that the return SS is
+on the LDT, we copy the IRET frame to the ministack and use the
+relevant alias to return to userspace.  The ministacks are mapped
+readonly, so if IRET faults we promote #GP to #DF which is an IST
+vector and thus has its own stack; we then do the fixup in the #DF
+handler.
+
+(Making #GP an IST exception would make the msr_safe functions unsafe
+in NMI/MC context, and quite possibly have other effects.)
+
+Special thanks to:
+
+- Andy Lutomirski, for the suggestion of using very small stack slots
+  and copy (as opposed to map) the IRET frame there, and for the
+  suggestion to mark them readonly and let the fault promote to #DF.
+- Konrad Wilk for paravirt fixup and testing.
+- Borislav Petkov for testing help and useful comments.
+
+Reported-by: Brian Gerst <brgerst at gmail.com>
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com
+Cc: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
+Cc: Borislav Petkov <bp at alien8.de>
+Cc: Andrew Lutomriski <amluto at gmail.com>
+Cc: Linus Torvalds <torvalds at linux-foundation.org>
+Cc: Dirk Hohndel <dirk at hohndel.org>
+Cc: Arjan van de Ven <arjan.van.de.ven at intel.com>
+Cc: comex <comexk at gmail.com>
+Cc: Alexander van Heukelum <heukelum at fastmail.fm>
+Cc: Boris Ostrovsky <boris.ostrovsky at oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+(cherry picked from 3.2 commit e7836514086d53e0ffaee18d67d85d9477ecdb12)
+
+Conflicts:
+	arch/x86/include/asm/setup.h
+	arch/x86/kernel/Makefile
+	arch/x86/kernel/entry_64.S
+	arch/x86/mm/dump_pagetables.c
+
+Notes:
+  - no DECLARE_PER_CPU_READ_MOSTLY, switch to DECLARE_PER_CPU instead
+  - no this_cpu_read(foo), switch to per_cpu(foo, smp_processor_id())
+  - no this_cpu_write(foo, bar), switch to per_cpu(foo, smp_processor_id()) = bar
+---
+ Documentation/x86/x86_64/mm.txt         |   2 +
+ arch/x86/include/asm/pgtable_64_types.h |   2 +
+ arch/x86/kernel/Makefile                |   1 +
+ arch/x86/kernel/entry_64.S              |  72 ++++++++++-
+ arch/x86/kernel/espfix_64.c             | 208 ++++++++++++++++++++++++++++++++
+ arch/x86/kernel/ldt.c                   |  11 --
+ arch/x86/kernel/smpboot.c               |   7 ++
+ arch/x86/mm/dump_pagetables.c           |  38 ++++--
+ init/main.c                             |   4 +
+ 9 files changed, 319 insertions(+), 26 deletions(-)
+ create mode 100644 arch/x86/kernel/espfix_64.c
+
+diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
+index d6498e3..f33a936 100644
+--- a/Documentation/x86/x86_64/mm.txt
++++ b/Documentation/x86/x86_64/mm.txt
+@@ -12,6 +12,8 @@ ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
+ ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
+ ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
+ ... unused hole ...
++ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
++... unused hole ...
+ ffffffff80000000 - ffffffffa0000000 (=512 MB)  kernel text mapping, from phys 0
+ ffffffffa0000000 - fffffffffff00000 (=1536 MB) module mapping space
+ 
+diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
+index 766ea16..51817fa 100644
+--- a/arch/x86/include/asm/pgtable_64_types.h
++++ b/arch/x86/include/asm/pgtable_64_types.h
+@@ -59,5 +59,7 @@ typedef struct { pteval_t pte; } pte_t;
+ #define MODULES_VADDR    _AC(0xffffffffa0000000, UL)
+ #define MODULES_END      _AC(0xffffffffff000000, UL)
+ #define MODULES_LEN   (MODULES_END - MODULES_VADDR)
++#define ESPFIX_PGD_ENTRY _AC(-2, UL)
++#define ESPFIX_BASE_ADDR (ESPFIX_PGD_ENTRY << PGDIR_SHIFT)
+ 
+ #endif /* _ASM_X86_PGTABLE_64_DEFS_H */
+diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
+index d1911ab..f58fd89 100644
+--- a/arch/x86/kernel/Makefile
++++ b/arch/x86/kernel/Makefile
+@@ -40,6 +40,7 @@ obj-$(CONFIG_X86_32)	+= probe_roms_32.o
+ obj-$(CONFIG_X86_32)	+= sys_i386_32.o i386_ksyms_32.o
+ obj-$(CONFIG_X86_64)	+= sys_x86_64.o x8664_ksyms_64.o
+ obj-$(CONFIG_X86_64)	+= syscall_64.o vsyscall_64.o
++obj-$(CONFIG_X86_64)	+= espfix_64.o
+ obj-y			+= bootflag.o e820.o
+ obj-y			+= pci-dma.o quirks.o i8237.o topology.o kdebugfs.o
+ obj-y			+= alternative.o i8253.o pci-nommu.o
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index fb402d5..a69901e 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -53,6 +53,7 @@
+ #include <asm/paravirt.h>
+ #include <asm/ftrace.h>
+ #include <asm/percpu.h>
++#include <asm/pgtable_types.h>
+ 
+ /* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this.  */
+ #include <linux/elf-em.h>
+@@ -858,10 +859,18 @@ restore_args:
+ 	RESTORE_ARGS 0,8,0
+ 
+ irq_return:
++	/*
++	 * Are we returning to a stack segment from the LDT?  Note: in
++	 * 64-bit mode SS:RSP on the exception stack is always valid.
++	 */
++	testb $4,(SS-RIP)(%rsp)
++	jnz irq_return_ldt
++
++irq_return_iret:
+ 	INTERRUPT_RETURN
+ 
+ 	.section __ex_table, "a"
+-	.quad irq_return, bad_iret
++	.quad irq_return_iret, bad_iret
+ 	.previous
+ 
+ #ifdef CONFIG_PARAVIRT
+@@ -873,6 +882,30 @@ ENTRY(native_iret)
+ 	.previous
+ #endif
+ 
++irq_return_ldt:
++	pushq_cfi %rax
++	pushq_cfi %rdi
++	SWAPGS
++	movq PER_CPU_VAR(espfix_waddr),%rdi
++	movq %rax,(0*8)(%rdi)	/* RAX */
++	movq (2*8)(%rsp),%rax	/* RIP */
++	movq %rax,(1*8)(%rdi)
++	movq (3*8)(%rsp),%rax	/* CS */
++	movq %rax,(2*8)(%rdi)
++	movq (4*8)(%rsp),%rax	/* RFLAGS */
++	movq %rax,(3*8)(%rdi)
++	movq (6*8)(%rsp),%rax	/* SS */
++	movq %rax,(5*8)(%rdi)
++	movq (5*8)(%rsp),%rax	/* RSP */
++	movq %rax,(4*8)(%rdi)
++	andl $0xffff0000,%eax
++	popq_cfi %rdi
++	orq PER_CPU_VAR(espfix_stack),%rax
++	SWAPGS
++	movq %rax,%rsp
++	popq_cfi %rax
++	jmp irq_return_iret
++
+ 	.section .fixup,"ax"
+ bad_iret:
+ 	/*
+@@ -938,10 +971,41 @@ ENTRY(retint_kernel)
+ 	call preempt_schedule_irq
+ 	jmp exit_intr
+ #endif
+-
+ 	CFI_ENDPROC
+ END(common_interrupt)
+ 
++	/*
++	 * If IRET takes a fault on the espfix stack, then we
++	 * end up promoting it to a doublefault.  In that case,
++	 * modify the stack to make it look like we just entered
++	 * the #GP handler from user space, similar to bad_iret.
++	 */
++	ALIGN
++__do_double_fault:
++	XCPT_FRAME 1 RDI+8
++	movq RSP(%rdi),%rax		/* Trap on the espfix stack? */
++	sarq $PGDIR_SHIFT,%rax
++	cmpl $ESPFIX_PGD_ENTRY,%eax
++	jne do_double_fault		/* No, just deliver the fault */
++	cmpl $__KERNEL_CS,CS(%rdi)
++	jne do_double_fault
++	movq RIP(%rdi),%rax
++	cmpq $irq_return_iret,%rax
++#ifdef CONFIG_PARAVIRT
++	je 1f
++	cmpq $native_iret,%rax
++#endif
++	jne do_double_fault		/* This shouldn't happen... */
++1:
++	movq PER_CPU_VAR(kernel_stack),%rax
++	subq $(6*8-KERNEL_STACK_OFFSET),%rax	/* Reset to original stack */
++	movq %rax,RSP(%rdi)
++	movq $0,(%rax)			/* Missing (lost) #GP error code */
++	movq $general_protection,RIP(%rdi)
++	retq
++	CFI_ENDPROC
++END(__do_double_fault)
++
+ /*
+  * APIC interrupts.
+  */
+@@ -1118,7 +1182,7 @@ zeroentry overflow do_overflow
+ zeroentry bounds do_bounds
+ zeroentry invalid_op do_invalid_op
+ zeroentry device_not_available do_device_not_available
+-paranoiderrorentry double_fault do_double_fault
++paranoiderrorentry double_fault __do_double_fault
+ zeroentry coprocessor_segment_overrun do_coprocessor_segment_overrun
+ errorentry invalid_TSS do_invalid_TSS
+ errorentry segment_not_present do_segment_not_present
+@@ -1488,7 +1552,7 @@ error_sti:
+  */
+ error_kernelspace:
+ 	incl %ebx
+-	leaq irq_return(%rip),%rcx
++	leaq irq_return_iret(%rip),%rcx
+ 	cmpq %rcx,RIP+8(%rsp)
+ 	je error_swapgs
+ 	movl %ecx,%eax	/* zero extend */
+diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
+new file mode 100644
+index 0000000..ae10040
+--- /dev/null
++++ b/arch/x86/kernel/espfix_64.c
+@@ -0,0 +1,208 @@
++/* ----------------------------------------------------------------------- *
++ *
++ *   Copyright 2014 Intel Corporation; author: H. Peter Anvin
++ *
++ *   This program is free software; you can redistribute it and/or modify it
++ *   under the terms and conditions of the GNU General Public License,
++ *   version 2, as published by the Free Software Foundation.
++ *
++ *   This program is distributed in the hope it will be useful, but WITHOUT
++ *   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
++ *   FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
++ *   more details.
++ *
++ * ----------------------------------------------------------------------- */
++
++/*
++ * The IRET instruction, when returning to a 16-bit segment, only
++ * restores the bottom 16 bits of the user space stack pointer.  This
++ * causes some 16-bit software to break, but it also leaks kernel state
++ * to user space.
++ *
++ * This works around this by creating percpu "ministacks", each of which
++ * is mapped 2^16 times 64K apart.  When we detect that the return SS is
++ * on the LDT, we copy the IRET frame to the ministack and use the
++ * relevant alias to return to userspace.  The ministacks are mapped
++ * readonly, so if IRET faults we promote #GP to #DF which is an IST
++ * vector and thus has its own stack; we then do the fixup in the #DF
++ * handler.
++ *
++ * This file sets up the ministacks and the related page tables.  The
++ * actual ministack invocation is in entry_64.S.
++ */
++
++#include <linux/init.h>
++#include <linux/init_task.h>
++#include <linux/kernel.h>
++#include <linux/percpu.h>
++#include <linux/gfp.h>
++#include <linux/random.h>
++#include <asm/pgtable.h>
++#include <asm/pgalloc.h>
++#include <asm/setup.h>
++
++/*
++ * Note: we only need 6*8 = 48 bytes for the espfix stack, but round
++ * it up to a cache line to avoid unnecessary sharing.
++ */
++#define ESPFIX_STACK_SIZE	(8*8UL)
++#define ESPFIX_STACKS_PER_PAGE	(PAGE_SIZE/ESPFIX_STACK_SIZE)
++
++/* There is address space for how many espfix pages? */
++#define ESPFIX_PAGE_SPACE	(1UL << (PGDIR_SHIFT-PAGE_SHIFT-16))
++
++#define ESPFIX_MAX_CPUS		(ESPFIX_STACKS_PER_PAGE * ESPFIX_PAGE_SPACE)
++#if CONFIG_NR_CPUS > ESPFIX_MAX_CPUS
++# error "Need more than one PGD for the ESPFIX hack"
++#endif
++
++#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)
++
++/* This contains the *bottom* address of the espfix stack */
++DEFINE_PER_CPU(unsigned long, espfix_stack);
++DEFINE_PER_CPU(unsigned long, espfix_waddr);
++
++/* Initialization mutex - should this be a spinlock? */
++static DEFINE_MUTEX(espfix_init_mutex);
++
++/* Page allocation bitmap - each page serves ESPFIX_STACKS_PER_PAGE CPUs */
++#define ESPFIX_MAX_PAGES  DIV_ROUND_UP(CONFIG_NR_CPUS, ESPFIX_STACKS_PER_PAGE)
++static void *espfix_pages[ESPFIX_MAX_PAGES];
++
++static __page_aligned_bss pud_t espfix_pud_page[PTRS_PER_PUD]
++	__aligned(PAGE_SIZE);
++
++static unsigned int page_random, slot_random;
++
++/*
++ * This returns the bottom address of the espfix stack for a specific CPU.
++ * The math allows for a non-power-of-two ESPFIX_STACK_SIZE, in which case
++ * we have to account for some amount of padding at the end of each page.
++ */
++static inline unsigned long espfix_base_addr(unsigned int cpu)
++{
++	unsigned long page, slot;
++	unsigned long addr;
++
++	page = (cpu / ESPFIX_STACKS_PER_PAGE) ^ page_random;
++	slot = (cpu + slot_random) % ESPFIX_STACKS_PER_PAGE;
++	addr = (page << PAGE_SHIFT) + (slot * ESPFIX_STACK_SIZE);
++	addr = (addr & 0xffffUL) | ((addr & ~0xffffUL) << 16);
++	addr += ESPFIX_BASE_ADDR;
++	return addr;
++}
++
++#define PTE_STRIDE        (65536/PAGE_SIZE)
++#define ESPFIX_PTE_CLONES (PTRS_PER_PTE/PTE_STRIDE)
++#define ESPFIX_PMD_CLONES PTRS_PER_PMD
++#define ESPFIX_PUD_CLONES (65536/(ESPFIX_PTE_CLONES*ESPFIX_PMD_CLONES))
++
++#define PGTABLE_PROT	  ((_KERNPG_TABLE & ~_PAGE_RW) | _PAGE_NX)
++
++static void init_espfix_random(void)
++{
++	unsigned long rand;
++
++	/*
++	 * This is run before the entropy pools are initialized,
++	 * but this is hopefully better than nothing.
++	 */
++	if (!arch_get_random_long(&rand)) {
++		/* The constant is an arbitrary large prime */
++		rdtscll(rand);
++		rand *= 0xc345c6b72fd16123UL;
++	}
++
++	slot_random = rand % ESPFIX_STACKS_PER_PAGE;
++	page_random = (rand / ESPFIX_STACKS_PER_PAGE)
++		& (ESPFIX_PAGE_SPACE - 1);
++}
++
++void __init init_espfix_bsp(void)
++{
++	pgd_t *pgd_p;
++	pteval_t ptemask;
++
++	ptemask = __supported_pte_mask;
++
++	/* Install the espfix pud into the kernel page directory */
++	pgd_p = &init_level4_pgt[pgd_index(ESPFIX_BASE_ADDR)];
++	pgd_populate(&init_mm, pgd_p, (pud_t *)espfix_pud_page);
++
++	/* Randomize the locations */
++	init_espfix_random();
++
++	/* The rest is the same as for any other processor */
++	init_espfix_ap();
++}
++
++void init_espfix_ap(void)
++{
++	unsigned int cpu, page;
++	unsigned long addr;
++	pud_t pud, *pud_p;
++	pmd_t pmd, *pmd_p;
++	pte_t pte, *pte_p;
++	int n;
++	void *stack_page;
++	pteval_t ptemask;
++
++	/* We only have to do this once... */
++	if (likely(per_cpu(espfix_stack, smp_processor_id())))
++		return;		/* Already initialized */
++
++	cpu = smp_processor_id();
++	addr = espfix_base_addr(cpu);
++	page = cpu/ESPFIX_STACKS_PER_PAGE;
++
++	/* Did another CPU already set this up? */
++	stack_page = ACCESS_ONCE(espfix_pages[page]);
++	if (likely(stack_page))
++		goto done;
++
++	mutex_lock(&espfix_init_mutex);
++
++	/* Did we race on the lock? */
++	stack_page = ACCESS_ONCE(espfix_pages[page]);
++	if (stack_page)
++		goto unlock_done;
++
++	ptemask = __supported_pte_mask;
++
++	pud_p = &espfix_pud_page[pud_index(addr)];
++	pud = *pud_p;
++	if (!pud_present(pud)) {
++		pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP);
++		pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask));
++		paravirt_alloc_pud(&init_mm, __pa(pmd_p) >> PAGE_SHIFT);
++		for (n = 0; n < ESPFIX_PUD_CLONES; n++)
++			set_pud(&pud_p[n], pud);
++	}
++
++	pmd_p = pmd_offset(&pud, addr);
++	pmd = *pmd_p;
++	if (!pmd_present(pmd)) {
++		pte_p = (pte_t *)__get_free_page(PGALLOC_GFP);
++		pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask));
++		paravirt_alloc_pmd(&init_mm, __pa(pte_p) >> PAGE_SHIFT);
++		for (n = 0; n < ESPFIX_PMD_CLONES; n++)
++			set_pmd(&pmd_p[n], pmd);
++	}
++
++	pte_p = pte_offset_kernel(&pmd, addr);
++	stack_page = (void *)__get_free_page(GFP_KERNEL);
++	pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));
++	paravirt_alloc_pte(&init_mm, __pa(stack_page) >> PAGE_SHIFT);
++	for (n = 0; n < ESPFIX_PTE_CLONES; n++)
++		set_pte(&pte_p[n*PTE_STRIDE], pte);
++
++	/* Job is done for this CPU and any CPU which shares this page */
++	ACCESS_ONCE(espfix_pages[page]) = stack_page;
++
++unlock_done:
++	mutex_unlock(&espfix_init_mutex);
++done:
++	per_cpu(espfix_stack, smp_processor_id()) = addr;
++	per_cpu(espfix_waddr, smp_processor_id()) =
++		(unsigned long)stack_page + (addr & ~PAGE_MASK);
++}
+diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
+index 75e356c..ec6ef60 100644
+--- a/arch/x86/kernel/ldt.c
++++ b/arch/x86/kernel/ldt.c
+@@ -229,17 +229,6 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
+ 		}
+ 	}
+ 
+-	/*
+-	 * On x86-64 we do not support 16-bit segments due to
+-	 * IRET leaking the high bits of the kernel stack address.
+-	 */
+-#ifdef CONFIG_X86_64
+-	if (!ldt_info.seg_32bit) {
+-		error = -EINVAL;
+-		goto out_unlock;
+-	}
+-#endif
+-
+ 	fill_ldt(&ldt, &ldt_info);
+ 	if (oldmode)
+ 		ldt.avl = 0;
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 7e8e905..0854448 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -326,6 +326,13 @@ notrace static void __cpuinit start_secondary(void *unused)
+ 	wmb();
+ 
+ 	/*
++	 * Enable the espfix hack for this CPU
++	 */
++#ifdef CONFIG_X86_64
++	init_espfix_ap();
++#endif
++
++	/*
+ 	 * We need to hold call_lock, so there is no inconsistency
+ 	 * between the time smp_call_function() determines number of
+ 	 * IPI recipients, and the time when the determination is made
+diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
+index a725b7f..3d6150e 100644
+--- a/arch/x86/mm/dump_pagetables.c
++++ b/arch/x86/mm/dump_pagetables.c
+@@ -30,11 +30,13 @@ struct pg_state {
+ 	unsigned long start_address;
+ 	unsigned long current_address;
+ 	const struct addr_marker *marker;
++	unsigned long lines;
+ };
+ 
+ struct addr_marker {
+ 	unsigned long start_address;
+ 	const char *name;
++	unsigned long max_lines;
+ };
+ 
+ /* Address space markers hints */
+@@ -45,6 +47,7 @@ static struct addr_marker address_markers[] = {
+ 	{ PAGE_OFFSET,		"Low Kernel Mapping" },
+ 	{ VMALLOC_START,        "vmalloc() Area" },
+ 	{ VMEMMAP_START,        "Vmemmap" },
++	{ ESPFIX_BASE_ADDR,	"ESPfix Area", 16 },
+ 	{ __START_KERNEL_map,   "High Kernel Mapping" },
+ 	{ MODULES_VADDR,        "Modules" },
+ 	{ MODULES_END,          "End Modules" },
+@@ -141,7 +144,7 @@ static void note_page(struct seq_file *m, struct pg_state *st,
+ 		      pgprot_t new_prot, int level)
+ {
+ 	pgprotval_t prot, cur;
+-	static const char units[] = "KMGTPE";
++	static const char units[] = "BKMGTPE";
+ 
+ 	/*
+ 	 * If we have a "break" in the series, we need to flush the state that
+@@ -156,6 +159,7 @@ static void note_page(struct seq_file *m, struct pg_state *st,
+ 		st->current_prot = new_prot;
+ 		st->level = level;
+ 		st->marker = address_markers;
++		st->lines = 0;
+ 		seq_printf(m, "---[ %s ]---\n", st->marker->name);
+ 	} else if (prot != cur || level != st->level ||
+ 		   st->current_address >= st->marker[1].start_address) {
+@@ -166,17 +170,21 @@ static void note_page(struct seq_file *m, struct pg_state *st,
+ 		/*
+ 		 * Now print the actual finished series
+ 		 */
+-		seq_printf(m, "0x%0*lx-0x%0*lx   ",
+-			   width, st->start_address,
+-			   width, st->current_address);
+-
+-		delta = (st->current_address - st->start_address) >> 10;
+-		while (!(delta & 1023) && unit[1]) {
+-			delta >>= 10;
+-			unit++;
++		if (!st->marker->max_lines ||
++		    st->lines < st->marker->max_lines) {
++			seq_printf(m, "0x%0*lx-0x%0*lx   ",
++				   width, st->start_address,
++				   width, st->current_address);
++
++			delta = (st->current_address - st->start_address);
++			while (!(delta & 1023) && unit[1]) {
++				delta >>= 10;
++				unit++;
++			}
++			seq_printf(m, "%9lu%c ", delta, *unit);
++			printk_prot(m, st->current_prot, st->level);
+ 		}
+-		seq_printf(m, "%9lu%c ", delta, *unit);
+-		printk_prot(m, st->current_prot, st->level);
++		st->lines++;
+ 
+ 		/*
+ 		 * We print markers for special areas of address space,
+@@ -184,7 +192,15 @@ static void note_page(struct seq_file *m, struct pg_state *st,
+ 		 * This helps in the interpretation.
+ 		 */
+ 		if (st->current_address >= st->marker[1].start_address) {
++			if (st->marker->max_lines &&
++			    st->lines > st->marker->max_lines) {
++				unsigned long nskip =
++					st->lines - st->marker->max_lines;
++				seq_printf(m, "... %lu entr%s skipped ... \n",
++					   nskip, nskip == 1 ? "y" : "ies");
++			}
+ 			st->marker++;
++			st->lines = 0;
+ 			seq_printf(m, "---[ %s ]---\n", st->marker->name);
+ 		}
+ 
+diff --git a/init/main.c b/init/main.c
+index 1eb4bd5..0dfcc1a 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -659,6 +659,10 @@ asmlinkage void __init start_kernel(void)
+ 	if (efi_enabled)
+ 		efi_enter_virtual_mode();
+ #endif
++#ifdef CONFIG_X86_64
++	/* Should be run before the first non-init thread is created */
++	init_espfix_bsp();
++#endif
+ 	thread_info_cache_init();
+ 	cred_init();
+ 	fork_init(totalram_pages);
+-- 
+1.7.12.1
+
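
The address arithmetic at the heart of the patch can be checked in
isolation. Below is a minimal sketch of espfix_base_addr() and of the
alias selection done in irq_return_ldt, with the per-boot randomization
omitted and a made-up user %rsp (illustrative only, not kernel code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Constants mirror the patch (4-level paging); randomization omitted. */
#define PAGE_SHIFT		12
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define PGDIR_SHIFT		39
#define ESPFIX_BASE_ADDR	((uint64_t)-2 << PGDIR_SHIFT) /* 0xffffff0000000000 */
#define ESPFIX_STACK_SIZE	(8 * 8UL)
#define ESPFIX_STACKS_PER_PAGE	(PAGE_SIZE / ESPFIX_STACK_SIZE)

/* Bottom address of a CPU's ministack, as in espfix_base_addr() above,
 * minus the slot/page randomization. */
static uint64_t espfix_base(unsigned int cpu)
{
	uint64_t page = cpu / ESPFIX_STACKS_PER_PAGE;
	uint64_t slot = cpu % ESPFIX_STACKS_PER_PAGE;
	uint64_t addr = (page << PAGE_SHIFT) + slot * ESPFIX_STACK_SIZE;

	/* Stretch the address so bits 31:16 are zero; they get filled
	 * in from the user's %rsp when the alias is chosen. */
	addr = (addr & 0xffffUL) | ((addr & ~0xffffUL) << 16);
	return addr + ESPFIX_BASE_ADDR;
}

int main(void)
{
	uint64_t stack = espfix_base(1);
	uint64_t user_rsp = 0xbeef1234;		/* hypothetical user %rsp */

	/* The alias irq_return_ldt switches to before IRET: */
	uint64_t alias = stack | (user_rsp & 0xffff0000ULL);

	printf("ministack: %#" PRIx64 "\n", stack);	/* ...0000000040 */
	printf("alias:     %#" PRIx64 "\n", alias);	/* ...00beef0040 */
	/* IRET restores only bits 15:0 of %esp; bits 31:16 keep the
	 * alias's value, which is the user's own 0xbeef, so nothing
	 * about the kernel stack leaks. */
	return 0;
}

Every alias of a ministack differs only in bits 31:16, and the cloned
PTEs map them all to the same physical page, which is why one physical
page per ESPFIX_STACKS_PER_PAGE (64) CPUs suffices.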

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-modify_ldt-Ban-16-bit-segments-on-64-bit-kern.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-64-modify_ldt-Ban-16-bit-segments-on-64-bit-kern.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,53 @@
+From ca5fc4c87fa72f39f175b52b05ab67d8aec04530 Mon Sep 17 00:00:00 2001
+From: "H. Peter Anvin" <hpa at linux.intel.com>
+Date: Sun, 16 Mar 2014 15:31:54 -0700
+Subject: x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels
+
+commit b3b42ac2cbae1f3cecbb6229964a4d48af31d382 upstream.
+
+The IRET instruction, when returning to a 16-bit segment, only
+restores the bottom 16 bits of the user space stack pointer.  We have
+a software workaround for that ("espfix") for the 32-bit kernel, but
+it relies on a nonzero stack segment base which is not available in
+64-bit mode.
+
+Since 16-bit support is somewhat crippled anyway on a 64-bit kernel
+(no V86 mode), and most (if not quite all) 64-bit processors support
+virtualization for the users who really need it, simply reject
+attempts at creating a 16-bit segment when running on top of a 64-bit
+kernel.
+
+Cc: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+Link: http://lkml.kernel.org/n/tip-kicdm89kzw9lldryb1br9od0@git.kernel.org
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+(cherry picked from commit a862b5c4076b1ba4dd6c87aebac478853dc6db47)
+---
+ arch/x86/kernel/ldt.c | 11 +++++++++++
+ 1 file changed, 11 insertions(+)
+
+diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
+index ec6ef60..75e356c 100644
+--- a/arch/x86/kernel/ldt.c
++++ b/arch/x86/kernel/ldt.c
+@@ -229,6 +229,17 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
+ 		}
+ 	}
+ 
++	/*
++	 * On x86-64 we do not support 16-bit segments due to
++	 * IRET leaking the high bits of the kernel stack address.
++	 */
++#ifdef CONFIG_X86_64
++	if (!ldt_info.seg_32bit) {
++		error = -EINVAL;
++		goto out_unlock;
++	}
++#endif
++
+ 	fill_ldt(&ldt, &ldt_info);
+ 	if (oldmode)
+ 		ldt.avl = 0;
+-- 
+1.7.12.1
+
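
What the ban looks like from user space can be probed with a few lines of
C (a sketch, not part of the patch): with this change applied, the
write_ldt() hunk above makes the call fail with EINVAL. Note that the
espfix64 patch earlier in this mail removes the check again, so the probe
only reports a rejection on kernels where the ban is still in effect.

#include <asm/ldt.h>		/* struct user_desc (Linux-specific) */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct user_desc d;

	memset(&d, 0, sizeof(d));
	d.entry_number = 0;
	d.limit = 0xffff;
	d.seg_32bit = 0;	/* ask for a 16-bit segment */

	if (syscall(SYS_modify_ldt, 1, &d, sizeof(d)) == -1 && errno == EINVAL)
		printf("16-bit LDT segments rejected by this kernel\n");
	else
		printf("16-bit LDT segments allowed\n");
	return 0;
}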

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Fix-broken-header-guard.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Fix-broken-header-guard.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,31 @@
+From fba3c1324c8c4030d06cf71069325cdbbed1b879 Mon Sep 17 00:00:00 2001
+From: "H. Peter Anvin" <hpa at linux.intel.com>
+Date: Fri, 2 May 2014 11:33:51 -0700
+Subject: x86, espfix: Fix broken header guard
+
+commit 20b68535cd27183ebd3651ff313afb2b97dac941 upstream.
+
+Header guard is #ifndef, not #ifdef...
+
+Reported-by: Fengguang Wu <fengguang.wu at intel.com>
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+(cherry picked from commit 7d4a9eabfe6c7fed70941aceb3b20bf393652bcb)
+---
+ arch/x86/include/asm/espfix.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/include/asm/espfix.h b/arch/x86/include/asm/espfix.h
+index e2fb446..f017535 100644
+--- a/arch/x86/include/asm/espfix.h
++++ b/arch/x86/include/asm/espfix.h
+@@ -1,4 +1,4 @@
+-#ifdef _ASM_X86_ESPFIX_H
++#ifndef _ASM_X86_ESPFIX_H
+ #define _ASM_X86_ESPFIX_H
+ 
+ #ifdef CONFIG_X86_64
+-- 
+1.7.12.1
+
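
The one-character bug is easy to underestimate: _ASM_X86_ESPFIX_H starts
out undefined, so with #ifdef the guard skips the entire header body on
the very first inclusion and none of the declarations ever exist. A
stand-alone reproduction of the pattern (illustrative; the macro names
are made up):

#include <stdio.h>

/* Minimal reproduction of the broken guard pattern. */
#ifdef DEMO_GUARD_H		/* BUG: nothing has defined this yet... */
#define DEMO_GUARD_H
#define DECLARED_BY_HEADER 1	/* ...so this is silently skipped */
#endif

int main(void)
{
#ifdef DECLARED_BY_HEADER
	printf("header body was compiled\n");
#else
	printf("header body was skipped - the #ifdef guard hid it\n");
#endif
	return 0;
}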

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Make-espfix64-a-Kconfig-option-fix-UML.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Make-espfix64-a-Kconfig-option-fix-UML.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,90 @@
+From 64c91778af9435b8fdce5ec6e5a2b0c08635fbff Mon Sep 17 00:00:00 2001
+From: "H. Peter Anvin" <hpa at zytor.com>
+Date: Sun, 4 May 2014 10:00:49 -0700
+Subject: x86, espfix: Make espfix64 a Kconfig option, fix UML
+
+commit 197725de65477bc8509b41388157c1a2283542bb upstream.
+
+Make espfix64 a hidden Kconfig option.  This fixes the x86-64 UML
+build which had broken due to the non-existence of init_espfix_bsp()
+in UML: since UML uses its own Kconfig, this option does not appear in
+the UML build.
+
+This also makes it possible to make support for 16-bit segments a
+configuration option, for the people who want to minimize the size of
+the kernel.
+
+Reported-by: Ingo Molnar <mingo at kernel.org>
+Signed-off-by: H. Peter Anvin <hpa at zytor.com>
+Cc: Richard Weinberger <richard at nod.at>
+Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+(cherry picked from 3.2 commit da22646d97b7322c757f3a7a21805a3475fed231)
+
+Conflicts:
+	arch/x86/kernel/Makefile
+---
+ arch/x86/Kconfig          | 4 ++++
+ arch/x86/kernel/Makefile  | 2 +-
+ arch/x86/kernel/smpboot.c | 2 +-
+ init/main.c               | 2 +-
+ 4 files changed, 7 insertions(+), 3 deletions(-)
+
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index ee0168d..026fe60 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -887,6 +887,10 @@ config VM86
+ 	  XFree86 to initialize some video cards via BIOS. Disabling this
+ 	  option saves about 6k.
+ 
++config X86_ESPFIX64
++	def_bool y
++	depends on X86_64
++
+ config TOSHIBA
+ 	tristate "Toshiba Laptop support"
+ 	depends on X86_32
+diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
+index f58fd89..945ad6f 100644
+--- a/arch/x86/kernel/Makefile
++++ b/arch/x86/kernel/Makefile
+@@ -40,7 +40,7 @@ obj-$(CONFIG_X86_32)	+= probe_roms_32.o
+ obj-$(CONFIG_X86_32)	+= sys_i386_32.o i386_ksyms_32.o
+ obj-$(CONFIG_X86_64)	+= sys_x86_64.o x8664_ksyms_64.o
+ obj-$(CONFIG_X86_64)	+= syscall_64.o vsyscall_64.o
+-obj-$(CONFIG_X86_64)	+= espfix_64.o
++obj-$(CONFIG_X86_ESPFIX64)	+= espfix_64.o
+ obj-y			+= bootflag.o e820.o
+ obj-y			+= pci-dma.o quirks.o i8237.o topology.o kdebugfs.o
+ obj-y			+= alternative.o i8253.o pci-nommu.o
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 0854448..ca6b3f9 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -328,7 +328,7 @@ notrace static void __cpuinit start_secondary(void *unused)
+ 	/*
+ 	 * Enable the espfix hack for this CPU
+ 	 */
+-#ifdef CONFIG_X86_64
++#ifdef CONFIG_X86_ESPFIX64
+ 	init_espfix_ap();
+ #endif
+ 
+diff --git a/init/main.c b/init/main.c
+index 0dfcc1a..00e6286 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -659,7 +659,7 @@ asmlinkage void __init start_kernel(void)
+ 	if (efi_enabled)
+ 		efi_enter_virtual_mode();
+ #endif
+-#ifdef CONFIG_X86_64
++#ifdef CONFIG_X86_ESPFIX64
+ 	/* Should be run before the first non-init thread is created */
+ 	init_espfix_bsp();
+ #endif
+-- 
+1.7.12.1
+

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Make-it-possible-to-disable-16-bit-suppor.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Make-it-possible-to-disable-16-bit-suppor.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,232 @@
+From 7bd9195ee0ccdb831283b81c6f6b9abaa4451614 Mon Sep 17 00:00:00 2001
+From: "H. Peter Anvin" <hpa at zytor.com>
+Date: Sun, 4 May 2014 10:36:22 -0700
+Subject: x86, espfix: Make it possible to disable 16-bit support
+
+commit 34273f41d57ee8d854dcd2a1d754cbb546cb548f upstream.
+
+Embedded systems, which may be very memory-size-sensitive, are
+extremely unlikely to ever encounter any 16-bit software, so make it
+a CONFIG_EXPERT option to turn off support for any 16-bit software
+whatsoever.
+
+Signed-off-by: H. Peter Anvin <hpa at zytor.com>
+Link: http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-hpa@linux.intel.com
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+(cherry picked from 3.2 commit 70d87cbbd92a3611655b39003176ee1033796bf7)
+
+Conflicts:
+	arch/x86/kernel/entry_32.S
+
+Notes:
+  - Fixed arch/x86/kernel/ldt.c (no IS_ENABLED on 2.6.32).
+  - No CONFIG_EXPERT condition in 2.6.32.
+---
+ arch/x86/Kconfig           | 23 ++++++++++++++++++-----
+ arch/x86/kernel/entry_32.S | 12 ++++++++++++
+ arch/x86/kernel/entry_64.S |  8 ++++++++
+ arch/x86/kernel/ldt.c      |  6 ++++++
+ 4 files changed, 44 insertions(+), 5 deletions(-)
+
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 026fe60..67c3187 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -882,14 +882,27 @@ config VM86
+ 	default y
+ 	depends on X86_32
+ 	---help---
+-	  This option is required by programs like DOSEMU to run 16-bit legacy
+-	  code on X86 processors. It also may be needed by software like
+-	  XFree86 to initialize some video cards via BIOS. Disabling this
+-	  option saves about 6k.
++	  This option is required by programs like DOSEMU to run
++	  16-bit real mode legacy code on x86 processors. It also may
++	  be needed by software like XFree86 to initialize some video
++	  cards via BIOS. Disabling this option saves about 6K.
++
++config X86_16BIT
++	bool "Enable support for 16-bit segments"
++	default y
++	---help---
++	  This option is required by programs like Wine to run 16-bit
++	  protected mode legacy code on x86 processors.  Disabling
++	  this option saves about 300 bytes on i386, or around 6K text
++	  plus 16K runtime memory on x86-64.
++
++config X86_ESPFIX32
++	def_bool y
++	depends on X86_16BIT && X86_32
+ 
+ config X86_ESPFIX64
+ 	def_bool y
+-	depends on X86_64
++	depends on X86_16BIT && X86_64
+ 
+ config TOSHIBA
+ 	tristate "Toshiba Laptop support"
+diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
+index db7dbe7..c1207f7 100644
+--- a/arch/x86/kernel/entry_32.S
++++ b/arch/x86/kernel/entry_32.S
+@@ -543,6 +543,7 @@ syscall_exit:
+ restore_all:
+ 	TRACE_IRQS_IRET
+ restore_all_notrace:
++#ifdef CONFIG_X86_ESPFIX32
+ 	movl PT_EFLAGS(%esp), %eax	# mix EFLAGS, SS and CS
+ 	# Warning: PT_OLDSS(%esp) contains the wrong/random values if we
+ 	# are returning to the kernel.
+@@ -553,6 +554,7 @@ restore_all_notrace:
+ 	cmpl $((SEGMENT_LDT << 8) | USER_RPL), %eax
+ 	CFI_REMEMBER_STATE
+ 	je ldt_ss			# returning to user-space with LDT SS
++#endif
+ restore_nocheck:
+ 	RESTORE_REGS 4			# skip orig_eax/error_code
+ 	CFI_ADJUST_CFA_OFFSET -4
+@@ -569,6 +571,7 @@ ENTRY(iret_exc)
+ 	.long irq_return,iret_exc
+ .previous
+ 
++#ifdef CONFIG_X86_ESPFIX32
+ 	CFI_RESTORE_STATE
+ ldt_ss:
+ #ifdef CONFIG_PARAVIRT
+@@ -614,6 +617,7 @@ ldt_ss:
+ 	lss (%esp), %esp		/* switch to espfix segment */
+ 	CFI_ADJUST_CFA_OFFSET -8
+ 	jmp restore_nocheck
++#endif
+ 	CFI_ENDPROC
+ ENDPROC(system_call)
+ 
+@@ -736,6 +740,7 @@ PTREGSCALL(vm86old)
+  * the high word of the segment base from the GDT and swiches to the
+  * normal stack and adjusts ESP with the matching offset.
+  */
++#ifdef CONFIG_X86_ESPFIX32
+ 	/* fixup the stack */
+ 	PER_CPU(gdt_page, %ebx)
+ 	mov GDT_ENTRY_ESPFIX_SS * 8 + 4(%ebx), %al /* bits 16..23 */
+@@ -748,8 +753,10 @@ PTREGSCALL(vm86old)
+ 	CFI_ADJUST_CFA_OFFSET 4
+ 	lss (%esp), %esp		/* switch to the normal stack segment */
+ 	CFI_ADJUST_CFA_OFFSET -8
++#endif
+ .endm
+ .macro UNWIND_ESPFIX_STACK
++#ifdef CONFIG_X86_ESPFIX32
+ 	movl %ss, %eax
+ 	/* see if on espfix stack */
+ 	cmpw $__ESPFIX_SS, %ax
+@@ -760,6 +767,7 @@ PTREGSCALL(vm86old)
+ 	/* switch to normal stack */
+ 	FIXUP_ESPFIX_STACK
+ 27:
++#endif
+ .endm
+ 
+ /*
+@@ -1323,6 +1331,7 @@ END(debug)
+  */
+ ENTRY(nmi)
+ 	RING0_INT_FRAME
++#ifdef CONFIG_X86_ESPFIX32
+ 	pushl %eax
+ 	CFI_ADJUST_CFA_OFFSET 4
+ 	movl %ss, %eax
+@@ -1330,6 +1339,7 @@ ENTRY(nmi)
+ 	popl %eax
+ 	CFI_ADJUST_CFA_OFFSET -4
+ 	je nmi_espfix_stack
++#endif
+ 	cmpl $ia32_sysenter_target,(%esp)
+ 	je nmi_stack_fixup
+ 	pushl %eax
+@@ -1372,6 +1382,7 @@ nmi_debug_stack_check:
+ 	FIX_STACK 24, nmi_stack_correct, 1
+ 	jmp nmi_stack_correct
+ 
++#ifdef CONFIG_X86_ESPFIX32
+ nmi_espfix_stack:
+ 	/* We have a RING0_INT_FRAME here.
+ 	 *
+@@ -1397,6 +1408,7 @@ nmi_espfix_stack:
+ 	lss 12+4(%esp), %esp		# back to espfix stack
+ 	CFI_ADJUST_CFA_OFFSET -24
+ 	jmp irq_return
++#endif
+ 	CFI_ENDPROC
+ END(nmi)
+ 
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index a69901e..cf9e7d2 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -863,8 +863,10 @@ irq_return:
+ 	 * Are we returning to a stack segment from the LDT?  Note: in
+ 	 * 64-bit mode SS:RSP on the exception stack is always valid.
+ 	 */
++#ifdef CONFIG_X86_ESPFIX64
+ 	testb $4,(SS-RIP)(%rsp)
+ 	jnz irq_return_ldt
++#endif
+ 
+ irq_return_iret:
+ 	INTERRUPT_RETURN
+@@ -882,6 +884,7 @@ ENTRY(native_iret)
+ 	.previous
+ #endif
+ 
++#ifdef CONFIG_X86_ESPFIX64
+ irq_return_ldt:
+ 	pushq_cfi %rax
+ 	pushq_cfi %rdi
+@@ -905,6 +908,7 @@ irq_return_ldt:
+ 	movq %rax,%rsp
+ 	popq_cfi %rax
+ 	jmp irq_return_iret
++#endif
+ 
+ 	.section .fixup,"ax"
+ bad_iret:
+@@ -980,6 +984,7 @@ END(common_interrupt)
+ 	 * modify the stack to make it look like we just entered
+ 	 * the #GP handler from user space, similar to bad_iret.
+ 	 */
++#ifdef CONFIG_X86_ESPFIX64
+ 	ALIGN
+ __do_double_fault:
+ 	XCPT_FRAME 1 RDI+8
+@@ -1005,6 +1010,9 @@ __do_double_fault:
+ 	retq
+ 	CFI_ENDPROC
+ END(__do_double_fault)
++#else
++# define __do_double_fault do_double_fault
++#endif
+ 
+ /*
+  * APIC interrupts.
+diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
+index ec6ef60..4e668bb 100644
+--- a/arch/x86/kernel/ldt.c
++++ b/arch/x86/kernel/ldt.c
+@@ -229,6 +229,12 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
+ 		}
+ 	}
+ 
++#ifndef CONFIG_X86_16BIT
++	if (!ldt_info.seg_32bit) {
++		error = -EINVAL;
++		goto out_unlock;
++	}
++#endif
+ 	fill_ldt(&ldt, &ldt_info);
+ 	if (oldmode)
+ 		ldt.avl = 0;
+-- 
+1.7.12.1
+

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Move-espfix-definitions-into-a-separate-h.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-Move-espfix-definitions-into-a-separate-h.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,78 @@
+From 5a07add54f978154b32bb883b98f8bb5addfac7f Mon Sep 17 00:00:00 2001
+From: "H. Peter Anvin" <hpa at linux.intel.com>
+Date: Thu, 1 May 2014 14:12:23 -0700
+Subject: x86, espfix: Move espfix definitions into a separate header file
+
+commit e1fe9ed8d2a4937510d0d60e20705035c2609aea upstream.
+
+Sparse warns that the percpu variables aren't declared before they are
+defined.  Rather than hacking around it, move espfix definitions into
+a proper header file.
+
+Reported-by: Fengguang Wu <fengguang.wu at intel.com>
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+(cherry picked from 3.2 commit 62358ee6bb9d3d9dd4761d39ac0e1ede9ba70b0e)
+
+Conflicts:
+	arch/x86/include/asm/setup.h
+
+Note: no DECLARE_PER_CPU_READ_MOSTLY, switch to DECLARE_PER_CPU instead
+---
+ arch/x86/include/asm/espfix.h | 16 ++++++++++++++++
+ arch/x86/include/asm/setup.h  |  2 ++
+ arch/x86/kernel/espfix_64.c   |  1 +
+ 3 files changed, 19 insertions(+)
+ create mode 100644 arch/x86/include/asm/espfix.h
+
+diff --git a/arch/x86/include/asm/espfix.h b/arch/x86/include/asm/espfix.h
+new file mode 100644
+index 0000000..e2fb446
+--- /dev/null
++++ b/arch/x86/include/asm/espfix.h
+@@ -0,0 +1,16 @@
++#ifdef _ASM_X86_ESPFIX_H
++#define _ASM_X86_ESPFIX_H
++
++#ifdef CONFIG_X86_64
++
++#include <asm/percpu.h>
++
++DECLARE_PER_CPU(unsigned long, espfix_stack);
++DECLARE_PER_CPU(unsigned long, espfix_waddr);
++
++extern void init_espfix_bsp(void);
++extern void init_espfix_ap(void);
++
++#endif /* CONFIG_X86_64 */
++
++#endif /* _ASM_X86_ESPFIX_H */
+diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
+index 18e496c..ac45d3b 100644
+--- a/arch/x86/include/asm/setup.h
++++ b/arch/x86/include/asm/setup.h
+@@ -57,6 +57,8 @@ static inline void x86_mrst_early_setup(void) { }
+ 
+ #ifndef _SETUP
+ 
++#include <asm/espfix.h>
++
+ /*
+  * This is set up by the setup-routine at boot-time
+  */
+diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
+index ae10040..24bd342 100644
+--- a/arch/x86/kernel/espfix_64.c
++++ b/arch/x86/kernel/espfix_64.c
+@@ -40,6 +40,7 @@
+ #include <asm/pgtable.h>
+ #include <asm/pgalloc.h>
+ #include <asm/setup.h>
++#include <asm/espfix.h>
+ 
+ /*
+  * Note: we only need 6*8 = 48 bytes for the espfix stack, but round
+-- 
+1.7.12.1
+

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-xen-Fix-allocation-of-pages-for-paravirt-.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-espfix-xen-Fix-allocation-of-pages-for-paravirt-.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,62 @@
+From 118f49702d196921fe1596a8213220c3470fce80 Mon Sep 17 00:00:00 2001
+From: Boris Ostrovsky <boris.ostrovsky at oracle.com>
+Date: Wed, 9 Jul 2014 13:18:18 -0400
+Subject: x86/espfix/xen: Fix allocation of pages for paravirt page tables
+
+commit 8762e5092828c4dc0f49da5a47a644c670df77f3 upstream.
+
+init_espfix_ap() is currently off by one level when informing the hypervisor
+that allocated pages will be used for ministacks' page tables.
+
+The most immediate effect of this on a PV guest is that if
+'stack_page = __get_free_page()' returns a non-zeroed-out page the hypervisor
+will refuse to use it for a page table (which it shouldn't be anyway). This will
+result in warnings by both Xen and Linux.
+
+More importantly, a subsequent write to that page (again, by a PV guest) is
+likely to result in a fatal page fault.
+
+Signed-off-by: Boris Ostrovsky <boris.ostrovsky at oracle.com>
+Link: http://lkml.kernel.org/r/1404926298-5565-1-git-send-email-boris.ostrovsky@oracle.com
+Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+(cherry picked from commit 060e7f67c88ebbcf8745505c7ccf44c53601f7de)
+---
+ arch/x86/kernel/espfix_64.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
+index 24bd342..8563154 100644
+--- a/arch/x86/kernel/espfix_64.c
++++ b/arch/x86/kernel/espfix_64.c
+@@ -175,7 +175,7 @@ void init_espfix_ap(void)
+ 	if (!pud_present(pud)) {
+ 		pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP);
+ 		pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask));
+-		paravirt_alloc_pud(&init_mm, __pa(pmd_p) >> PAGE_SHIFT);
++		paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT);
+ 		for (n = 0; n < ESPFIX_PUD_CLONES; n++)
+ 			set_pud(&pud_p[n], pud);
+ 	}
+@@ -185,7 +185,7 @@ void init_espfix_ap(void)
+ 	if (!pmd_present(pmd)) {
+ 		pte_p = (pte_t *)__get_free_page(PGALLOC_GFP);
+ 		pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask));
+-		paravirt_alloc_pmd(&init_mm, __pa(pte_p) >> PAGE_SHIFT);
++		paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT);
+ 		for (n = 0; n < ESPFIX_PMD_CLONES; n++)
+ 			set_pmd(&pmd_p[n], pmd);
+ 	}
+@@ -193,7 +193,6 @@ void init_espfix_ap(void)
+ 	pte_p = pte_offset_kernel(&pmd, addr);
+ 	stack_page = (void *)__get_free_page(GFP_KERNEL);
+ 	pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));
+-	paravirt_alloc_pte(&init_mm, __pa(stack_page) >> PAGE_SHIFT);
+ 	for (n = 0; n < ESPFIX_PTE_CLONES; n++)
+ 		set_pte(&pte_p[n*PTE_STRIDE], pte);
+ 
+-- 
+1.7.12.1
+
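
The off-by-one the commit message describes follows a simple rule, which
the sketch below models (illustrative, not kernel code): the
paravirt_alloc_*() call must name the level of the entries the freshly
allocated page will hold, one level below the entry it is installed into;
the ministack page itself holds data, not page-table entries, so its
announcement is deleted outright.

#include <stdio.h>

/* Levels of the x86-64 page-table hierarchy used by espfix. */
enum level { PTE, PMD, PUD };
static const char *name[] = { "pte", "pmd", "pud" };

/* A page installed into an entry of level 'parent' holds entries of
 * the level below it, and that is the level the hypervisor must be
 * told about. */
static enum level level_to_announce(enum level parent)
{
	return parent - 1;
}

int main(void)
{
	/* espfix installs a page of PMDs under a PUD entry and a page
	 * of PTEs under a PMD entry, hence the two substitutions in
	 * the hunks above. */
	printf("page under a pud entry -> paravirt_alloc_%s()\n",
	       name[level_to_announce(PUD)]);
	printf("page under a pmd entry -> paravirt_alloc_%s()\n",
	       name[level_to_announce(PMD)]);
	return 0;
}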

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-entry-xen-Do-not-invoke-espfix64-on-Xen.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-entry-xen-Do-not-invoke-espfix64-on-Xen.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,144 @@
+From 84ef62af57d0b20d5fef6d336305b8599807d191 Mon Sep 17 00:00:00 2001
+From: Andy Lutomirski <luto at amacapital.net>
+Date: Wed, 23 Jul 2014 08:34:11 -0700
+Subject: x86_64/entry/xen: Do not invoke espfix64 on Xen
+
+commit 7209a75d2009dbf7745e2fd354abf25c3deb3ca3 upstream.
+
+This moves the espfix64 logic into native_iret.  To make this work,
+it gets rid of the native patch for INTERRUPT_RETURN:
+INTERRUPT_RETURN on native kernels is now 'jmp native_iret'.
+
+This changes the 16-bit SS behavior on Xen from OOPSing to leaking
+some bits of the Xen hypervisor's RSP (I think).
+
+[ hpa: this is a nonzero cost on native, but probably not enough to
+  measure. Xen needs to fix this in their own code, probably doing
+  something equivalent to espfix64. ]
+
+Signed-off-by: Andy Lutomirski <luto at amacapital.net>
+Link: http://lkml.kernel.org/r/7b8f1d8ef6597cb16ae004a43c56980a7de3cf94.1406129132.git.luto@amacapital.net
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+(cherry picked from commit 8ba19cd8c351e16b6be4caca9338d19b0cb8eaa4)
+---
+ arch/x86/include/asm/irqflags.h     |  2 +-
+ arch/x86/kernel/entry_64.S          | 31 ++++++++++---------------------
+ arch/x86/kernel/paravirt_patch_64.c |  2 --
+ 3 files changed, 11 insertions(+), 24 deletions(-)
+
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index 9e2b952..58b0c5c 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -130,7 +130,7 @@ static inline unsigned long __raw_local_irq_save(void)
+ 
+ #define PARAVIRT_ADJUST_EXCEPTION_FRAME	/*  */
+ 
+-#define INTERRUPT_RETURN	iretq
++#define INTERRUPT_RETURN	jmp native_iret
+ #define USERGS_SYSRET64				\
+ 	swapgs;					\
+ 	sysretq;
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index cf9e7d2..4313d48 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -859,33 +859,27 @@ restore_args:
+ 	RESTORE_ARGS 0,8,0
+ 
+ irq_return:
++	INTERRUPT_RETURN
++
++ENTRY(native_iret)
+ 	/*
+ 	 * Are we returning to a stack segment from the LDT?  Note: in
+ 	 * 64-bit mode SS:RSP on the exception stack is always valid.
+ 	 */
+ #ifdef CONFIG_X86_ESPFIX64
+ 	testb $4,(SS-RIP)(%rsp)
+-	jnz irq_return_ldt
++	jnz native_irq_return_ldt
+ #endif
+ 
+-irq_return_iret:
+-	INTERRUPT_RETURN
+-
+-	.section __ex_table, "a"
+-	.quad irq_return_iret, bad_iret
+-	.previous
+-
+-#ifdef CONFIG_PARAVIRT
+-ENTRY(native_iret)
++native_irq_return_iret:
+ 	iretq
+ 
+ 	.section __ex_table,"a"
+-	.quad native_iret, bad_iret
++	.quad native_irq_return_iret, bad_iret
+ 	.previous
+-#endif
+ 
+ #ifdef CONFIG_X86_ESPFIX64
+-irq_return_ldt:
++native_irq_return_ldt:
+ 	pushq_cfi %rax
+ 	pushq_cfi %rdi
+ 	SWAPGS
+@@ -907,7 +901,7 @@ irq_return_ldt:
+ 	SWAPGS
+ 	movq %rax,%rsp
+ 	popq_cfi %rax
+-	jmp irq_return_iret
++	jmp native_irq_return_iret
+ #endif
+ 
+ 	.section .fixup,"ax"
+@@ -995,13 +989,8 @@ __do_double_fault:
+ 	cmpl $__KERNEL_CS,CS(%rdi)
+ 	jne do_double_fault
+ 	movq RIP(%rdi),%rax
+-	cmpq $irq_return_iret,%rax
+-#ifdef CONFIG_PARAVIRT
+-	je 1f
+-	cmpq $native_iret,%rax
+-#endif
++	cmpq $native_irq_return_iret,%rax
+ 	jne do_double_fault		/* This shouldn't happen... */
+-1:
+ 	movq PER_CPU_VAR(kernel_stack),%rax
+ 	subq $(6*8-KERNEL_STACK_OFFSET),%rax	/* Reset to original stack */
+ 	movq %rax,RSP(%rdi)
+@@ -1560,7 +1549,7 @@ error_sti:
+  */
+ error_kernelspace:
+ 	incl %ebx
+-	leaq irq_return_iret(%rip),%rcx
++	leaq native_irq_return_iret(%rip),%rcx
+ 	cmpq %rcx,RIP+8(%rsp)
+ 	je error_swapgs
+ 	movl %ecx,%eax	/* zero extend */
+diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c
+index 3f08f34..a1da673 100644
+--- a/arch/x86/kernel/paravirt_patch_64.c
++++ b/arch/x86/kernel/paravirt_patch_64.c
+@@ -6,7 +6,6 @@ DEF_NATIVE(pv_irq_ops, irq_disable, "cli");
+ DEF_NATIVE(pv_irq_ops, irq_enable, "sti");
+ DEF_NATIVE(pv_irq_ops, restore_fl, "pushq %rdi; popfq");
+ DEF_NATIVE(pv_irq_ops, save_fl, "pushfq; popq %rax");
+-DEF_NATIVE(pv_cpu_ops, iret, "iretq");
+ DEF_NATIVE(pv_mmu_ops, read_cr2, "movq %cr2, %rax");
+ DEF_NATIVE(pv_mmu_ops, read_cr3, "movq %cr3, %rax");
+ DEF_NATIVE(pv_mmu_ops, write_cr3, "movq %rdi, %cr3");
+@@ -50,7 +49,6 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
+ 		PATCH_SITE(pv_irq_ops, save_fl);
+ 		PATCH_SITE(pv_irq_ops, irq_enable);
+ 		PATCH_SITE(pv_irq_ops, irq_disable);
+-		PATCH_SITE(pv_cpu_ops, iret);
+ 		PATCH_SITE(pv_cpu_ops, irq_enable_sysexit);
+ 		PATCH_SITE(pv_cpu_ops, usergs_sysret32);
+ 		PATCH_SITE(pv_cpu_ops, usergs_sysret64);
+-- 
+1.7.12.1
+
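The testb $4,(SS-RIP)(%rsp) check in native_iret above keys off the
table-indicator bit of the saved SS selector: only a stack segment that
lives in the LDT can need the espfix fixup. A minimal sketch of that
selector decoding in C, with names of our own choosing rather than
anything from the kernel:

#include <stdint.h>

/*
 * Illustrative only.  An x86 segment selector is laid out as
 * bits 0-1 = RPL, bit 2 = table indicator (0 = GDT, 1 = LDT),
 * bits 3-15 = descriptor index; "testb $4" probes the TI bit.
 */
static inline int selector_references_ldt(uint16_t sel)
{
	return sel & 0x4;	/* TI bit set: descriptor is in the LDT */
}

static inline unsigned int selector_index(uint16_t sel)
{
	return sel >> 3;
}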

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-traps-Fix-the-espfix64-DF-fixup-and-rewrite-i.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-traps-Fix-the-espfix64-DF-fixup-and-rewrite-i.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,126 @@
+From a825b66514b38db86380e0c69044ac9a2f0d236b Mon Sep 17 00:00:00 2001
+From: Andy Lutomirski <luto at amacapital.net>
+Date: Sat, 22 Nov 2014 18:00:31 -0800
+Subject: x86_64, traps: Fix the espfix64 #DF fixup and rewrite it in C
+
+There's nothing special enough about the espfix64 double fault fixup to
+justify writing it in assembly.  Move it to C.
+
+This also fixes a bug: if the double fault came from an IST stack, the
+old asm code would return to a partially uninitialized stack frame.
+
+Fixes: 3891a04aafd668686239349ea58f3314ea2af86b
+Signed-off-by: Andy Lutomirski <luto at amacapital.net>
+Reviewed-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable at vger.kernel.org
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+(cherry picked from commit af726f21ed8af2cdaa4e93098dc211521218ae65)
+
+Conflicts:
+	arch/x86/kernel/entry_64.S
+	arch/x86/kernel/traps.c
+
+- Adapted the declaration of do_double_fault in entry_64.S.
+- No exception_enter() in 2.6.32; it appears to exist only for context
+  tracking.
+---
+ arch/x86/kernel/entry_64.S | 34 ++--------------------------------
+ arch/x86/kernel/traps.c    | 24 ++++++++++++++++++++++++
+ 2 files changed, 26 insertions(+), 32 deletions(-)
+
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index 8e2f14a..c862780 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -871,6 +871,7 @@ ENTRY(native_iret)
+ 	jnz native_irq_return_ldt
+ #endif
+ 
++.global native_irq_return_iret
+ native_irq_return_iret:
+ 	iretq
+ 
+@@ -972,37 +973,6 @@ ENTRY(retint_kernel)
+ 	CFI_ENDPROC
+ END(common_interrupt)
+ 
+-	/*
+-	 * If IRET takes a fault on the espfix stack, then we
+-	 * end up promoting it to a doublefault.  In that case,
+-	 * modify the stack to make it look like we just entered
+-	 * the #GP handler from user space, similar to bad_iret.
+-	 */
+-#ifdef CONFIG_X86_ESPFIX64
+-	ALIGN
+-__do_double_fault:
+-	XCPT_FRAME 1 RDI+8
+-	movq RSP(%rdi),%rax		/* Trap on the espfix stack? */
+-	sarq $PGDIR_SHIFT,%rax
+-	cmpl $ESPFIX_PGD_ENTRY,%eax
+-	jne do_double_fault		/* No, just deliver the fault */
+-	cmpl $__KERNEL_CS,CS(%rdi)
+-	jne do_double_fault
+-	movq RIP(%rdi),%rax
+-	cmpq $native_irq_return_iret,%rax
+-	jne do_double_fault		/* This shouldn't happen... */
+-	movq PER_CPU_VAR(kernel_stack),%rax
+-	subq $(6*8-KERNEL_STACK_OFFSET),%rax	/* Reset to original stack */
+-	movq %rax,RSP(%rdi)
+-	movq $0,(%rax)			/* Missing (lost) #GP error code */
+-	movq $general_protection,RIP(%rdi)
+-	retq
+-	CFI_ENDPROC
+-END(__do_double_fault)
+-#else
+-# define __do_double_fault do_double_fault
+-#endif
+-
+ /*
+  * APIC interrupts.
+  */
+@@ -1179,7 +1149,7 @@ zeroentry overflow do_overflow
+ zeroentry bounds do_bounds
+ zeroentry invalid_op do_invalid_op
+ zeroentry device_not_available do_device_not_available
+-paranoiderrorentry double_fault __do_double_fault
++paranoiderrorentry double_fault do_double_fault
+ zeroentry coprocessor_segment_overrun do_coprocessor_segment_overrun
+ errorentry invalid_TSS do_invalid_TSS
+ errorentry segment_not_present do_segment_not_present
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index c4cc05a..03563a4 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -230,6 +230,30 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
+ 	static const char str[] = "double fault";
+ 	struct task_struct *tsk = current;
+ 
++#ifdef CONFIG_X86_ESPFIX64
++	extern unsigned char native_irq_return_iret[];
++
++	/*
++	 * If IRET takes a non-IST fault on the espfix64 stack, then we
++	 * end up promoting it to a doublefault.  In that case, modify
++	 * the stack to make it look like we just entered the #GP
++	 * handler from user space, similar to bad_iret.
++	 */
++	if (((long)regs->sp >> PGDIR_SHIFT) == ESPFIX_PGD_ENTRY &&
++		regs->cs == __KERNEL_CS &&
++		regs->ip == (unsigned long)native_irq_return_iret)
++	{
++		struct pt_regs *normal_regs = task_pt_regs(current);
++
++		/* Fake a #GP(0) from userspace. */
++		memmove(&normal_regs->ip, (void *)regs->sp, 5*8);
++		normal_regs->orig_ax = 0;  /* Missing (lost) #GP error code */
++		regs->ip = (unsigned long)general_protection;
++		regs->sp = (unsigned long)&normal_regs->orig_ax;
++		return;
++	}
++#endif
++
+ 	/* Return not checked because double check cannot be ignored */
+ 	notify_die(DIE_TRAP, str, regs, error_code, 8, SIGSEGV);
+ 
+-- 
+1.7.12.1
+
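The memmove of 5*8 bytes in the new C fixup above is sized by the
hardware exception frame: on x86-64 the CPU pushes exactly five
quadwords when it delivers a fault with a stack switch, and those are
the words relocated onto task_pt_regs(). A sketch of that frame, with a
field layout matching the tail of pt_regs but names of our own
choosing:

/*
 * Illustrative only: the five quadwords pushed by the CPU, lowest
 * address first.  regs->sp in do_double_fault() points at this frame
 * on the espfix stack when the failed iretq is promoted to #DF.
 */
struct iret_frame {
	unsigned long ip;	/* RIP the iretq was returning to */
	unsigned long cs;	/* CS selector, zero-extended */
	unsigned long flags;	/* RFLAGS */
	unsigned long sp;	/* RSP */
	unsigned long ss;	/* SS selector, zero-extended */
};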

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-traps-Rework-bad_iret.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-traps-Rework-bad_iret.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,177 @@
+From 11c806e613207ccf633aab9b4beecf7f80ad8298 Mon Sep 17 00:00:00 2001
+From: Andy Lutomirski <luto at amacapital.net>
+Date: Sat, 22 Nov 2014 18:00:33 -0800
+Subject: x86_64, traps: Rework bad_iret
+
+It's possible for iretq to userspace to fail.  This can happen because
+of a bad CS, SS, or RIP.
+
+Historically, we've handled it by fixing up an exception from iretq to
+land at bad_iret, which pretends that the failed iret frame was really
+the hardware part of #GP(0) from userspace.  To make this work, there's
+an extra fixup to fudge the gs base into a usable state.
+
+This is suboptimal because it loses the original exception.  It's also
+buggy because there's no guarantee that we were on the kernel stack to
+begin with.  For example, if the failing iret happened on return from an
+NMI, then we'll end up executing general_protection on the NMI stack.
+This is bad for several reasons, the most immediate of which is that
+general_protection, as a non-paranoid idtentry, will try to deliver
+signals and/or schedule from the wrong stack.
+
+This patch throws out bad_iret entirely.  As a replacement, it augments
+the existing swapgs fudge into a full-blown iret fixup, mostly written
+in C.  It should be clearer and more correct.
+
+Signed-off-by: Andy Lutomirski <luto at amacapital.net>
+Reviewed-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable at vger.kernel.org
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+(cherry picked from commit b645af2d5905c4e32399005b867987919cbfc3ae)
+
+Conflicts:
+	arch/x86/kernel/entry_64.S
+	arch/x86/kernel/traps.c
+
+Notes:
+- _ASM_EXTABLE was open-coded.
+- removed unneeded CFI_ENDPROC
+- removed __visible (introduced in 2.6.37-rc1, not needed here)
+
+---
+ arch/x86/kernel/entry_64.S | 48 ++++++++++++++++++----------------------------
+ arch/x86/kernel/traps.c    | 29 ++++++++++++++++++++++++++++
+ 2 files changed, 48 insertions(+), 29 deletions(-)
+
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index c862780..d9bcee0 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -873,12 +873,14 @@ ENTRY(native_iret)
+ 
+ .global native_irq_return_iret
+ native_irq_return_iret:
++	/*
++	 * This may fault.  Non-paranoid faults on return to userspace are
++	 * handled by fixup_bad_iret.  These include #SS, #GP, and #NP.
++	 * Double-faults due to espfix64 are handled in do_double_fault.
++	 * Other faults here are fatal.
++	 */
+ 	iretq
+ 
+-	.section __ex_table,"a"
+-	.quad native_irq_return_iret, bad_iret
+-	.previous
+-
+ #ifdef CONFIG_X86_ESPFIX64
+ native_irq_return_ldt:
+ 	pushq_cfi %rax
+@@ -905,25 +907,6 @@ native_irq_return_ldt:
+ 	jmp native_irq_return_iret
+ #endif
+ 
+-	.section .fixup,"ax"
+-bad_iret:
+-	/*
+-	 * The iret traps when the %cs or %ss being restored is bogus.
+-	 * We've lost the original trap vector and error code.
+-	 * #GPF is the most likely one to get for an invalid selector.
+-	 * So pretend we completed the iret and took the #GPF in user mode.
+-	 *
+-	 * We are now running with the kernel GS after exception recovery.
+-	 * But error_entry expects us to have user GS to match the user %cs,
+-	 * so swap back.
+-	 */
+-	pushq $0
+-
+-	SWAPGS
+-	jmp general_protection
+-
+-	.previous
+-
+ 	/* edi: workmask, edx: work */
+ retint_careful:
+ 	CFI_RESTORE_STATE
+@@ -1512,16 +1495,15 @@ error_sti:
+ 
+ /*
+  * There are two places in the kernel that can potentially fault with
+- * usergs. Handle them here. The exception handlers after iret run with
+- * kernel gs again, so don't set the user space flag. B stepping K8s
+- * sometimes report an truncated RIP for IRET exceptions returning to
+- * compat mode. Check for these here too.
++ * usergs. Handle them here.  B stepping K8s sometimes report a
++ * truncated RIP for IRET exceptions returning to compat mode. Check
++ * for these here too.
+  */
+ error_kernelspace:
+ 	incl %ebx
+ 	leaq native_irq_return_iret(%rip),%rcx
+ 	cmpq %rcx,RIP+8(%rsp)
+-	je error_swapgs
++	je error_bad_iret
+ 	movl %ecx,%eax	/* zero extend */
+ 	cmpq %rax,RIP+8(%rsp)
+ 	je bstep_iret
+@@ -1532,7 +1514,15 @@ error_kernelspace:
+ bstep_iret:
+ 	/* Fix truncated RIP */
+ 	movq %rcx,RIP+8(%rsp)
+-	je error_swapgs
++	/* fall through */
++
++error_bad_iret:
++	SWAPGS
++	mov %rsp,%rdi
++	call fixup_bad_iret
++	mov %rax,%rsp
++	decl %ebx	/* Return to usergs */
++	jmp error_sti
+ END(error_entry)
+ 
+ 
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 03563a4..8a39a6c 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -512,6 +512,35 @@ asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
+ 		*regs = *eregs;
+ 	return regs;
+ }
++
++struct bad_iret_stack {
++	void *error_entry_ret;
++	struct pt_regs regs;
++};
++
++asmlinkage
++struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
++{
++	/*
++	 * This is called from entry_64.S early in handling a fault
++	 * caused by a bad iret to user mode.  To handle the fault
++	 * correctly, we want to move our stack frame to task_pt_regs
++	 * and we want to pretend that the exception came from the
++	 * iret target.
++	 */
++	struct bad_iret_stack *new_stack =
++		container_of(task_pt_regs(current),
++			     struct bad_iret_stack, regs);
++
++	/* Copy the IRET target to the new stack. */
++	memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8);
++
++	/* Copy the remainder of the stack from the current stack. */
++	memmove(new_stack, s, offsetof(struct bad_iret_stack, regs.ip));
++
++	BUG_ON(!user_mode_vm(&new_stack->regs));
++	return new_stack;
++}
+ #endif
+ 
+ /*
+-- 
+1.7.12.1
+
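After this change error_kernelspace still performs two distinct RIP
comparisons: an exact match against native_irq_return_iret (a failed
iretq, now routed through fixup_bad_iret) and a match against its low
32 bits only (the B-stepping K8 truncated-RIP erratum). Roughly, in C
and with names of our own choosing:

#include <stdint.h>

enum iret_fault { IRET_FAULT_NONE, IRET_FAULT_BAD_IRET, IRET_FAULT_BSTEP };

/*
 * Illustrative only, not kernel code.  Mirrors the checks done with
 * "leaq native_irq_return_iret(%rip),%rcx" and the zero-extending
 * "movl %ecx,%eax" in error_kernelspace.
 */
static enum iret_fault classify_iret_fault(uint64_t saved_rip,
					   uint64_t iret_addr)
{
	if (saved_rip == iret_addr)
		return IRET_FAULT_BAD_IRET;	/* handled by fixup_bad_iret() */
	if (saved_rip == (uint32_t)iret_addr)
		return IRET_FAULT_BSTEP;	/* repair truncated RIP, retry */
	return IRET_FAULT_NONE;
}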

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-traps-Stop-using-IST-for-SS.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86_64-traps-Stop-using-IST-for-SS.patch	Sun Dec  7 17:52:05 2014	(r22121)
@@ -0,0 +1,139 @@
+From 38b44c90448f7c8a5060bc1dd2f2e8e25e05d847 Mon Sep 17 00:00:00 2001
+From: Andy Lutomirski <luto at amacapital.net>
+Date: Sat, 22 Nov 2014 18:00:32 -0800
+Subject: x86_64, traps: Stop using IST for #SS
+
+On a 32-bit kernel, this has no effect, since there are no IST stacks.
+
+On a 64-bit kernel, #SS can only happen in user code, on a failed iret
+to user space, a canonical violation on access via RSP or RBP, or a
+genuine stack segment violation in 32-bit kernel code.  The first two
+cases don't need IST, and the latter two cases are unlikely fatal bugs,
+and promoting them to double faults would be fine.
+
+This fixes a bug in which the espfix64 code mishandles a stack segment
+violation.
+
+This saves 4k of memory per CPU and a tiny bit of code.
+
+Signed-off-by: Andy Lutomirski <luto at amacapital.net>
+Reviewed-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable at vger.kernel.org
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+(cherry picked from commit 6f442be2fb22be02cafa606f1769fa1e6f894441)
+
+Conflicts:
+	arch/x86/include/asm/traps.h
+	arch/x86/kernel/dumpstack_64.c
+	arch/x86/kernel/entry_64.S
+	arch/x86/kernel/traps.c
+
+Note: no CONFIG_TRACING on 2.6.32.
+Fixes CVE-2014-9090
+---
+ arch/x86/include/asm/page_32_types.h |  1 -
+ arch/x86/include/asm/page_64_types.h | 11 +++++------
+ arch/x86/kernel/dumpstack_64.c       |  1 -
+ arch/x86/kernel/entry_64.S           |  2 +-
+ arch/x86/kernel/traps.c              | 14 +-------------
+ 5 files changed, 7 insertions(+), 22 deletions(-)
+
+diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
+index 6f1b733..775c92f 100644
+--- a/arch/x86/include/asm/page_32_types.h
++++ b/arch/x86/include/asm/page_32_types.h
+@@ -22,7 +22,6 @@
+ #endif
+ #define THREAD_SIZE 	(PAGE_SIZE << THREAD_ORDER)
+ 
+-#define STACKFAULT_STACK 0
+ #define DOUBLEFAULT_STACK 1
+ #define NMI_STACK 0
+ #define DEBUG_STACK 0
+diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
+index 7639dbf..a9e9937 100644
+--- a/arch/x86/include/asm/page_64_types.h
++++ b/arch/x86/include/asm/page_64_types.h
+@@ -14,12 +14,11 @@
+ #define IRQ_STACK_ORDER 2
+ #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
+ 
+-#define STACKFAULT_STACK 1
+-#define DOUBLEFAULT_STACK 2
+-#define NMI_STACK 3
+-#define DEBUG_STACK 4
+-#define MCE_STACK 5
+-#define N_EXCEPTION_STACKS 5  /* hw limit: 7 */
++#define DOUBLEFAULT_STACK 1
++#define NMI_STACK 2
++#define DEBUG_STACK 3
++#define MCE_STACK 4
++#define N_EXCEPTION_STACKS 4  /* hw limit: 7 */
+ 
+ #define PUD_PAGE_SIZE		(_AC(1, UL) << PUD_SHIFT)
+ #define PUD_PAGE_MASK		(~(PUD_PAGE_SIZE-1))
+diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
+index a071e6b..5dc0882 100644
+--- a/arch/x86/kernel/dumpstack_64.c
++++ b/arch/x86/kernel/dumpstack_64.c
+@@ -23,7 +23,6 @@ static char x86_stack_ids[][8] = {
+ 		[DEBUG_STACK - 1] = "#DB",
+ 		[NMI_STACK - 1] = "NMI",
+ 		[DOUBLEFAULT_STACK - 1] = "#DF",
+-		[STACKFAULT_STACK - 1] = "#SS",
+ 		[MCE_STACK - 1] = "#MC",
+ #if DEBUG_STKSZ > EXCEPTION_STKSZ
+ 		[N_EXCEPTION_STACKS ...
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index 4313d48..8e2f14a 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -1434,7 +1434,7 @@ END(xen_failsafe_callback)
+ 
+ paranoidzeroentry_ist debug do_debug DEBUG_STACK
+ paranoidzeroentry_ist int3 do_int3 DEBUG_STACK
+-paranoiderrorentry stack_segment do_stack_segment
++errorentry stack_segment do_stack_segment
+ #ifdef CONFIG_XEN
+ zeroentry xen_debug do_debug
+ zeroentry xen_int3 do_int3
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 7e37dce..c4cc05a 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -220,23 +220,11 @@ DO_ERROR_INFO(6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->ip)
+ DO_ERROR(9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun)
+ DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS)
+ DO_ERROR(11, SIGBUS, "segment not present", segment_not_present)
+-#ifdef CONFIG_X86_32
+ DO_ERROR(12, SIGBUS, "stack segment", stack_segment)
+-#endif
+ DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0)
+ 
+ #ifdef CONFIG_X86_64
+ /* Runs on IST stack */
+-dotraplinkage void do_stack_segment(struct pt_regs *regs, long error_code)
+-{
+-	if (notify_die(DIE_TRAP, "stack segment", regs, error_code,
+-			12, SIGBUS) == NOTIFY_STOP)
+-		return;
+-	preempt_conditional_sti(regs);
+-	do_trap(12, SIGBUS, "stack segment", regs, error_code, NULL);
+-	preempt_conditional_cli(regs);
+-}
+-
+ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
+ {
+ 	static const char str[] = "double fault";
+@@ -927,7 +915,7 @@ void __init trap_init(void)
+ 	set_intr_gate(9, &coprocessor_segment_overrun);
+ 	set_intr_gate(10, &invalid_TSS);
+ 	set_intr_gate(11, &segment_not_present);
+-	set_intr_gate_ist(12, &stack_segment, STACKFAULT_STACK);
++	set_intr_gate(12, &stack_segment);
+ 	set_intr_gate(13, &general_protection);
+ 	set_intr_gate(14, &page_fault);
+ 	set_intr_gate(15, &spurious_interrupt_bug);
+-- 
+1.7.12.1
+
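The move from paranoiderrorentry/set_intr_gate_ist to
errorentry/set_intr_gate matters because a nonzero IST index in an IDT
gate forces an unconditional stack switch, even for faults raised in
kernel mode; that unconditional switch is what let the espfix64 #SS
case run on the wrong stack. A sketch of the 64-bit gate layout
(illustrative; the kernel's own definition is gate_struct64 in
desc_defs.h):

#include <stdint.h>

/*
 * Illustrative only: a 64-bit IDT gate descriptor per the Intel SDM.
 * set_intr_gate_ist() stores a nonzero "ist", so the CPU loads RSP
 * from tss.ist[ist - 1] on every delivery of the vector, regardless
 * of the privilege level it was raised from; set_intr_gate() leaves
 * it zero, so kernel-mode faults keep the current stack.
 */
struct idt_gate64 {
	uint16_t offset_low;
	uint16_t segment;	/* kernel code segment selector */
	uint16_t ist   : 3,	/* IST index; 0 = no forced stack switch */
		 zero0 : 5,
		 type  : 5,	/* 0xE = 64-bit interrupt gate */
		 dpl   : 2,
		 p     : 1;
	uint16_t offset_middle;
	uint32_t offset_high;
	uint32_t reserved;
} __attribute__((packed));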

Modified: dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze9
==============================================================================
--- dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze9	Sun Dec  7 03:58:14 2014	(r22120)
+++ dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze9	Sun Dec  7 17:52:05 2014	(r22121)
@@ -161,3 +161,19 @@
 + bugfix/all/net-sendmsg-Really-fix-NULL-pointer-dereference.patch
 + bugfix/all/sctp-Fix-double-free-introduced-by-bad-backport-in-2.patch
 + bugfix/all/md-raid6-Fix-misapplied-backport-in-2.6.32.64.patch
+
+# Fixes for kernel entry/exit security flaws (mostly x86-64)
++ bugfix/x86/x86-64-bit-Move-K8-B-step-iret-fixup-to-fault-entry-.patch
++ bugfix/x86/x86-64-Adjust-frame-type-at-paranoid_exit.patch
++ bugfix/x86/x86-64-modify_ldt-Ban-16-bit-segments-on-64-bit-kern.patch
++ bugfix/x86/x86-32-espfix-Remove-filter-for-espfix32-due-to-race.patch
++ bugfix/x86/x86-64-espfix-Don-t-leak-bits-31-16-of-esp-returning.patch
++ bugfix/x86/x86-espfix-Move-espfix-definitions-into-a-separate-h.patch
++ bugfix/x86/x86-espfix-Fix-broken-header-guard.patch
++ bugfix/x86/x86-espfix-Make-espfix64-a-Kconfig-option-fix-UML.patch
++ bugfix/x86/x86-espfix-Make-it-possible-to-disable-16-bit-suppor.patch
++ bugfix/x86/x86_64-entry-xen-Do-not-invoke-espfix64-on-Xen.patch
++ bugfix/x86/x86-espfix-xen-Fix-allocation-of-pages-for-paravirt-.patch
++ bugfix/x86/x86_64-traps-Stop-using-IST-for-SS.patch
++ bugfix/x86/x86_64-traps-Fix-the-espfix64-DF-fixup-and-rewrite-i.patch
++ bugfix/x86/x86_64-traps-Rework-bad_iret.patch


