[kernel] r22308 - in dists/squeeze-security/linux-2.6/debian: . config patches/bugfix/x86 patches/features/all/openvz patches/series

Ben Hutchings benh at moszumanska.debian.org
Fri Jan 30 04:50:25 UTC 2015


Author: benh
Date: Fri Jan 30 04:50:25 2015
New Revision: 22308

Log:
[x86] Backport fixes to FPU/SSE state save and restore from Linux 3.3

Added:
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0001-x86-fpu-move-most-of-__save_init_fpu-into-fpu_save_i.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0002-x86-64-fpu-disable-preemption-when-using-ts_usedfpu.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0003-x86-32-fpu-rewrite-fpu_save_init.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0004-x86-fpu-merge-fpu_save_init.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0005-x86-32-fpu-fix-fpu-exception-handling-on-non-sse-sys.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0006-i387-math_state_restore-isn-t-called-from-asm.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0007-i387-make-irq_fpu_usable-tests-more-robust.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0008-i387-fix-sense-of-sanity-check.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0009-i387-fix-x86-64-preemption-unsafe-user-stack-save-re.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0010-i387-move-ts_usedfpu-clearing-out-of-__save_init_fpu.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0011-i387-don-t-ever-touch-ts_usedfpu-directly-use-helper.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0012-i387-do-not-preload-fpu-state-at-task-switch-time.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0013-i387-move-amd-k7-k8-fpu-fxsave-fxrstor-workaround-fr.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0014-i387-move-ts_usedfpu-flag-from-thread_info-to-task_s.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0015-i387-re-introduce-fpu-state-preloading-at-context-sw.patch
   dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-fpu-avoid-abi-change-for-addition-of-has_fpu-fla.patch
Modified:
   dists/squeeze-security/linux-2.6/debian/changelog
   dists/squeeze-security/linux-2.6/debian/config/defines
   dists/squeeze-security/linux-2.6/debian/patches/features/all/openvz/openvz.patch
   dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze11

Modified: dists/squeeze-security/linux-2.6/debian/changelog
==============================================================================
--- dists/squeeze-security/linux-2.6/debian/changelog	Fri Jan 30 01:17:17 2015	(r22307)
+++ dists/squeeze-security/linux-2.6/debian/changelog	Fri Jan 30 04:50:25 2015	(r22308)
@@ -13,6 +13,23 @@
     (CVE-2014-7822)
   * net: sctp: fix slab corruption from use after free on INIT collisions
     (CVE-2015-1421)
+  * [x86] Backport fixes to FPU/SSE state save and restore from Linux 3.3:
+    - fpu: Move most of __save_init_fpu() into fpu_save_init()
+    - [amd64] fpu: Disable preemption when using TS_USEDFPU
+    - [i386] fpu: Rewrite fpu_save_init()
+    - fpu: Merge fpu_save_init()
+    - [i386] fpu: Fix FPU exception handling on non-SSE systems
+    - i387: math_state_restore() isn't called from asm
+    - i387: make irq_fpu_usable() tests more robust
+    - i387: fix sense of sanity check
+    - i387: fix x86-64 preemption-unsafe user stack save/restore
+    - i387: move TS_USEDFPU clearing out of __save_init_fpu and into callers
+    - i387: don't ever touch TS_USEDFPU directly, use helper functions
+    - i387: do not preload FPU state at task switch time
+    - i387: move AMD K7/K8 fpu fxsave/fxrstor workaround from save to restore
+    - i387: move TS_USEDFPU flag from thread_info to task_struct
+    - i387: re-introduce FPU state preloading at context switch time
+  * Ignore ABI change for math_state_restore(), not used out-of-tree
 
  -- Ben Hutchings <ben at decadent.org.uk>  Wed, 28 Jan 2015 22:33:05 +0000
 
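All of the fixes below revolve around the kernel's lazy FPU switching: CR0.TS is set whenever the current task does not own the FPU, the first FP instruction then traps with #NM (device-not-available), and math_state_restore() reloads that task's saved state on demand. The following is a minimal userspace model of that protocol, with the hardware operations stubbed out; it illustrates the scheme being fixed, not the squeeze source itself.

/* build: cc -o lazyfpu lazyfpu.c && ./lazyfpu */
#include <stdio.h>

#define TS_USEDFPU 0x0001         /* "this thread owns the FPU"   */

static unsigned cr0_ts = 1;       /* model of the CR0.TS trap bit */
static unsigned thread_status;    /* model of thread_info->status */

static void stts(void) { cr0_ts = 1; }  /* stub: set CR0.TS   */
static void clts(void) { cr0_ts = 0; }  /* stub: clear CR0.TS */

/* Context switch away from the current thread: save its live FP
 * state (stubbed out here), drop ownership, re-arm the #NM trap. */
static void unlazy_fpu(void)
{
        if (thread_status & TS_USEDFPU) {
                /* fxsave/xsave of the live registers goes here */
                thread_status &= ~TS_USEDFPU;
                stts();
        }
}

/* #NM handler: the thread touched the FPU while CR0.TS was set. */
static void math_state_restore(void)
{
        clts();                   /* allow FP ops (or we recurse) */
        /* fxrstor/xrstor of the saved image goes here */
        thread_status |= TS_USEDFPU;
}

int main(void)
{
        math_state_restore();     /* first FP use after a switch  */
        printf("owns=%u ts=%u\n", thread_status & TS_USEDFPU, cr0_ts);
        unlazy_fpu();             /* scheduled away again         */
        printf("owns=%u ts=%u\n", thread_status & TS_USEDFPU, cr0_ts);
        return 0;
}

The invariant the 15 patches restore and enforce is that TS_USEDFPU and CR0.TS only ever change together, with preemption disabled.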

Modified: dists/squeeze-security/linux-2.6/debian/config/defines
==============================================================================
--- dists/squeeze-security/linux-2.6/debian/config/defines	Fri Jan 30 01:17:17 2015	(r22307)
+++ dists/squeeze-security/linux-2.6/debian/config/defines	Fri Jan 30 04:50:25 2015	(r22308)
@@ -14,6 +14,7 @@
  ip_build_and_send_pkt
  tcp_cong_avoid_ai
  tcp_slow_start
+ math_state_restore
 
 [base]
 arches:

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0001-x86-fpu-move-most-of-__save_init_fpu-into-fpu_save_i.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0001-x86-fpu-move-most-of-__save_init_fpu-into-fpu_save_i.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,67 @@
+From 9f3a8c830bd06bbcd9e0ac51d9e1c25f73299742 Mon Sep 17 00:00:00 2001
+From: Ben Hutchings <ben at decadent.org.uk>
+Date: Thu, 29 Jan 2015 22:48:16 +0000
+Subject: [PATCH 01/15] x86, fpu: Move most of __save_init_fpu() into
+ fpu_save_init()
+
+Move everything except the clearing of TS_USEDFPU into a
+separate function.
+
+This was done upstream when struct fpu was introduced in commit
+86603283326c ("x86: Introduce 'struct fpu' and related API").  It is
+only needed here to ease cherry-picking of later fixes.
+
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 13 +++++++++----
+ 1 file changed, 9 insertions(+), 4 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index cb42fad..d5690c2 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -152,7 +152,7 @@ static inline void fxsave(struct task_struct *tsk)
+ #endif
+ }
+ 
+-static inline void __save_init_fpu(struct task_struct *tsk)
++static inline void fpu_save_init(struct task_struct *tsk)
+ {
+ 	if (task_thread_info(tsk)->status & TS_XSAVE)
+ 		xsave(tsk);
+@@ -160,7 +160,6 @@ static inline void __save_init_fpu(struct task_struct *tsk)
+ 		fxsave(tsk);
+ 
+ 	clear_fpu_state(tsk);
+-	task_thread_info(tsk)->status &= ~TS_USEDFPU;
+ }
+ 
+ #else  /* CONFIG_X86_32 */
+@@ -206,7 +205,7 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx)
+ /*
+  * These must be called with preempt disabled
+  */
+-static inline void __save_init_fpu(struct task_struct *tsk)
++static inline void fpu_save_init(struct task_struct *tsk)
+ {
+ 	if (task_thread_info(tsk)->status & TS_XSAVE) {
+ 		struct xsave_struct *xstate = &tsk->thread.xstate->xsave;
+@@ -250,11 +249,17 @@ clear_state:
+ 			: : [addr] "m" (safe_address));
+ 	}
+ end:
+-	task_thread_info(tsk)->status &= ~TS_USEDFPU;
++	;
+ }
+ 
+ #endif	/* CONFIG_X86_64 */
+ 
++static inline void __save_init_fpu(struct task_struct *tsk)
++{
++	fpu_save_init(tsk);
++	task_thread_info(tsk)->status &= ~TS_USEDFPU;
++}
++
+ static inline int restore_fpu_checking(struct task_struct *tsk)
+ {
+ 	if (task_thread_info(tsk)->status & TS_XSAVE)

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0002-x86-64-fpu-disable-preemption-when-using-ts_usedfpu.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0002-x86-64-fpu-disable-preemption-when-using-ts_usedfpu.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,88 @@
+From 10eaf1b81487a78ba61f84d0b95b3b1b2cd00726 Mon Sep 17 00:00:00 2001
+From: Brian Gerst <brgerst at gmail.com>
+Date: Fri, 3 Sep 2010 21:17:12 -0400
+Subject: [PATCH 02/15] x86-64, fpu: Disable preemption when using TS_USEDFPU
+
+commit a4d4fbc7735bba6654b20f859135f9d3f8fe7f76 upstream.
+
+Consolidates code and fixes the below race for 64-bit.
+
+commit 9fa2f37bfeb798728241cc4a19578ce6e4258f25
+Author: torvalds <torvalds>
+Date:   Tue Sep 2 07:37:25 2003 +0000
+
+    Be a lot more careful about TS_USEDFPU and preemption
+
+    We had some races where we tested (or set) TS_USEDFPU together
+    with sequences that depended on the setting (like clearing or
+    setting the TS flag in %cr0) and we could be preempted in between,
+    which screws up the FPU state, since preemption will itself change
+    USEDFPU and the TS flag.
+
+    This makes it a lot more explicit: the "internal" low-level FPU
+    functions ("__xxxx_fpu()") all require preemption to be disabled,
+    and the exported "real" functions will make sure that is the case.
+
+    One case - in __switch_to() - was switched to the non-preempt-safe
+    internal version, since the scheduler itself has already disabled
+    preemption.
+
+    BKrev: 3f5448b5WRiQuyzAlbajs3qoQjSobw
+
+Signed-off-by: Brian Gerst <brgerst at gmail.com>
+Acked-by: Pekka Enberg <penberg at kernel.org>
+Cc: Suresh Siddha <suresh.b.siddha at intel.com>
+LKML-Reference: <1283563039-3466-6-git-send-email-brgerst at gmail.com>
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h  | 15 ---------------
+ arch/x86/kernel/process_64.c |  2 +-
+ 2 files changed, 1 insertion(+), 16 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index d5690c2..57f2494 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -347,19 +347,6 @@ static inline void irq_ts_restore(int TS_state)
+ 		stts();
+ }
+ 
+-#ifdef CONFIG_X86_64
+-
+-static inline void save_init_fpu(struct task_struct *tsk)
+-{
+-	__save_init_fpu(tsk);
+-	stts();
+-}
+-
+-#define unlazy_fpu	__unlazy_fpu
+-#define clear_fpu	__clear_fpu
+-
+-#else  /* CONFIG_X86_32 */
+-
+ /*
+  * These disable preemption on their own and are safe
+  */
+@@ -385,8 +372,6 @@ static inline void clear_fpu(struct task_struct *tsk)
+ 	preempt_enable();
+ }
+ 
+-#endif	/* CONFIG_X86_64 */
+-
+ /*
+  * i387 state interaction
+  */
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 39493bc..84e87dd 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -423,7 +423,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	load_TLS(next, cpu);
+ 
+ 	/* Must be after DS reload */
+-	unlazy_fpu(prev_p);
++	__unlazy_fpu(prev_p);
+ 
+ 	/* Make sure cpu is ready for new context */
+ 	if (preload_fpu)

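To spell out the race the quoted message describes: on a preemptible kernel, anything that tests or clears TS_USEDFPU and then touches CR0.TS as two separate steps can be preempted in between, and __switch_to() will itself save/restore FPU state and rewrite both the flag and TS. A sketch of the hazard versus the now-shared safe wrapper (declarations stand in for the kernel's own):

struct task_struct;
void __save_init_fpu(struct task_struct *tsk); /* save, clear TS_USEDFPU */
void stts(void);                               /* set CR0.TS             */
void preempt_disable(void);
void preempt_enable(void);

/* UNSAFE: a preemption point sits between the flag and CR0.TS. */
void save_init_fpu_racy(struct task_struct *tsk)
{
        __save_init_fpu(tsk);
        /* <-- preempted here: the scheduler saves/restores FPU state
         *     and rewrites TS_USEDFPU and CR0.TS underneath us ...  */
        stts();         /* ... so this sets TS for the wrong owner   */
}

/* What the surviving (formerly 32-bit-only) wrapper does instead: */
void save_init_fpu(struct task_struct *tsk)
{
        preempt_disable();
        __save_init_fpu(tsk);
        stts();
        preempt_enable();
}
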
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0003-x86-32-fpu-rewrite-fpu_save_init.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0003-x86-32-fpu-rewrite-fpu_save_init.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,109 @@
+From a5a0636f44b7616a5f3507ff7c81d1c4577b1ecb Mon Sep 17 00:00:00 2001
+From: Brian Gerst <brgerst at gmail.com>
+Date: Fri, 3 Sep 2010 21:17:18 -0400
+Subject: [PATCH 03/15] x86-32, fpu: Rewrite fpu_save_init()
+
+commit 58a992b9cbaf449aeebd3575c3695a9eb5d95b5e upstream.
+
+Rewrite fpu_save_init() to prepare for merging with 64-bit.
+
+Signed-off-by: Brian Gerst <brgerst at gmail.com>
+Acked-by: Pekka Enberg <penberg at kernel.org>
+Cc: Suresh Siddha <suresh.b.siddha at intel.com>
+LKML-Reference: <1283563039-3466-12-git-send-email-brgerst at gmail.com>
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+[bwh: Backported to 2.6.32:
+ - We don't use struct fpu
+ - Use the function name fxsave(), matching x86_64
+ - Adjust context]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 47 +++++++++++++++++++++------------------------
+ 1 file changed, 22 insertions(+), 25 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 57f2494..6e4bfa9 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -46,6 +46,11 @@ extern int restore_i387_xstate_ia32(void __user *buf);
+ 
+ #define X87_FSW_ES (1 << 7)	/* Exception Summary */
+ 
++static __always_inline __pure bool use_fxsr(void)
++{
++        return boot_cpu_has(X86_FEATURE_FXSR);
++}
++
+ #ifdef CONFIG_X86_64
+ 
+ /* Ignore delayed exceptions from user space */
+@@ -193,6 +198,12 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx)
+ 	return 0;
+ }
+ 
++static inline void fxsave(struct task_struct *tsk)
++{
++	asm volatile("fxsave %[fx]"
++		     : [fx] "=m" (tsk->thread.xstate->fxsave));
++}
++
+ /* We need a safe address that is cheap to find and that is already
+    in L1 during context switch. The best choices are unfortunately
+    different for UP and SMP */
+@@ -208,36 +219,24 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx)
+ static inline void fpu_save_init(struct task_struct *tsk)
+ {
+ 	if (task_thread_info(tsk)->status & TS_XSAVE) {
+-		struct xsave_struct *xstate = &tsk->thread.xstate->xsave;
+-		struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave;
+-
+ 		xsave(tsk);
+ 
+ 		/*
+ 		 * xsave header may indicate the init state of the FP.
+ 		 */
+-		if (!(xstate->xsave_hdr.xstate_bv & XSTATE_FP))
+-			goto end;
+-
+-		if (unlikely(fx->swd & X87_FSW_ES))
+-			asm volatile("fnclex");
+-
+-		/*
+-		 * we can do a simple return here or be paranoid :)
+-		 */
+-		goto clear_state;
++		if (!(tsk->thread.xstate.xsave.xsave_hdr.xstate_bv & XSTATE_FP))
++			return;
++	} else if (use_fxsr()) {
++		fxsave(tsk);
++	} else {
++		asm volatile("fsave %[fx]; fwait"
++			     : [fx] "=m" (tsk->thread.xstate->fsave));
++		return;
+ 	}
+ 
+-	/* Use more nops than strictly needed in case the compiler
+-	   varies code */
+-	alternative_input(
+-		"fnsave %[fx] ;fwait;" GENERIC_NOP8 GENERIC_NOP4,
+-		"fxsave %[fx]\n"
+-		"bt $7,%[fsw] ; jnc 1f ; fnclex\n1:",
+-		X86_FEATURE_FXSR,
+-		[fx] "m" (tsk->thread.xstate->fxsave),
+-		[fsw] "m" (tsk->thread.xstate->fxsave.swd) : "memory");
+-clear_state:
++	if (unlikely(tsk->thread.xstate.fxsave->swd & X87_FSW_ES))
++		asm volatile("fnclex");
++
+ 	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
+ 	   is pending.  Clear the x87 state here by setting it to fixed
+ 	   values. safe_address is a random variable that should be in L1 */
+@@ -248,8 +247,6 @@ clear_state:
+ 			"fildl %[addr]"        /* set F?P to defined value */
+ 			: : [addr] "m" (safe_address));
+ 	}
+-end:
+-	;
+ }
+ 
+ #endif	/* CONFIG_X86_64 */

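Beyond reshuffling, the rewrite replaces the boot-time-patched alternative_input() sequence (NOPs swapped for fxsave) with an ordinary branch on a cached CPU feature bit, so the compiler sees all three save paths. In outline (stand-in declarations; names loosely follow the patch):

struct task_struct;
int  thread_has_xsave(struct task_struct *t); /* TS_XSAVE in the real code      */
int  use_fxsr(void);                          /* boot_cpu_has(X86_FEATURE_FXSR) */
void xsave(struct task_struct *t);
void fxsave(struct task_struct *t);
void fnsave(struct task_struct *t);

void fpu_save_init_outline(struct task_struct *tsk)
{
        if (thread_has_xsave(tsk))
                xsave(tsk);     /* newest: saves full extended state  */
        else if (use_fxsr())
                fxsave(tsk);    /* SSE era: saves, leaves state live  */
        else {
                fnsave(tsk);    /* legacy: saves and *reinitializes*, */
                return;         /* so no fnclex/cleanup is needed     */
        }
        /* fxsave/xsave paths fall through to exception-flag cleanup  */
}
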
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0004-x86-fpu-merge-fpu_save_init.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0004-x86-fpu-merge-fpu_save_init.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,119 @@
+From 826b8d08b8528d2b025021e43c9a5c16713f9b47 Mon Sep 17 00:00:00 2001
+From: Brian Gerst <brgerst at gmail.com>
+Date: Fri, 3 Sep 2010 21:17:19 -0400
+Subject: [PATCH 04/15] x86, fpu: Merge fpu_save_init()
+
+commit b2b57fe053c9cf8b8af5a0e826a465996afed0ff upstream.
+
+Make 64-bit use the 32-bit version of fpu_save_init().  Remove
+unused clear_fpu_state().
+
+Signed-off-by: Brian Gerst <brgerst at gmail.com>
+Acked-by: Pekka Enberg <penberg at kernel.org>
+Cc: Suresh Siddha <suresh.b.siddha at intel.com>
+LKML-Reference: <1283563039-3466-13-git-send-email-brgerst at gmail.com>
+Signed-off-by: H. Peter Anvin <hpa at linux.intel.com>
+[bwh: Backported to 2.6.32:
+ - We don't have struct fpu
+ - The AMD FXSAVE workaround looks a bit different
+ - Adjust context]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 45 +++++----------------------------------------
+ 1 file changed, 5 insertions(+), 40 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 6e4bfa9..a404dfe 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -81,31 +81,6 @@ static inline int fxrstor_checking(struct i387_fxsave_struct *fx)
+ 	return err;
+ }
+ 
+-/* AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
+-   is pending. Clear the x87 state here by setting it to fixed
+-   values. The kernel data segment can be sometimes 0 and sometimes
+-   new user value. Both should be ok.
+-   Use the PDA as safe address because it should be already in L1. */
+-static inline void clear_fpu_state(struct task_struct *tsk)
+-{
+-	struct xsave_struct *xstate = &tsk->thread.xstate->xsave;
+-	struct i387_fxsave_struct *fx = &tsk->thread.xstate->fxsave;
+-
+-	/*
+-	 * xsave header may indicate the init state of the FP.
+-	 */
+-	if ((task_thread_info(tsk)->status & TS_XSAVE) &&
+-	    !(xstate->xsave_hdr.xstate_bv & XSTATE_FP))
+-		return;
+-
+-	if (unlikely(fx->swd & X87_FSW_ES))
+-		asm volatile("fnclex");
+-	alternative_input(ASM_NOP8 ASM_NOP2,
+-			  "    emms\n"		/* clear stack tags */
+-			  "    fildl %%gs:0",	/* load to clear state */
+-			  X86_FEATURE_FXSAVE_LEAK);
+-}
+-
+ static inline int fxsave_user(struct i387_fxsave_struct __user *fx)
+ {
+ 	int err;
+@@ -157,16 +132,6 @@ static inline void fxsave(struct task_struct *tsk)
+ #endif
+ }
+ 
+-static inline void fpu_save_init(struct task_struct *tsk)
+-{
+-	if (task_thread_info(tsk)->status & TS_XSAVE)
+-		xsave(tsk);
+-	else
+-		fxsave(tsk);
+-
+-	clear_fpu_state(tsk);
+-}
+-
+ #else  /* CONFIG_X86_32 */
+ 
+ #ifdef CONFIG_MATH_EMULATION
+@@ -204,6 +169,8 @@ static inline void fxsave(struct task_struct *tsk)
+ 		     : [fx] "=m" (tsk->thread.xstate->fxsave));
+ }
+ 
++#endif	/* CONFIG_X86_64 */
++
+ /* We need a safe address that is cheap to find and that is already
+    in L1 during context switch. The best choices are unfortunately
+    different for UP and SMP */
+@@ -224,7 +191,7 @@ static inline void fpu_save_init(struct task_struct *tsk)
+ 		/*
+ 		 * xsave header may indicate the init state of the FP.
+ 		 */
+-		if (!(tsk->thread.xstate.xsave.xsave_hdr.xstate_bv & XSTATE_FP))
++		if (!(tsk->thread.xstate->xsave.xsave_hdr.xstate_bv & XSTATE_FP))
+ 			return;
+ 	} else if (use_fxsr()) {
+ 		fxsave(tsk);
+@@ -234,7 +201,7 @@ static inline void fpu_save_init(struct task_struct *tsk)
+ 		return;
+ 	}
+ 
+-	if (unlikely(tsk->thread.xstate.fxsave->swd & X87_FSW_ES))
++	if (unlikely(tsk->thread.xstate->fxsave.swd & X87_FSW_ES))
+ 		asm volatile("fnclex");
+ 
+ 	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
+@@ -244,13 +211,11 @@ static inline void fpu_save_init(struct task_struct *tsk)
+ 		asm volatile(
+ 			"fnclex\n\t"
+ 			"emms\n\t"
+-			"fildl %[addr]"        /* set F?P to defined value */
++			"fildl %P[addr]"        /* set F?P to defined value */
+ 			: : [addr] "m" (safe_address));
+ 	}
+ }
+ 
+-#endif	/* CONFIG_X86_64 */
+-
+ static inline void __save_init_fpu(struct task_struct *tsk)
+ {
+ 	fpu_save_init(tsk);

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0005-x86-32-fpu-fix-fpu-exception-handling-on-non-sse-sys.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0005-x86-32-fpu-fix-fpu-exception-handling-on-non-sse-sys.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,64 @@
+From 3ac347d0af42211f8ddf8fdf1d45c032c284adec Mon Sep 17 00:00:00 2001
+From: Ben Hutchings <ben at decadent.org.uk>
+Date: Thu, 29 Jan 2015 23:35:13 +0000
+Subject: [PATCH 05/15] x86-32, fpu: Fix FPU exception handling on non-SSE
+ systems
+
+commit f994d99cf140dbb637e49882891c89b3fd84becd upstream.
+
+On 32bit systems without SSE (that is, they use FSAVE/FRSTOR for FPU
+context switches), FPU exceptions in user mode cause Oopses, BUGs,
+recursive faults and other nasty things:
+
+fpu exception: 0000 [#1]
+last sysfs file: /sys/power/state
+Modules linked in: psmouse evdev pcspkr serio_raw [last unloaded: scsi_wait_scan]
+
+Pid: 1638, comm: fxsave-32-excep Not tainted 2.6.35-07798-g58a992b-dirty #633 VP3-596B-DD/VT82C597
+EIP: 0060:[<c1003527>] EFLAGS: 00010202 CPU: 0
+EIP is at math_error+0x1b4/0x1c8
+EAX: 00000003 EBX: cf9be7e0 ECX: 00000000 EDX: cf9c5c00
+ESI: cf9d9fb4 EDI: c1372db3 EBP: 00000010 ESP: cf9d9f1c
+DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068
+Process fxsave-32-excep (pid: 1638, ti=cf9d8000 task=cf9be7e0 task.ti=cf9d8000)
+Stack:
+00000000 00000301 00000004 00000000 00000000 cf9d3000 cf9da8f0 00000001
+<0> 00000004 cf9b6b60 c1019a6b c1019a79 00000020 00000242 000001b6 cf9c5380
+<0> cf806b40 cf791880 00000000 00000282 00000282 c108a213 00000020 cf9c5380
+Call Trace:
+[<c1019a6b>] ? need_resched+0x11/0x1a
+[<c1019a79>] ? should_resched+0x5/0x1f
+[<c108a213>] ? do_sys_open+0xbd/0xc7
+[<c108a213>] ? do_sys_open+0xbd/0xc7
+[<c100353b>] ? do_coprocessor_error+0x0/0x11
+[<c12d5965>] ? error_code+0x65/0x70
+Code: a8 20 74 30 c7 44 24 0c 06 00 03 00 8d 54 24 04 89 d9 b8 08 00 00 00 e8 9b 6d 02 00 eb 16 8b 93 5c 02 00 00 eb 05 e9 04 ff ff ff <9b> dd 32 9b e9 16 ff ff ff 81 c4 84 00 00 00 5b 5e 5f 5d c3 c6
+EIP: [<c1003527>] math_error+0x1b4/0x1c8 SS:ESP 0068:cf9d9f1c
+
+This usually continues in slight variations until the system is reset.
+
+This bug was introduced by commit 58a992b9cbaf449aeebd3575c3695a9eb5d95b5e:
+	x86-32, fpu: Rewrite fpu_save_init()
+
+Signed-off-by: Hans Rosenfeld <hans.rosenfeld at amd.com>
+Link: http://lkml.kernel.org/r/1302106003-366952-1-git-send-email-hans.rosenfeld@amd.com
+Signed-off-by: H. Peter Anvin <hpa at zytor.com>
+[bwh: Backported to 2.6.32: adjust context]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index a404dfe..f2d743e 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -196,7 +196,7 @@ static inline void fpu_save_init(struct task_struct *tsk)
+ 	} else if (use_fxsr()) {
+ 		fxsave(tsk);
+ 	} else {
+-		asm volatile("fsave %[fx]; fwait"
++		asm volatile("fnsave %[fx]; fwait"
+ 			     : [fx] "=m" (tsk->thread.xstate->fsave));
+ 		return;
+ 	}

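The one-character-looking fix above matters because "fsave" is the wait form of the instruction: the assembler emits it as wait + fnsave, so with an unmasked x87 exception pending the wait raises #MF in kernel mode before anything is saved, which is exactly the oops quoted above. "fnsave ...; fwait" stores (and reinitializes) the state first and only then synchronizes. A standalone, compile-clean sketch of the corrected sequence (108 bytes is the 32-bit fsave image size):

/* x86-32 only; cc -m32 -c x87save.c */
static inline void x87_save_noexcept(void *buf)
{
        /* fnsave: store + reinit with no implicit wait, so a pending
         * exception is captured in the image instead of being raised;
         * the trailing fwait re-synchronizes afterwards.             */
        __asm__ volatile("fnsave %0; fwait"
                         : "=m" (*(char (*)[108])buf));
}
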
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0006-i387-math_state_restore-isn-t-called-from-asm.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0006-i387-math_state_restore-isn-t-called-from-asm.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,51 @@
+From 6129b2eb7505342cc3f6eb8801c7e203ce950ab8 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds at linux-foundation.org>
+Date: Mon, 13 Feb 2012 13:47:25 -0800
+Subject: [PATCH 06/15] i387: math_state_restore() isn't called from asm
+
+commit be98c2cdb15ba26148cd2bd58a857d4f7759ed38 upstream.
+
+It was marked asmlinkage for some really old and stale legacy reasons.
+Fix that and the equally stale comment.
+
+Noticed when debugging the irq_fpu_usable() bugs.
+
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 2 +-
+ arch/x86/kernel/traps.c     | 6 +++---
+ 2 files changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index f2d743e..1cfee5c 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -25,7 +25,7 @@ extern unsigned int sig_xstate_size;
+ extern void fpu_init(void);
+ extern void mxcsr_feature_mask_init(void);
+ extern int init_fpu(struct task_struct *child);
+-extern asmlinkage void math_state_restore(void);
++extern void math_state_restore(void);
+ extern void __math_state_restore(void);
+ extern void init_thread_xstate(void);
+ extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 8a39a6c..c033a76 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -861,10 +861,10 @@ void __math_state_restore(void)
+  * Careful.. There are problems with IBM-designed IRQ13 behaviour.
+  * Don't touch unless you *really* know how it works.
+  *
+- * Must be called with kernel preemption disabled (in this case,
+- * local interrupts are disabled at the call-site in entry.S).
++ * Must be called with kernel preemption disabled (eg with local
++ * local interrupts disabled, as in the case of do_device_not_available).
+  */
+-asmlinkage void math_state_restore(void)
++void math_state_restore(void)
+ {
+ 	struct thread_info *thread = current_thread_info();
+ 	struct task_struct *tsk = thread->task;

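For reference, asmlinkage on x86-32 pins the C calling convention so hand-written assembly can call the function; roughly (simplified from the kernel headers):

/* x86-32, simplified: force all arguments onto the stack so entry.S
 * can set up calls without knowing the compiler's regparm setting.  */
#define asmlinkage __attribute__((regparm(0)))

/* math_state_restore() takes no arguments, so the annotation bought
 * nothing even for asm callers -- and after this patch it has none. */
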
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0007-i387-make-irq_fpu_usable-tests-more-robust.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0007-i387-make-irq_fpu_usable-tests-more-robust.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,130 @@
+From 4aa158931d97d230dc72701bb975e4fd4a92487f Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds at linux-foundation.org>
+Date: Mon, 13 Feb 2012 13:56:14 -0800
+Subject: [PATCH 07/15] i387: make irq_fpu_usable() tests more robust
+
+commit 5b1cbac37798805c1fee18c8cebe5c0a13975b17 upstream.
+
+Some code - especially the crypto layer - wants to use the x86
+FP/MMX/AVX register set in what may be interrupt (typically softirq)
+context.
+
+That *can* be ok, but the tests for when it was ok were somewhat
+suspect.  We cannot touch the thread-specific status bits either, so
+we'd better check that we're not going to try to save FP state or
+anything like that.
+
+Now, it may be that the TS bit is always cleared *before* we set the
+USEDFPU bit (and only set when we had already cleared the USEDFPU
+before), so the TS bit test may actually have been sufficient, but it
+certainly was not obviously so.
+
+So this explicitly verifies that we will not touch the TS_USEDFPU bit,
+and adds a few related sanity-checks.  Because it seems that somehow
+AES-NI is corrupting user FP state.  The cause is not clear, and this
+patch doesn't fix it, but while debugging it I really wanted the code to
+be more obviously correct and robust.
+
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+[bwh: Backported to 2.6.32: adjust context]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 54 ++++++++++++++++++++++++++++++++++++++-------
+ arch/x86/kernel/traps.c     |  1 +
+ 2 files changed, 47 insertions(+), 8 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 1cfee5c..9de7bd1 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -254,9 +254,54 @@ static inline void __clear_fpu(struct task_struct *tsk)
+ 	}
+ }
+ 
++/*
++ * Were we in an interrupt that interrupted kernel mode?
++ *
++ * We can do a kernel_fpu_begin/end() pair *ONLY* if that
++ * pair does nothing at all: TS_USEDFPU must be clear (so
++ * that we don't try to save the FPU state), and TS must
++ * be set (so that the clts/stts pair does nothing that is
++ * visible in the interrupted kernel thread).
++ */
++static inline bool interrupted_kernel_fpu_idle(void)
++{
++	return !(current_thread_info()->status & TS_USEDFPU) &&
++		(read_cr0() & X86_CR0_TS);
++}
++
++/*
++ * Were we in user mode (or vm86 mode) when we were
++ * interrupted?
++ *
++ * Doing kernel_fpu_begin/end() is ok if we are running
++ * in an interrupt context from user mode - we'll just
++ * save the FPU state as required.
++ */
++static inline bool interrupted_user_mode(void)
++{
++	struct pt_regs *regs = get_irq_regs();
++	return regs && user_mode_vm(regs);
++}
++
++/*
++ * Can we use the FPU in kernel mode with the
++ * whole "kernel_fpu_begin/end()" sequence?
++ *
++ * It's always ok in process context (ie "not interrupt")
++ * but it is sometimes ok even from an irq.
++ */
++static inline bool irq_fpu_usable(void)
++{
++	return !in_interrupt() ||
++		interrupted_user_mode() ||
++		interrupted_kernel_fpu_idle();
++}
++
+ static inline void kernel_fpu_begin(void)
+ {
+ 	struct thread_info *me = current_thread_info();
++
++	WARN_ON_ONCE(!irq_fpu_usable());
+ 	preempt_disable();
+ 	if (me->status & TS_USEDFPU)
+ 		__save_init_fpu(me->task);
+@@ -270,14 +315,6 @@ static inline void kernel_fpu_end(void)
+ 	preempt_enable();
+ }
+ 
+-static inline bool irq_fpu_usable(void)
+-{
+-	struct pt_regs *regs;
+-
+-	return !in_interrupt() || !(regs = get_irq_regs()) || \
+-		user_mode(regs) || (read_cr0() & X86_CR0_TS);
+-}
+-
+ /*
+  * Some instructions like VIA's padlock instructions generate a spurious
+  * DNA fault but don't modify SSE registers. And these instructions
+@@ -314,6 +351,7 @@ static inline void irq_ts_restore(int TS_state)
+  */
+ static inline void save_init_fpu(struct task_struct *tsk)
+ {
++	WARN_ON_ONCE(task_thread_info(tsk)->status & TS_USEDFPU);
+ 	preempt_disable();
+ 	__save_init_fpu(tsk);
+ 	stts();
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index c033a76..510b401 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -904,6 +904,7 @@ void math_emulate(struct math_emu_info *info)
+ dotraplinkage void __kprobes
+ do_device_not_available(struct pt_regs *regs, long error_code)
+ {
++	WARN_ON_ONCE(!user_mode_vm(regs));
+ #ifdef CONFIG_X86_32
+ 	if (read_cr0() & X86_CR0_EM) {
+ 		struct math_emu_info info = { };

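The intended call pattern for these tests, e.g. in a crypto driver wanting the SSE/AES-NI fast path from softirq context, looks like this (a sketch; declarations stand in for the kernel's own, and the two work paths are left abstract):

void kernel_fpu_begin(void);  /* saves interrupted FP state if needed */
void kernel_fpu_end(void);
int  irq_fpu_usable(void);

void encrypt_blocks(void)
{
        if (irq_fpu_usable()) {
                kernel_fpu_begin();
                /* ... SSE/AES-NI fast path ... */
                kernel_fpu_end();
        } else {
                /* ... integer fallback: we may have interrupted a
                 *     kernel thread mid-FPU, so keep hands off ...  */
        }
}
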
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0008-i387-fix-sense-of-sanity-check.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0008-i387-fix-sense-of-sanity-check.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,35 @@
+From 0a9d61bdac7af7a4b0b6145451731abe6891f951 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds at linux-foundation.org>
+Date: Wed, 15 Feb 2012 08:05:18 -0800
+Subject: [PATCH 08/15] i387: fix sense of sanity check
+
+commit c38e23456278e967f094b08247ffc3711b1029b2 upstream.
+
+The check for save_init_fpu() (introduced in commit 5b1cbac37798: "i387:
+make irq_fpu_usable() tests more robust") was the wrong way around, but
+I hadn't noticed, because my "tests" were bogus: the FPU exceptions are
+disabled by default, so even doing a divide by zero never actually
+triggers this code at all unless you do extra work to enable them.
+
+So if anybody did enable them, they'd get one spurious warning.
+
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 9de7bd1..1944c1e 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -351,7 +351,7 @@ static inline void irq_ts_restore(int TS_state)
+  */
+ static inline void save_init_fpu(struct task_struct *tsk)
+ {
+-	WARN_ON_ONCE(task_thread_info(tsk)->status & TS_USEDFPU);
++	WARN_ON_ONCE(!(task_thread_info(tsk)->status & TS_USEDFPU));
+ 	preempt_disable();
+ 	__save_init_fpu(tsk);
+ 	stts();

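The "bogus tests" remark is easy to reproduce from userspace: FP exceptions are masked by default, so a floating-point divide by zero quietly yields infinity and never takes the trap paths this check sits on. A small demo using glibc's feenableexcept() (on x86-64 the division goes through SSE/#XF rather than x87/#MF, but the masking default is the same):

#define _GNU_SOURCE
#include <fenv.h>
#include <stdio.h>

/* cc -o fpz fpz.c -lm && ./fpz */
int main(void)
{
        volatile double zero = 0.0;

        printf("masked: 1/0 = %f\n", 1.0 / zero);  /* prints inf      */
        feenableexcept(FE_DIVBYZERO);              /* unmask the trap */
        printf("unmasked: %f\n", 1.0 / zero);      /* SIGFPE here     */
        return 0;
}
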
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0009-i387-fix-x86-64-preemption-unsafe-user-stack-save-re.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0009-i387-fix-x86-64-preemption-unsafe-user-stack-save-re.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,162 @@
+From 57da77363671f2d10123eb5fc11e1e590afc5fb3 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds at linux-foundation.org>
+Date: Thu, 16 Feb 2012 09:15:04 -0800
+Subject: [PATCH 09/15] i387: fix x86-64 preemption-unsafe user stack
+ save/restore
+
+commit 15d8791cae75dca27bfda8ecfe87dca9379d6bb0 upstream.
+
+Commit 5b1cbac37798 ("i387: make irq_fpu_usable() tests more robust")
+added a sanity check to the #NM handler to verify that we never cause
+the "Device Not Available" exception in kernel mode.
+
+However, that check actually pinpointed a (fundamental) race where we do
+cause that exception as part of the signal stack FPU state save/restore
+code.
+
+Because we use the floating point instructions themselves to save and
+restore state directly from user mode, we cannot do that atomically with
+testing the TS_USEDFPU bit: the user mode access itself may cause a page
+fault, which causes a task switch, which saves and restores the FP/MMX
+state from the kernel buffers.
+
+This kind of "recursive" FP state save is fine per se, but it means that
+when the signal stack save/restore gets restarted, it will now take the
+'#NM' exception we originally tried to avoid.  With preemption this can
+happen even without the page fault - but because of the user access, we
+cannot just disable preemption around the save/restore instruction.
+
+There are various ways to solve this, including using the
+"enable/disable_page_fault()" helpers to not allow page faults at all
+during the sequence, and fall back to copying things by hand without the
+use of the native FP state save/restore instructions.
+
+However, the simplest thing to do is to just allow the #NM from kernel
+space, but fix the race in setting and clearing CR0.TS that this all
+exposed: the TS bit changes and the TS_USEDFPU bit absolutely have to be
+atomic wrt scheduling, so while the actual state save/restore can be
+interrupted and restarted, the act of actually clearing/setting CR0.TS
+and the TS_USEDFPU bit together must not.
+
+Instead of just adding random "preempt_disable/enable()" calls to what
+is already excessively ugly code, this introduces some helper functions
+that mostly mirror the "kernel_fpu_begin/end()" functionality, just for
+the user state instead.
+
+Those helper functions should probably eventually replace the other
+ad-hoc CR0.TS and TS_USEDFPU tests too, but I'll need to think about it
+some more: the task switching functionality in particular needs to
+expose the difference between the 'prev' and 'next' threads, while the
+new helper functions intentionally were written to only work with
+'current'.
+
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+[bwh: Backported to 2.6.32: adjust context]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 42 ++++++++++++++++++++++++++++++++++++++++++
+ arch/x86/kernel/traps.c     |  1 -
+ arch/x86/kernel/xsave.c     | 10 +++-------
+ 3 files changed, 45 insertions(+), 8 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 1944c1e..2daac1b 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -347,6 +347,48 @@ static inline void irq_ts_restore(int TS_state)
+ }
+ 
+ /*
++ * The question "does this thread have fpu access?"
++ * is slightly racy, since preemption could come in
++ * and revoke it immediately after the test.
++ *
++ * However, even in that very unlikely scenario,
++ * we can just assume we have FPU access - typically
++ * to save the FP state - we'll just take a #NM
++ * fault and get the FPU access back.
++ *
++ * The actual user_fpu_begin/end() functions
++ * need to be preemption-safe, though.
++ *
++ * NOTE! user_fpu_end() must be used only after you
++ * have saved the FP state, and user_fpu_begin() must
++ * be used only immediately before restoring it.
++ * These functions do not do any save/restore on
++ * their own.
++ */
++static inline int user_has_fpu(void)
++{
++	return current_thread_info()->status & TS_USEDFPU;
++}
++
++static inline void user_fpu_end(void)
++{
++	preempt_disable();
++	current_thread_info()->status &= ~TS_USEDFPU;
++	stts();
++	preempt_enable();
++}
++
++static inline void user_fpu_begin(void)
++{
++	preempt_disable();
++	if (!user_has_fpu()) {
++		clts();
++		current_thread_info()->status |= TS_USEDFPU;
++	}
++	preempt_enable();
++}
++
++/*
+  * These disable preemption on their own and are safe
+  */
+ static inline void save_init_fpu(struct task_struct *tsk)
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 510b401..c033a76 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -904,7 +904,6 @@ void math_emulate(struct math_emu_info *info)
+ dotraplinkage void __kprobes
+ do_device_not_available(struct pt_regs *regs, long error_code)
+ {
+-	WARN_ON_ONCE(!user_mode_vm(regs));
+ #ifdef CONFIG_X86_32
+ 	if (read_cr0() & X86_CR0_EM) {
+ 		struct math_emu_info info = { };
+diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c
+index c5ee17e..38a36f1 100644
+--- a/arch/x86/kernel/xsave.c
++++ b/arch/x86/kernel/xsave.c
+@@ -90,7 +90,7 @@ int save_i387_xstate(void __user *buf)
+ 	if (!used_math())
+ 		return 0;
+ 
+-	if (task_thread_info(tsk)->status & TS_USEDFPU) {
++	if (user_has_fpu()) {
+ 		/*
+ 	 	 * Start with clearing the user buffer. This will present a
+ 	 	 * clean context for the bytes not touched by the fxsave/xsave.
+@@ -106,8 +106,7 @@ int save_i387_xstate(void __user *buf)
+ 
+ 		if (err)
+ 			return err;
+-		task_thread_info(tsk)->status &= ~TS_USEDFPU;
+-		stts();
++		user_fpu_end();
+ 	} else {
+ 		if (__copy_to_user(buf, &tsk->thread.xstate->fxsave,
+ 				   xstate_size))
+@@ -221,10 +220,7 @@ int restore_i387_xstate(void __user *buf)
+ 			return err;
+ 	}
+ 
+-	if (!(task_thread_info(current)->status & TS_USEDFPU)) {
+-		clts();
+-		task_thread_info(current)->status |= TS_USEDFPU;
+-	}
++	user_fpu_begin();
+ 	if (task_thread_info(tsk)->status & TS_XSAVE)
+ 		err = restore_user_xstate(buf);
+ 	else

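The fix deserves a timeline. save_i387_xstate() runs fxsave/xsave with the user signal frame as the destination, so the instruction itself can page-fault, schedule, and hand the FPU to another task mid-sequence; that recursion is fine, but it means the surrounding flag/CR0.TS bookkeeping is the only part that must be preemption-atomic. That is precisely what the two new helpers wrap (sketch with stand-in declarations; set_used_fpu() is hypothetical shorthand for the TS_USEDFPU update):

void preempt_disable(void), preempt_enable(void);
void clts(void), stts(void);
int  user_has_fpu(void);        /* tests TS_USEDFPU          */
void set_used_fpu(int on);      /* hypothetical flag writer  */

/* Used only *after* the FP state has been saved to the user stack:
 * releasing ownership and re-arming #NM must not be split apart.  */
void user_fpu_end_sketch(void)
{
        preempt_disable();
        set_used_fpu(0);
        stts();
        preempt_enable();
}

/* Used only immediately *before* restoring from the user stack.   */
void user_fpu_begin_sketch(void)
{
        preempt_disable();
        if (!user_has_fpu()) {
                clts();
                set_used_fpu(1);
        }
        preempt_enable();
}
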
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0010-i387-move-ts_usedfpu-clearing-out-of-__save_init_fpu.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0010-i387-move-ts_usedfpu-clearing-out-of-__save_init_fpu.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,63 @@
+From f19efc638d693ce09dffd19cdc8d89d8d57de719 Mon Sep 17 00:00:00 2001
+From: Ben Hutchings <ben at decadent.org.uk>
+Date: Fri, 30 Jan 2015 00:10:55 +0000
+Subject: [PATCH 10/15] i387: move TS_USEDFPU clearing out of __save_init_fpu
+ and into callers
+
+commit b6c66418dcad0fcf83cd1d0a39482db37bf4fc41 upstream.
+
+Touching TS_USEDFPU without touching CR0.TS is confusing, so don't do
+it.  By moving it into the callers, we always do the TS_USEDFPU next to
+the CR0.TS accesses in the source code, and it's much easier to see how
+the two go hand in hand.
+
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+[bwh: Backported to 2.6.32: adjust context]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 9 ++++++---
+ 1 file changed, 6 insertions(+), 3 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 2daac1b..2dcac3b 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -219,7 +219,6 @@ static inline void fpu_save_init(struct task_struct *tsk)
+ static inline void __save_init_fpu(struct task_struct *tsk)
+ {
+ 	fpu_save_init(tsk);
+-	task_thread_info(tsk)->status &= ~TS_USEDFPU;
+ }
+ 
+ static inline int restore_fpu_checking(struct task_struct *tsk)
+@@ -240,6 +239,7 @@ static inline void __unlazy_fpu(struct task_struct *tsk)
+ {
+ 	if (task_thread_info(tsk)->status & TS_USEDFPU) {
+ 		__save_init_fpu(tsk);
++		task_thread_info(tsk)->status &= ~TS_USEDFPU;
+ 		stts();
+ 	} else
+ 		tsk->fpu_counter = 0;
+@@ -303,9 +303,11 @@ static inline void kernel_fpu_begin(void)
+ 
+ 	WARN_ON_ONCE(!irq_fpu_usable());
+ 	preempt_disable();
+-	if (me->status & TS_USEDFPU)
++	if (me->status & TS_USEDFPU) {
+ 		__save_init_fpu(me->task);
+-	else
++		me->status &= ~TS_USEDFPU;
++		/* We do 'stts()' in kernel_fpu_end() */
++	} else
+ 		clts();
+ }
+ 
+@@ -396,6 +398,7 @@ static inline void save_init_fpu(struct task_struct *tsk)
+ 	WARN_ON_ONCE(!(task_thread_info(tsk)->status & TS_USEDFPU));
+ 	preempt_disable();
+ 	__save_init_fpu(tsk);
++	task_thread_info(tsk)->status &= ~TS_USEDFPU;
+ 	stts();
+ 	preempt_enable();
+ }

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0011-i387-don-t-ever-touch-ts_usedfpu-directly-use-helper.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0011-i387-don-t-ever-touch-ts_usedfpu-directly-use-helper.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,198 @@
+From 55e24027b2bf56f4f9eaaf4325001febc6ffd683 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds at linux-foundation.org>
+Date: Thu, 16 Feb 2012 13:33:12 -0800
+Subject: [PATCH 11/15] i387: don't ever touch TS_USEDFPU directly, use helper
+ functions
+
+commit 6d59d7a9f5b723a7ac1925c136e93ec83c0c3043 upstream.
+
+This creates three helper functions that do the TS_USEDFPU accesses, and
+makes everybody that used to do it by hand use those helpers instead.
+
+In addition, there's a couple of helper functions for the "change both
+CR0.TS and TS_USEDFPU at the same time" case, and the places that do
+that together have been changed to use those.  That means that we have
+fewer random places that open-code this situation.
+
+The intent is partly to clarify the code without actually changing any
+semantics yet (since we clearly still have some hard to reproduce bug in
+this area), but also to make it much easier to use another approach
+entirely to caching the CR0.TS bit for software accesses.
+
+Right now we use a bit in the thread-info 'status' variable (this patch
+does not change that), but we might want to make it a full field of its
+own or even make it a per-cpu variable.
+
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+[bwh: Backported to 2.6.32:
+ - Adjust context
+ - Drop inapplicable changes to KVM and xsave.c]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h | 75 +++++++++++++++++++++++++++++++++------------
+ arch/x86/kernel/traps.c     |  2 +-
+ 2 files changed, 56 insertions(+), 21 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 2dcac3b..b15de2f 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -230,6 +230,47 @@ static inline int restore_fpu_checking(struct task_struct *tsk)
+ }
+ 
+ /*
++ * Software FPU state helpers. Careful: these need to
++ * be preemption protection *and* they need to be
++ * properly paired with the CR0.TS changes!
++ */
++static inline int __thread_has_fpu(struct thread_info *ti)
++{
++	return ti->status & TS_USEDFPU;
++}
++
++/* Must be paired with an 'stts' after! */
++static inline void __thread_clear_has_fpu(struct thread_info *ti)
++{
++	ti->status &= ~TS_USEDFPU;
++}
++
++/* Must be paired with a 'clts' before! */
++static inline void __thread_set_has_fpu(struct thread_info *ti)
++{
++	ti->status |= TS_USEDFPU;
++}
++
++/*
++ * Encapsulate the CR0.TS handling together with the
++ * software flag.
++ *
++ * These generally need preemption protection to work,
++ * do try to avoid using these on their own.
++ */
++static inline void __thread_fpu_end(struct thread_info *ti)
++{
++	__thread_clear_has_fpu(ti);
++	stts();
++}
++
++static inline void __thread_fpu_begin(struct thread_info *ti)
++{
++	clts();
++	__thread_set_has_fpu(ti);
++}
++
++/*
+  * Signal frame handlers...
+  */
+ extern int save_i387_xstate(void __user *buf);
+@@ -237,20 +278,18 @@ extern int restore_i387_xstate(void __user *buf);
+ 
+ static inline void __unlazy_fpu(struct task_struct *tsk)
+ {
+-	if (task_thread_info(tsk)->status & TS_USEDFPU) {
++	if (__thread_has_fpu(task_thread_info(tsk))) {
+ 		__save_init_fpu(tsk);
+-		task_thread_info(tsk)->status &= ~TS_USEDFPU;
+-		stts();
++		__thread_fpu_end(task_thread_info(tsk));
+ 	} else
+ 		tsk->fpu_counter = 0;
+ }
+ 
+ static inline void __clear_fpu(struct task_struct *tsk)
+ {
+-	if (task_thread_info(tsk)->status & TS_USEDFPU) {
++	if (__thread_has_fpu(task_thread_info(tsk))) {
+ 		tolerant_fwait();
+-		task_thread_info(tsk)->status &= ~TS_USEDFPU;
+-		stts();
++		__thread_fpu_end(task_thread_info(tsk));
+ 	}
+ }
+ 
+@@ -258,14 +297,14 @@ static inline void __clear_fpu(struct task_struct *tsk)
+  * Were we in an interrupt that interrupted kernel mode?
+  *
+  * We can do a kernel_fpu_begin/end() pair *ONLY* if that
+- * pair does nothing at all: TS_USEDFPU must be clear (so
++ * pair does nothing at all: the thread must not have fpu (so
+  * that we don't try to save the FPU state), and TS must
+  * be set (so that the clts/stts pair does nothing that is
+  * visible in the interrupted kernel thread).
+  */
+ static inline bool interrupted_kernel_fpu_idle(void)
+ {
+-	return !(current_thread_info()->status & TS_USEDFPU) &&
++	return !__thread_has_fpu(current_thread_info()) &&
+ 		(read_cr0() & X86_CR0_TS);
+ }
+ 
+@@ -303,9 +342,9 @@ static inline void kernel_fpu_begin(void)
+ 
+ 	WARN_ON_ONCE(!irq_fpu_usable());
+ 	preempt_disable();
+-	if (me->status & TS_USEDFPU) {
++	if (__thread_has_fpu(me)) {
+ 		__save_init_fpu(me->task);
+-		me->status &= ~TS_USEDFPU;
++		__thread_clear_has_fpu(me);
+ 		/* We do 'stts()' in kernel_fpu_end() */
+ 	} else
+ 		clts();
+@@ -369,24 +408,21 @@ static inline void irq_ts_restore(int TS_state)
+  */
+ static inline int user_has_fpu(void)
+ {
+-	return current_thread_info()->status & TS_USEDFPU;
++	return __thread_has_fpu(current_thread_info());
+ }
+ 
+ static inline void user_fpu_end(void)
+ {
+ 	preempt_disable();
+-	current_thread_info()->status &= ~TS_USEDFPU;
+-	stts();
++	__thread_fpu_end(current_thread_info());
+ 	preempt_enable();
+ }
+ 
+ static inline void user_fpu_begin(void)
+ {
+ 	preempt_disable();
+-	if (!user_has_fpu()) {
+-		clts();
+-		current_thread_info()->status |= TS_USEDFPU;
+-	}
++	if (!user_has_fpu())
++		__thread_fpu_begin(current_thread_info());
+ 	preempt_enable();
+ }
+ 
+@@ -395,11 +431,10 @@ static inline void user_fpu_begin(void)
+  */
+ static inline void save_init_fpu(struct task_struct *tsk)
+ {
+-	WARN_ON_ONCE(!(task_thread_info(tsk)->status & TS_USEDFPU));
++	WARN_ON_ONCE(!__thread_has_fpu(task_thread_info(tsk)));
+ 	preempt_disable();
+ 	__save_init_fpu(tsk);
+-	task_thread_info(tsk)->status &= ~TS_USEDFPU;
+-	stts();
++	__thread_fpu_end(task_thread_info(tsk));
+ 	preempt_enable();
+ }
+ 
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index c033a76..41e3161 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -850,7 +850,7 @@ void __math_state_restore(void)
+ 		return;
+ 	}
+ 
+-	thread->status |= TS_USEDFPU;	/* So we fnsave on switch_to() */
++	__thread_set_has_fpu(thread);	/* clts in caller! */
+ 	tsk->fpu_counter++;
+ }
+ 

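The invariant these helpers encode, made explicit: a thread owns the FPU exactly when TS_USEDFPU is set and CR0.TS is clear, so the two must always move together and in the right order (sketch; stand-ins as before):

void clts(void), stts(void);
void set_flag(void), clear_flag(void);  /* TS_USEDFPU, hypothetical */

void thread_fpu_begin_sketch(void)  /* claim:   hw first, then flag */
{
        clts();
        set_flag();
}

void thread_fpu_end_sketch(void)    /* release: flag first, then hw */
{
        clear_flag();
        stts();
}

Both still rely on the caller for preemption protection, which is why the diff keeps them double-underscored.
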
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0012-i387-do-not-preload-fpu-state-at-task-switch-time.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0012-i387-do-not-preload-fpu-state-at-task-switch-time.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,200 @@
+From af98079f5a72cf84c80fba10a851ba9a27132f90 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds at linux-foundation.org>
+Date: Thu, 16 Feb 2012 15:45:23 -0800
+Subject: [PATCH 12/15] i387: do not preload FPU state at task switch time
+
+commit b3b0870ef3ffed72b92415423da864f440f57ad6 upstream.
+
+Yes, taking the trap to re-load the FPU/MMX state is expensive, but so
+is spending several days looking for a bug in the state save/restore
+code.  And the preload code has some rather subtle interactions with
+both paravirtualization support and segment state restore, so it's not
+nearly as simple as it should be.
+
+Also, now that we no longer necessarily depend on a single bit (ie
+TS_USEDFPU) for keeping track of the state of the FPU, we might be able
+to do better.  If we are really switching between two processes that
+keep touching the FP state, save/restore is inevitable, but in the case
+of having one process that does most of the FPU usage, we may actually
+be able to do much better than the preloading.
+
+In particular, we may be able to keep track of which CPU the process ran
+on last, and also per CPU keep track of which process' FP state that CPU
+has.  For modern CPU's that don't destroy the FPU contents on save time,
+that would allow us to do a lazy restore by just re-enabling the
+existing FPU state - with no restore cost at all!
+
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+[bwh: Backported to 2.6.32: adjust for lack of struct fpu]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h  |  1 -
+ arch/x86/kernel/process_32.c | 20 --------------------
+ arch/x86/kernel/process_64.c | 22 ----------------------
+ arch/x86/kernel/traps.c      | 35 +++++++++++------------------------
+ 4 files changed, 11 insertions(+), 67 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index b15de2f..987f6e0 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -26,7 +26,6 @@ extern void fpu_init(void);
+ extern void mxcsr_feature_mask_init(void);
+ extern int init_fpu(struct task_struct *child);
+ extern void math_state_restore(void);
+-extern void __math_state_restore(void);
+ extern void init_thread_xstate(void);
+ extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);
+ 
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index c40c432..4d5508f 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -346,23 +346,11 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 				 *next = &next_p->thread;
+ 	int cpu = smp_processor_id();
+ 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+-	bool preload_fpu;
+ 
+ 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
+ 
+-	/*
+-	 * If the task has used fpu the last 5 timeslices, just do a full
+-	 * restore of the math state immediately to avoid the trap; the
+-	 * chances of needing FPU soon are obviously high now
+-	 */
+-	preload_fpu = tsk_used_math(next_p) && next_p->fpu_counter > 5;
+-
+ 	__unlazy_fpu(prev_p);
+ 
+-	/* we're going to use this soon, after a few expensive things */
+-	if (preload_fpu)
+-		prefetch(next->xstate);
+-
+ 	/*
+ 	 * Reload esp0.
+ 	 */
+@@ -401,11 +389,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 		     task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
+ 		__switch_to_xtra(prev_p, next_p, tss);
+ 
+-	/* If we're going to preload the fpu context, make sure clts
+-	   is run while we're batching the cpu state updates. */
+-	if (preload_fpu)
+-		clts();
+-
+ 	/*
+ 	 * Leave lazy mode, flushing any hypercalls made here.
+ 	 * This must be done before restoring TLS segments so
+@@ -415,9 +398,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	 */
+ 	arch_end_context_switch(next_p);
+ 
+-	if (preload_fpu)
+-		__math_state_restore();
+-
+ 	/*
+ 	 * Restore %gs if needed (which is common)
+ 	 */
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 84e87dd..b5f2f30 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -381,18 +381,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	int cpu = smp_processor_id();
+ 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+ 	unsigned fsindex, gsindex;
+-	bool preload_fpu;
+-
+-	/*
+-	 * If the task has used fpu the last 5 timeslices, just do a full
+-	 * restore of the math state immediately to avoid the trap; the
+-	 * chances of needing FPU soon are obviously high now
+-	 */
+-	preload_fpu = tsk_used_math(next_p) && next_p->fpu_counter > 5;
+-
+-	/* we're going to use this soon, after a few expensive things */
+-	if (preload_fpu)
+-		prefetch(next->xstate);
+ 
+ 	/*
+ 	 * Reload esp0, LDT and the page table pointer:
+@@ -425,10 +413,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	/* Must be after DS reload */
+ 	__unlazy_fpu(prev_p);
+ 
+-	/* Make sure cpu is ready for new context */
+-	if (preload_fpu)
+-		clts();
+-
+ 	/*
+ 	 * Leave lazy mode, flushing any hypercalls made here.
+ 	 * This must be done before restoring TLS segments so
+@@ -487,12 +471,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 		     task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
+ 		__switch_to_xtra(prev_p, next_p, tss);
+ 
+-	/*
+-	 * Preload the FPU context, now that we've determined that the
+-	 * task is likely to be using it. 
+-	 */
+-	if (preload_fpu)
+-		__math_state_restore();
+ 	return prev_p;
+ }
+ 
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 41e3161..2b20deb 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -833,28 +833,6 @@ asmlinkage void __attribute__((weak)) smp_threshold_interrupt(void)
+ }
+ 
+ /*
+- * __math_state_restore assumes that cr0.TS is already clear and the
+- * fpu state is all ready for use.  Used during context switch.
+- */
+-void __math_state_restore(void)
+-{
+-	struct thread_info *thread = current_thread_info();
+-	struct task_struct *tsk = thread->task;
+-
+-	/*
+-	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
+-	 */
+-	if (unlikely(restore_fpu_checking(tsk))) {
+-		stts();
+-		force_sig(SIGSEGV, tsk);
+-		return;
+-	}
+-
+-	__thread_set_has_fpu(thread);	/* clts in caller! */
+-	tsk->fpu_counter++;
+-}
+-
+-/*
+  * 'math_state_restore()' saves the current math information in the
+  * old math state array, and gets the new ones from the current task
+  *
+@@ -884,9 +862,18 @@ void math_state_restore(void)
+ 		local_irq_disable();
+ 	}
+ 
+-	clts();				/* Allow maths ops (or we recurse) */
++	__thread_fpu_begin(thread);
+ 
+-	__math_state_restore();
++	/*
++	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
++	 */
++	if (unlikely(restore_fpu_checking(tsk))) {
++		__thread_fpu_end(thread);
++		force_sig(SIGSEGV, tsk);
++		return;
++	}
++
++	tsk->fpu_counter++;
+ }
+ EXPORT_SYMBOL_GPL(math_state_restore);
+ 

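With the preload gone, the full lifecycle after a context switch is save-eagerly, restore-on-trap; end to end (stand-in declarations, mirroring the hunks above):

struct task_struct;
void __unlazy_fpu(struct task_struct *p);  /* save + flag clear + stts */
void __thread_fpu_begin_cur(void);         /* clts + flag set (sketch) */
void __thread_fpu_end_cur(void);
int  restore_fpu_checking(struct task_struct *p);
void force_sigsegv_on(struct task_struct *p); /* stand-in for force_sig */

/* 1. __switch_to() now only ever saves:  __unlazy_fpu(prev);
 *    'next' starts running with CR0.TS still set.
 * 2. next's first FP instruction -> #NM -> math_state_restore():  */
void math_state_restore_outline(struct task_struct *next)
{
        __thread_fpu_begin_cur();           /* claim FPU, allow FP ops */
        if (restore_fpu_checking(next)) {   /* paranoid restore        */
                __thread_fpu_end_cur();     /* corrupt state: give up  */
                force_sigsegv_on(next);
        }
}

The cost is one trap per first FP use after a switch; patch 15 in this series re-introduces the preload once the bookkeeping is sound.
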
Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0013-i387-move-amd-k7-k8-fpu-fxsave-fxrstor-workaround-fr.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0013-i387-move-amd-k7-k8-fpu-fxsave-fxrstor-workaround-fr.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,137 @@
+From bdfce76118a9f86585ddab151987ed2233b99e8f Mon Sep 17 00:00:00 2001
+From: Ben Hutchings <ben at decadent.org.uk>
+Date: Thu, 29 Jan 2015 22:36:11 +0000
+Subject: [PATCH 13/15] i387: move AMD K7/K8 fpu fxsave/fxrstor workaround from
+ save to restore
+
+commit 4903062b5485f0e2c286a23b44c9b59d9b017d53 upstream.
+
+The AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception is
+pending.  In order to not leak FIP state from one process to another, we
+need to do a floating point load after the fxsave of the old process,
+and before the fxrstor of the new FPU state.  That resets the state to
+the (uninteresting) kernel load, rather than some potentially sensitive
+user information.
+
+We used to do this directly after the FPU state save, but that is
+actually very inconvenient, since it
+
+ (a) corrupts what is potentially perfectly good FPU state that we might
+     want to lazy avoid restoring later and
+
+ (b) on x86-64 it resulted in a very annoying ordering constraint, where
+     "__unlazy_fpu()" in the task switch needs to be delayed until after
+     the DS segment has been reloaded just to get the new DS value.
+
+Coupling it to the fxrstor instead of the fxsave automatically avoids
+both of these issues, and also ensures that we only do it when actually
+necessary (the FP state after a save may never actually get used).  It's
+simply a much more natural place for the leaked state cleanup.
+
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+[bwh: Backported to 2.6.32: The AMD FXSAVE workaround has already been
+ modified by "x86, fpu, amd: Clear exceptions in AMD FXSAVE
+ workaround" which was applied later than this upstream; move that fix
+ into math_state_restore()]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h  | 20 --------------------
+ arch/x86/kernel/process_64.c |  5 ++---
+ arch/x86/kernel/traps.c      | 15 +++++++++++++++
+ 3 files changed, 17 insertions(+), 23 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 987f6e0..e2890ee3 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -170,15 +170,6 @@ static inline void fxsave(struct task_struct *tsk)
+ 
+ #endif	/* CONFIG_X86_64 */
+ 
+-/* We need a safe address that is cheap to find and that is already
+-   in L1 during context switch. The best choices are unfortunately
+-   different for UP and SMP */
+-#ifdef CONFIG_SMP
+-#define safe_address (__per_cpu_offset[0])
+-#else
+-#define safe_address (kstat_cpu(0).cpustat.user)
+-#endif
+-
+ /*
+  * These must be called with preempt disabled
+  */
+@@ -202,17 +193,6 @@ static inline void fpu_save_init(struct task_struct *tsk)
+ 
+ 	if (unlikely(tsk->thread.xstate->fxsave.swd & X87_FSW_ES))
+ 		asm volatile("fnclex");
+-
+-	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
+-	   is pending.  Clear the x87 state here by setting it to fixed
+-	   values. safe_address is a random variable that should be in L1 */
+-	if (unlikely(boot_cpu_has(X86_FEATURE_FXSAVE_LEAK))) {
+-		asm volatile(
+-			"fnclex\n\t"
+-			"emms\n\t"
+-			"fildl %P[addr]"        /* set F?P to defined value */
+-			: : [addr] "m" (safe_address));
+-	}
+ }
+ 
+ static inline void __save_init_fpu(struct task_struct *tsk)
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index b5f2f30..d8040a6 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -382,6 +382,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+ 	unsigned fsindex, gsindex;
+ 
++	__unlazy_fpu(prev_p);
++
+ 	/*
+ 	 * Reload esp0, LDT and the page table pointer:
+ 	 */
+@@ -410,9 +412,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 
+ 	load_TLS(next, cpu);
+ 
+-	/* Must be after DS reload */
+-	__unlazy_fpu(prev_p);
+-
+ 	/*
+ 	 * Leave lazy mode, flushing any hypercalls made here.
+ 	 * This must be done before restoring TLS segments so
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 2b20deb..585f37b 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -847,6 +847,10 @@ void math_state_restore(void)
+ 	struct thread_info *thread = current_thread_info();
+ 	struct task_struct *tsk = thread->task;
+ 
++	/* We need a safe address that is cheap to find and that is already
++	   in L1. We just brought in "thread->task", so use that */
++#define safe_address (thread->task)
++
+ 	if (!tsk_used_math(tsk)) {
+ 		local_irq_enable();
+ 		/*
+@@ -864,6 +868,17 @@ void math_state_restore(void)
+ 
+ 	__thread_fpu_begin(thread);
+ 
++	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
++	   is pending.  Clear the x87 state here by setting it to fixed
++	   values. safe_address is a random variable that should be in L1 */
++	if (unlikely(boot_cpu_has(X86_FEATURE_FXSAVE_LEAK))) {
++		asm volatile(
++			"fnclex\n\t"
++			"emms\n\t"
++			"fildl %P[addr]"	/* set F?P to defined value */
++			: : [addr] "m" (safe_address));
++	}
++
+ 	/*
+ 	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
+ 	 */

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0014-i387-move-ts_usedfpu-flag-from-thread_info-to-task_s.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0014-i387-move-ts_usedfpu-flag-from-thread_info-to-task_s.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,275 @@
+From 2469b293db86d90cf811d62dcbbe369ddd6282b7 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds at linux-foundation.org>
+Date: Fri, 17 Feb 2012 21:48:54 -0800
+Subject: [PATCH 14/15] i387: move TS_USEDFPU flag from thread_info to
+ task_struct
+
+commit f94edacf998516ac9d849f7bc6949a703977a7f3 upstream.
+
+This moves the bit that indicates whether a thread has ownership of the
+FPU from the TS_USEDFPU bit in thread_info->status to a word of its own
+(called 'has_fpu') in task_struct->thread.has_fpu.
+
+This fixes two independent bugs at the same time:
+
+ - changing 'thread_info->status' from the scheduler causes nasty
+   problems for the other users of that variable, since it is defined to
+   be thread-synchronous (that's what the "TS_" part of the naming was
+   supposed to indicate).
+
+   So perfectly valid code could (and did) do
+
+	ti->status |= TS_RESTORE_SIGMASK;
+
+   and the compiler was free to do that as separate load, 'or' and store
+   instructions, which can cause problems with preemption, since a task
+   switch could happen in between, and change the TS_USEDFPU bit. The
+   change to TS_USEDFPU would be overwritten by the final store.
+
+   In practice, this seldom happened, though, because the 'status' field
+   was seldom used more than once, so gcc would generally tend to
+   generate code that used a read-modify-write instruction and thus
+   happened to avoid this problem - RMW instructions are naturally low
+   fat and preemption-safe.
+
+ - On x86-32, the current_thread_info() pointer would, during interrupts
+   and softirqs, point to a *copy* of the real thread_info, because
+   x86-32 uses %esp to calculate the thread_info address, and thus the
+   separate irq (and softirq) stacks would cause these kinds of odd
+   thread_info copy aliases.
+
+   This is normally not a problem, since interrupts aren't supposed to
+   look at thread information anyway (what thread is running at
+   interrupt time really isn't very well-defined), but it confused the
+   heck out of irq_fpu_usable() and the code that tried to squirrel
+   away the FPU state.
+
+   (It also caused untold confusion for us poor kernel developers).
+
+It also turns out that using 'task_struct' is actually much more natural
+for most of the call sites that care about the FPU state, since they
+tend to work with the task struct for other reasons anyway (i.e.
+scheduling).  And the FPU data that we are going to save/restore is
+found there too.
+
+Thanks to Arjan Van De Ven <arjan at linux.intel.com> for pointing us to
+the %esp issue.
+
+Cc: Arjan van de Ven <arjan at linux.intel.com>
+Reported-and-tested-by: Raphael Prevost <raphael at buro.asia>
+Acked-and-tested-by: Suresh Siddha <suresh.b.siddha at intel.com>
+Tested-by: Peter Anvin <hpa at zytor.com>
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+[bwh: Backported to 2.6.32:
+ - Adjust context
+ - Drop inapplicable changes to KVM and xsave.c]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h        | 44 +++++++++++++++++++-------------------
+ arch/x86/include/asm/processor.h   |  1 +
+ arch/x86/include/asm/thread_info.h |  2 --
+ arch/x86/kernel/traps.c            | 11 +++++-----
+ 4 files changed, 28 insertions(+), 30 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index e2890ee3..99711b0 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -213,21 +213,21 @@ static inline int restore_fpu_checking(struct task_struct *tsk)
+  * be preemption protection *and* they need to be
+  * properly paired with the CR0.TS changes!
+  */
+-static inline int __thread_has_fpu(struct thread_info *ti)
++static inline int __thread_has_fpu(struct task_struct *tsk)
+ {
+-	return ti->status & TS_USEDFPU;
++	return tsk->thread.has_fpu;
+ }
+ 
+ /* Must be paired with an 'stts' after! */
+-static inline void __thread_clear_has_fpu(struct thread_info *ti)
++static inline void __thread_clear_has_fpu(struct task_struct *tsk)
+ {
+-	ti->status &= ~TS_USEDFPU;
++	tsk->thread.has_fpu = 0;
+ }
+ 
+ /* Must be paired with a 'clts' before! */
+-static inline void __thread_set_has_fpu(struct thread_info *ti)
++static inline void __thread_set_has_fpu(struct task_struct *tsk)
+ {
+-	ti->status |= TS_USEDFPU;
++	tsk->thread.has_fpu = 1;
+ }
+ 
+ /*
+@@ -237,16 +237,16 @@ static inline void __thread_set_has_fpu(struct thread_info *ti)
+  * These generally need preemption protection to work,
+  * do try to avoid using these on their own.
+  */
+-static inline void __thread_fpu_end(struct thread_info *ti)
++static inline void __thread_fpu_end(struct task_struct *tsk)
+ {
+-	__thread_clear_has_fpu(ti);
++	__thread_clear_has_fpu(tsk);
+ 	stts();
+ }
+ 
+-static inline void __thread_fpu_begin(struct thread_info *ti)
++static inline void __thread_fpu_begin(struct task_struct *tsk)
+ {
+ 	clts();
+-	__thread_set_has_fpu(ti);
++	__thread_set_has_fpu(tsk);
+ }
+ 
+ /*
+@@ -257,18 +257,18 @@ extern int restore_i387_xstate(void __user *buf);
+ 
+ static inline void __unlazy_fpu(struct task_struct *tsk)
+ {
+-	if (__thread_has_fpu(task_thread_info(tsk))) {
++	if (__thread_has_fpu(tsk)) {
+ 		__save_init_fpu(tsk);
+-		__thread_fpu_end(task_thread_info(tsk));
++		__thread_fpu_end(tsk);
+ 	} else
+ 		tsk->fpu_counter = 0;
+ }
+ 
+ static inline void __clear_fpu(struct task_struct *tsk)
+ {
+-	if (__thread_has_fpu(task_thread_info(tsk))) {
++	if (__thread_has_fpu(tsk)) {
+ 		tolerant_fwait();
+-		__thread_fpu_end(task_thread_info(tsk));
++		__thread_fpu_end(tsk);
+ 	}
+ }
+ 
+@@ -283,7 +283,7 @@ static inline void __clear_fpu(struct task_struct *tsk)
+  */
+ static inline bool interrupted_kernel_fpu_idle(void)
+ {
+-	return !__thread_has_fpu(current_thread_info()) &&
++	return !__thread_has_fpu(current) &&
+ 		(read_cr0() & X86_CR0_TS);
+ }
+ 
+@@ -317,12 +317,12 @@ static inline bool irq_fpu_usable(void)
+ 
+ static inline void kernel_fpu_begin(void)
+ {
+-	struct thread_info *me = current_thread_info();
++	struct task_struct *me = current;
+ 
+ 	WARN_ON_ONCE(!irq_fpu_usable());
+ 	preempt_disable();
+ 	if (__thread_has_fpu(me)) {
+-		__save_init_fpu(me->task);
++		__save_init_fpu(me);
+ 		__thread_clear_has_fpu(me);
+ 		/* We do 'stts()' in kernel_fpu_end() */
+ 	} else
+@@ -387,13 +387,13 @@ static inline void irq_ts_restore(int TS_state)
+  */
+ static inline int user_has_fpu(void)
+ {
+-	return __thread_has_fpu(current_thread_info());
++	return __thread_has_fpu(current);
+ }
+ 
+ static inline void user_fpu_end(void)
+ {
+ 	preempt_disable();
+-	__thread_fpu_end(current_thread_info());
++	__thread_fpu_end(current);
+ 	preempt_enable();
+ }
+ 
+@@ -401,7 +401,7 @@ static inline void user_fpu_begin(void)
+ {
+ 	preempt_disable();
+ 	if (!user_has_fpu())
+-		__thread_fpu_begin(current_thread_info());
++		__thread_fpu_begin(current);
+ 	preempt_enable();
+ }
+ 
+@@ -410,10 +410,10 @@ static inline void user_fpu_begin(void)
+  */
+ static inline void save_init_fpu(struct task_struct *tsk)
+ {
+-	WARN_ON_ONCE(!__thread_has_fpu(task_thread_info(tsk)));
++	WARN_ON_ONCE(!__thread_has_fpu(tsk));
+ 	preempt_disable();
+ 	__save_init_fpu(tsk);
+-	__thread_fpu_end(task_thread_info(tsk));
++	__thread_fpu_end(tsk);
+ 	preempt_enable();
+ }
+ 
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index fa04dea..45bc73b 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -456,6 +456,7 @@ struct thread_struct {
+ 	unsigned long		error_code;
+ 	/* floating point and extended processor state */
+ 	union thread_xstate	*xstate;
++	unsigned long		has_fpu;
+ #ifdef CONFIG_X86_32
+ 	/* Virtual 86 mode info */
+ 	struct vm86_struct __user *vm86_info;
+diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
+index 19c3ce4..b7b8fd4 100644
+--- a/arch/x86/include/asm/thread_info.h
++++ b/arch/x86/include/asm/thread_info.h
+@@ -235,8 +235,6 @@ static inline struct thread_info *current_thread_info(void)
+  * ever touches our thread-synchronous status, so we don't
+  * have to worry about atomic accesses.
+  */
+-#define TS_USEDFPU		0x0001	/* FPU was used by this task
+-					   this quantum (SMP) */
+ #define TS_COMPAT		0x0002	/* 32bit syscall active (64BIT)*/
+ #define TS_POLLING		0x0004	/* true if in idle loop
+ 					   and not sleeping */
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 585f37b..83642b2 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -844,12 +844,11 @@ asmlinkage void __attribute__((weak)) smp_threshold_interrupt(void)
+  */
+ void math_state_restore(void)
+ {
+-	struct thread_info *thread = current_thread_info();
+-	struct task_struct *tsk = thread->task;
++	struct task_struct *tsk = current;
+ 
+ 	/* We need a safe address that is cheap to find and that is already
+-	   in L1. We just brought in "thread->task", so use that */
+-#define safe_address (thread->task)
++	   in L1. We're just bringing in "tsk->thread.has_fpu", so use that */
++#define safe_address (tsk->thread.has_fpu)
+ 
+ 	if (!tsk_used_math(tsk)) {
+ 		local_irq_enable();
+@@ -866,7 +865,7 @@ void math_state_restore(void)
+ 		local_irq_disable();
+ 	}
+ 
+-	__thread_fpu_begin(thread);
++	__thread_fpu_begin(tsk);
+ 
+ 	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
+ 	   is pending.  Clear the x87 state here by setting it to fixed
+@@ -883,7 +882,7 @@ void math_state_restore(void)
+ 	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
+ 	 */
+ 	if (unlikely(restore_fpu_checking(tsk))) {
+-		__thread_fpu_end(thread);
++		__thread_fpu_end(tsk);
+ 		force_sig(SIGSEGV, tsk);
+ 		return;
+ 	}
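
The first bug in the log above is easiest to see written out: a plain |= on a word that a context switch may also modify can compile to distinct load, OR and store instructions, and a preemption between the load and the store silently discards the other side's update. A sketch of that lost-update window (flag values illustrative only):

    #define TS_USEDFPU          0x0001
    #define TS_RESTORE_SIGMASK  0x0010

    static unsigned long status = TS_USEDFPU;

    /* What "ti->status |= TS_RESTORE_SIGMASK" may compile down to: */
    static void set_restore_sigmask_racy(void)
    {
            unsigned long tmp = status;   /* load   */
            tmp |= TS_RESTORE_SIGMASK;    /* modify */
            /* a task switch here can clear TS_USEDFPU in 'status',
             * but the stale copy in 'tmp' still has the bit set */
            status = tmp;                 /* store: that clear is lost */
    }

    int main(void)
    {
            set_restore_sigmask_racy();
            return status == (TS_USEDFPU | TS_RESTORE_SIGMASK) ? 0 : 1;
    }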

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0015-i387-re-introduce-fpu-state-preloading-at-context-sw.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/0015-i387-re-introduce-fpu-state-preloading-at-context-sw.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,358 @@
+From 21da3a8a1c6813caef2e50ed06521a443b1c252e Mon Sep 17 00:00:00 2001
+From: Ben Hutchings <ben at decadent.org.uk>
+Date: Fri, 30 Jan 2015 00:34:44 +0000
+Subject: [PATCH 15/15] i387: re-introduce FPU state preloading at context
+ switch time
+
+commit 34ddc81a230b15c0e345b6b253049db731499f7e upstream.
+
+After all the FPU state cleanups and finally finding the problem that
+caused all our FPU save/restore problems, this re-introduces the
+preloading of FPU state that was removed in commit b3b0870ef3ff ("i387:
+do not preload FPU state at task switch time").
+
+However, instead of simply reverting the removal, this reimplements
+preloading with several fixes, most notably
+
+ - properly abstracted as a true FPU state switch, rather than as
+   open-coded save and restore with various hacks.
+
+   In particular, implementing it as a proper FPU state switch allows us
+   to optimize the CR0.TS flag accesses: there is no reason to set the
+   TS bit only to then almost immediately clear it again.  CR0 accesses
+   are quite slow and expensive, don't flip the bit back and forth for
+   no good reason.
+
+ - Make sure that the same model works for both x86-32 and x86-64, so
+   that there are no gratuitous differences between the two due to the
+   way they save and restore segment state differently due to
+   architectural differences that really don't matter to the FPU state.
+
+ - Avoid exposing the "preload" state to the context switch routines,
+   and in particular allow the concept of lazy state restore: if nothing
+   else has used the FPU in the meantime, and the process is still on
+   the same CPU, we can avoid restoring state from memory entirely, just
+   re-expose the state that is still in the FPU unit.
+
+   That optimized lazy restore isn't actually implemented here, but the
+   infrastructure is set up for it.  Of course, older CPU's that use
+   'fnsave' to save the state cannot take advantage of this, since the
+   state saving also trashes the state.
+
+In other words, there is now an actual _design_ to the FPU state saving,
+rather than just random historical baggage.  Hopefully it's easier to
+follow as a result.
+
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh at linuxfoundation.org>
+[bwh: Backported to 2.6.32:
+ - We don't have struct fpu
+ - We already have "x86, fpu, amd: Clear exceptions in AMD FXSAVE
+   workaround" which was applied later than this upstream; move that
+   fix from math_state_restore() into __math_state_restore()
+ - Adjust context]
+Signed-off-by: Ben Hutchings <ben at decadent.org.uk>
+---
+ arch/x86/include/asm/i387.h  | 110 ++++++++++++++++++++++++++++++++++++-------
+ arch/x86/kernel/process_32.c |   5 +-
+ arch/x86/kernel/process_64.c |   5 +-
+ arch/x86/kernel/traps.c      |  56 +++++++++++++---------
+ 4 files changed, 134 insertions(+), 42 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 99711b0..8408250 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -25,6 +25,7 @@ extern unsigned int sig_xstate_size;
+ extern void fpu_init(void);
+ extern void mxcsr_feature_mask_init(void);
+ extern int init_fpu(struct task_struct *child);
++extern void __math_state_restore(struct task_struct *);
+ extern void math_state_restore(void);
+ extern void init_thread_xstate(void);
+ extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);
+@@ -171,9 +172,10 @@ static inline void fxsave(struct task_struct *tsk)
+ #endif	/* CONFIG_X86_64 */
+ 
+ /*
+- * These must be called with preempt disabled
++ * These must be called with preempt disabled. Returns
++ * 'true' if the FPU state is still intact.
+  */
+-static inline void fpu_save_init(struct task_struct *tsk)
++static inline int fpu_save_init(struct task_struct *tsk)
+ {
+ 	if (task_thread_info(tsk)->status & TS_XSAVE) {
+ 		xsave(tsk);
+@@ -182,22 +184,33 @@ static inline void fpu_save_init(struct task_struct *tsk)
+ 		 * xsave header may indicate the init state of the FP.
+ 		 */
+ 		if (!(tsk->thread.xstate->xsave.xsave_hdr.xstate_bv & XSTATE_FP))
+-			return;
++			return 1;
+ 	} else if (use_fxsr()) {
+ 		fxsave(tsk);
+ 	} else {
+ 		asm volatile("fnsave %[fx]; fwait"
+ 			     : [fx] "=m" (tsk->thread.xstate->fsave));
+-		return;
++		return 0;
+ 	}
+ 
+-	if (unlikely(tsk->thread.xstate->fxsave.swd & X87_FSW_ES))
++	/*
++	 * If exceptions are pending, we need to clear them so
++	 * that we don't randomly get exceptions later.
++	 *
++	 * FIXME! Is this perhaps only true for the old-style
++	 * irq13 case? Maybe we could leave the x87 state
++	 * intact otherwise?
++	 */
++	if (unlikely(tsk->thread.xstate->fxsave.swd & X87_FSW_ES)) {
+ 		asm volatile("fnclex");
++		return 0;
++	}
++	return 1;
+ }
+ 
+-static inline void __save_init_fpu(struct task_struct *tsk)
++static inline int __save_init_fpu(struct task_struct *tsk)
+ {
+-	fpu_save_init(tsk);
++	return fpu_save_init(tsk);
+ }
+ 
+ static inline int restore_fpu_checking(struct task_struct *tsk)
+@@ -250,20 +263,79 @@ static inline void __thread_fpu_begin(struct task_struct *tsk)
+ }
+ 
+ /*
+- * Signal frame handlers...
++ * FPU state switching for scheduling.
++ *
++ * This is a two-stage process:
++ *
++ *  - switch_fpu_prepare() saves the old state and
++ *    sets the new state of the CR0.TS bit. This is
++ *    done within the context of the old process.
++ *
++ *  - switch_fpu_finish() restores the new state as
++ *    necessary.
+  */
+-extern int save_i387_xstate(void __user *buf);
+-extern int restore_i387_xstate(void __user *buf);
++typedef struct { int preload; } fpu_switch_t;
++
++/*
++ * FIXME! We could do a totally lazy restore, but we need to
++ * add a per-cpu "this was the task that last touched the FPU
++ * on this CPU" variable, and the task needs to have a "I last
++ * touched the FPU on this CPU" and check them.
++ *
++ * We don't do that yet, so "fpu_lazy_restore()" always returns
++ * false, but some day..
++ */
++#define fpu_lazy_restore(tsk) (0)
++#define fpu_lazy_state_intact(tsk) do { } while (0)
++
++static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct task_struct *new)
++{
++	fpu_switch_t fpu;
++
++	fpu.preload = tsk_used_math(new) && new->fpu_counter > 5;
++	if (__thread_has_fpu(old)) {
++		if (__save_init_fpu(old))
++			fpu_lazy_state_intact(old);
++		__thread_clear_has_fpu(old);
++		old->fpu_counter++;
++
++		/* Don't change CR0.TS if we just switch! */
++		if (fpu.preload) {
++			__thread_set_has_fpu(new);
++			prefetch(new->thread.xstate);
++		} else
++			stts();
++	} else {
++		old->fpu_counter = 0;
++		if (fpu.preload) {
++			if (fpu_lazy_restore(new))
++				fpu.preload = 0;
++			else
++				prefetch(new->thread.xstate);
++			__thread_fpu_begin(new);
++		}
++	}
++	return fpu;
++}
+ 
+-static inline void __unlazy_fpu(struct task_struct *tsk)
++/*
++ * By the time this gets called, we've already cleared CR0.TS and
++ * given the process the FPU if we are going to preload the FPU
++ * state - all we need to do is to conditionally restore the register
++ * state itself.
++ */
++static inline void switch_fpu_finish(struct task_struct *new, fpu_switch_t fpu)
+ {
+-	if (__thread_has_fpu(tsk)) {
+-		__save_init_fpu(tsk);
+-		__thread_fpu_end(tsk);
+-	} else
+-		tsk->fpu_counter = 0;
++	if (fpu.preload)
++		__math_state_restore(new);
+ }
+ 
++/*
++ * Signal frame handlers...
++ */
++extern int save_i387_xstate(void __user *buf);
++extern int restore_i387_xstate(void __user *buf);
++
+ static inline void __clear_fpu(struct task_struct *tsk)
+ {
+ 	if (__thread_has_fpu(tsk)) {
+@@ -420,7 +492,11 @@ static inline void save_init_fpu(struct task_struct *tsk)
+ static inline void unlazy_fpu(struct task_struct *tsk)
+ {
+ 	preempt_disable();
+-	__unlazy_fpu(tsk);
++	if (__thread_has_fpu(tsk)) {
++		__save_init_fpu(tsk);
++		__thread_fpu_end(tsk);
++	} else
++		tsk->fpu_counter = 0;
+ 	preempt_enable();
+ }
+ 
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index 4d5508f..14748c3 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -346,10 +346,11 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 				 *next = &next_p->thread;
+ 	int cpu = smp_processor_id();
+ 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
++	fpu_switch_t fpu;
+ 
+ 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
+ 
+-	__unlazy_fpu(prev_p);
++	fpu = switch_fpu_prepare(prev_p, next_p);
+ 
+ 	/*
+ 	 * Reload esp0.
+@@ -404,6 +405,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	if (prev->gs | next->gs)
+ 		lazy_load_gs(next->gs);
+ 
++	switch_fpu_finish(next_p, fpu);
++
+ 	percpu_write(current_task, next_p);
+ 
+ 	return prev_p;
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index d8040a6..53e42f5 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -381,8 +381,9 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 	int cpu = smp_processor_id();
+ 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+ 	unsigned fsindex, gsindex;
++	fpu_switch_t fpu;
+ 
+-	__unlazy_fpu(prev_p);
++	fpu = switch_fpu_prepare(prev_p, next_p);
+ 
+ 	/*
+ 	 * Reload esp0, LDT and the page table pointer:
+@@ -452,6 +453,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 		wrmsrl(MSR_KERNEL_GS_BASE, next->gs);
+ 	prev->gsindex = gsindex;
+ 
++	switch_fpu_finish(next_p, fpu);
++
+ 	/*
+ 	 * Switch the PDA and FPU contexts.
+ 	 */
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 83642b2..a31c42e 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -833,6 +833,38 @@ asmlinkage void __attribute__((weak)) smp_threshold_interrupt(void)
+ }
+ 
+ /*
++ * This gets called with the process already owning the
++ * FPU state, and with CR0.TS cleared. It just needs to
++ * restore the FPU register state.
++ */
++void __math_state_restore(struct task_struct *tsk)
++{
++	/* We need a safe address that is cheap to find and that is already
++	   in L1. We've just brought in "tsk->thread.has_fpu", so use that */
++#define safe_address (tsk->thread.has_fpu)
++
++	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
++	   is pending.  Clear the x87 state here by setting it to fixed
++	   values. safe_address is a random variable that should be in L1 */
++	if (unlikely(boot_cpu_has(X86_FEATURE_FXSAVE_LEAK))) {
++		asm volatile(
++			"fnclex\n\t"
++			"emms\n\t"
++			"fildl %P[addr]"	/* set F?P to defined value */
++			: : [addr] "m" (safe_address));
++	}
++
++	/*
++	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
++	 */
++	if (unlikely(restore_fpu_checking(tsk))) {
++		__thread_fpu_end(tsk);
++		force_sig(SIGSEGV, tsk);
++		return;
++	}
++}
++
++/*
+  * 'math_state_restore()' saves the current math information in the
+  * old math state array, and gets the new ones from the current task
+  *
+@@ -846,10 +878,6 @@ void math_state_restore(void)
+ {
+ 	struct task_struct *tsk = current;
+ 
+-	/* We need a safe address that is cheap to find and that is already
+-	   in L1. We're just bringing in "tsk->thread.has_fpu", so use that */
+-#define safe_address (tsk->thread.has_fpu)
+-
+ 	if (!tsk_used_math(tsk)) {
+ 		local_irq_enable();
+ 		/*
+@@ -867,25 +895,7 @@ void math_state_restore(void)
+ 
+ 	__thread_fpu_begin(tsk);
+ 
+-	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
+-	   is pending.  Clear the x87 state here by setting it to fixed
+-	   values. safe_address is a random variable that should be in L1 */
+-	if (unlikely(boot_cpu_has(X86_FEATURE_FXSAVE_LEAK))) {
+-		asm volatile(
+-			"fnclex\n\t"
+-			"emms\n\t"
+-			"fildl %P[addr]"	/* set F?P to defined value */
+-			: : [addr] "m" (safe_address));
+-	}
+-
+-	/*
+-	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
+-	 */
+-	if (unlikely(restore_fpu_checking(tsk))) {
+-		__thread_fpu_end(tsk);
+-		force_sig(SIGSEGV, tsk);
+-		return;
+-	}
++	__math_state_restore(tsk);
+ 
+ 	tsk->fpu_counter++;
+ }
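
The shape of the new interface deserves a moment on its own: switch_fpu_prepare() runs in the context of the outgoing task, saving its state and choosing the CR0.TS setting once, and switch_fpu_finish() runs after the incoming task is in place, touching the registers only if preloading was chosen. A schematic caller mirroring the process_32.c and process_64.c hunks (stub types and bodies, not the real implementations):

    struct task { int fpu_counter; };
    typedef struct { int preload; } fpu_switch_t;

    /* Stage 1: save the old state, decide whether to preload for 'new'. */
    static fpu_switch_t switch_fpu_prepare_sketch(struct task *old,
                                                  struct task *new)
    {
            fpu_switch_t fpu = { .preload = new->fpu_counter > 5 };
            (void)old;     /* real code: save old's registers, set CR0.TS */
            return fpu;
    }

    /* Stage 2: restore registers only if stage 1 chose to preload. */
    static void switch_fpu_finish_sketch(struct task *new, fpu_switch_t fpu)
    {
            if (fpu.preload)
                    new->fpu_counter++;  /* real code: __math_state_restore() */
    }

    static struct task *context_switch_sketch(struct task *prev,
                                              struct task *next)
    {
            fpu_switch_t fpu = switch_fpu_prepare_sketch(prev, next);
            /* ... reload stack pointer, TLS and segment registers ... */
            switch_fpu_finish_sketch(next, fpu);
            return prev;
    }

    int main(void)
    {
            struct task a = { 0 }, b = { 9 };
            return context_switch_sketch(&a, &b) == &a ? 0 : 1;
    }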

Added: dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-fpu-avoid-abi-change-for-addition-of-has_fpu-fla.patch
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-security/linux-2.6/debian/patches/bugfix/x86/x86-fpu-avoid-abi-change-for-addition-of-has_fpu-fla.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -0,0 +1,83 @@
+From: Ben Hutchings <ben at decadent.org.uk>
+Date: Fri, 30 Jan 2015 01:36:53 +0000
+Subject: x86, fpu: Avoid ABI change for addition of has_fpu flag
+Forwarded: not-needed
+
+Move it from struct thread_struct (which is embedded in struct
+task_struct) to the end of struct task_struct, and hide it from
+genksyms.
+---
+ arch/x86/include/asm/i387.h      | 6 +++---
+ arch/x86/include/asm/processor.h | 1 -
+ arch/x86/kernel/traps.c          | 4 ++--
+ include/linux/sched.h            | 3 +++
+ 4 files changed, 8 insertions(+), 6 deletions(-)
+
+diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
+index 8408250..84ebc56 100644
+--- a/arch/x86/include/asm/i387.h
++++ b/arch/x86/include/asm/i387.h
+@@ -228,19 +228,19 @@ static inline int restore_fpu_checking(struct task_struct *tsk)
+  */
+ static inline int __thread_has_fpu(struct task_struct *tsk)
+ {
+-	return tsk->thread.has_fpu;
++	return tsk->thread_has_fpu;
+ }
+ 
+ /* Must be paired with an 'stts' after! */
+ static inline void __thread_clear_has_fpu(struct task_struct *tsk)
+ {
+-	tsk->thread.has_fpu = 0;
++	tsk->thread_has_fpu = 0;
+ }
+ 
+ /* Must be paired with a 'clts' before! */
+ static inline void __thread_set_has_fpu(struct task_struct *tsk)
+ {
+-	tsk->thread.has_fpu = 1;
++	tsk->thread_has_fpu = 1;
+ }
+ 
+ /*
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 45bc73b..fa04dea 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -456,7 +456,6 @@ struct thread_struct {
+ 	unsigned long		error_code;
+ 	/* floating point and extended processor state */
+ 	union thread_xstate	*xstate;
+-	unsigned long		has_fpu;
+ #ifdef CONFIG_X86_32
+ 	/* Virtual 86 mode info */
+ 	struct vm86_struct __user *vm86_info;
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index a31c42e..1f2d259 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -840,8 +840,8 @@ asmlinkage void __attribute__((weak)) smp_threshold_interrupt(void)
+ void __math_state_restore(struct task_struct *tsk)
+ {
+ 	/* We need a safe address that is cheap to find and that is already
+-	   in L1. We've just brought in "tsk->thread.has_fpu", so use that */
+-#define safe_address (tsk->thread.has_fpu)
++	   in L1. We've just brought in "tsk->thread_has_fpu", so use that */
++#define safe_address (tsk->thread_has_fpu)
+ 
+ 	/* AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception
+ 	   is pending.  Clear the x87 state here by setting it to fixed
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 56e1771..68de3a0 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1599,6 +1599,9 @@ struct task_struct {
+ 	unsigned long trace_recursion;
+ #endif /* CONFIG_TRACING */
+ 	unsigned long stack_start;
++#ifndef __GENKSYMS__
++	unsigned long thread_has_fpu;
++#endif
+ };
+ 
+ /* Future-safe accessor for struct task_struct's cpus_allowed. */
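
The hiding trick in the final hunk is a stock Debian ABI-maintenance device: genksyms derives symbol-version checksums from the structure definition it parses, so a member guarded by #ifndef __GENKSYMS__ leaves every checksum (and therefore module load checks) untouched, while normal builds, which never define __GENKSYMS__, still get the field. It is safe only because the field sits at the very end of task_struct, a structure that out-of-tree code obtains from the kernel rather than allocating by value. Schematically, with a made-up struct:

    /* The versioned ABI as genksyms sees it: only member 'a'. */
    struct example {
            long a;
    #ifndef __GENKSYMS__
            long added_later;  /* invisible to genksyms, so checksums are
                                  unchanged; present in the real build */
    #endif
    };

    int main(void)
    {
            struct example e = { .a = 1, .added_later = 2 };
            return (e.a + e.added_later) == 3 ? 0 : 1;
    }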

Modified: dists/squeeze-security/linux-2.6/debian/patches/features/all/openvz/openvz.patch
==============================================================================
--- dists/squeeze-security/linux-2.6/debian/patches/features/all/openvz/openvz.patch	Fri Jan 30 01:17:17 2015	(r22307)
+++ dists/squeeze-security/linux-2.6/debian/patches/features/all/openvz/openvz.patch	Fri Jan 30 04:50:25 2015	(r22308)
@@ -6551,6 +6551,7 @@
  wrapper introduction in 910ffdb18a6408e14febbb6e4b6840fd2c928c82]
 [bwh: Fix context for changes to ip_send_reply() in fix for CVE-2012-3552]
 [dannf: Fix content to skb_header_size() after fix for CVE-2012-3552]
+[bwh: Fix context for changes to struct task_struct in 2.6.32-48squeeze11]
 
 --- /dev/null
 +++ b/COPYING.Parallels
@@ -34674,9 +34675,9 @@
  	 */
  	struct pipe_inode_info *splice_pipe;
 @@ -1595,6 +1634,19 @@ struct task_struct {
- 	unsigned long trace_recursion;
- #endif /* CONFIG_TRACING */
- 	unsigned long stack_start;
+ #ifndef __GENKSYMS__
+ 	unsigned long thread_has_fpu;
+ #endif
 +#ifdef CONFIG_BEANCOUNTERS
 +	struct task_beancounter task_bc;
 +#endif

Modified: dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze11
==============================================================================
--- dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze11	Fri Jan 30 01:17:17 2015	(r22307)
+++ dists/squeeze-security/linux-2.6/debian/patches/series/48squeeze11	Fri Jan 30 04:50:25 2015	(r22308)
@@ -7,3 +7,25 @@
 + bugfix/x86/x86_64-vdso-fix-the-vdso-address-randomization-algor.patch
 + bugfix/all/splice-apply-generic-position-and-size-checks-to-eac.patch
 + bugfix/all/net-sctp-fix-slab-corruption-from-use-after-free-on-.patch
+
+# FPU/SSE fixes and refactoring needed to prepare for the next set
++ bugfix/x86/0001-x86-fpu-move-most-of-__save_init_fpu-into-fpu_save_i.patch
++ bugfix/x86/0002-x86-64-fpu-disable-preemption-when-using-ts_usedfpu.patch
++ bugfix/x86/0003-x86-32-fpu-rewrite-fpu_save_init.patch
++ bugfix/x86/0004-x86-fpu-merge-fpu_save_init.patch
++ bugfix/x86/0005-x86-32-fpu-fix-fpu-exception-handling-on-non-sse-sys.patch
+
+# FPU/SSE fixes from Linux 3.3 fix possible data loss and are needed
+# before the following security fix
++ bugfix/x86/0006-i387-math_state_restore-isn-t-called-from-asm.patch
++ bugfix/x86/0007-i387-make-irq_fpu_usable-tests-more-robust.patch
++ bugfix/x86/0008-i387-fix-sense-of-sanity-check.patch
++ bugfix/x86/0009-i387-fix-x86-64-preemption-unsafe-user-stack-save-re.patch
++ bugfix/x86/0010-i387-move-ts_usedfpu-clearing-out-of-__save_init_fpu.patch
++ bugfix/x86/0011-i387-don-t-ever-touch-ts_usedfpu-directly-use-helper.patch
++ bugfix/x86/0012-i387-do-not-preload-fpu-state-at-task-switch-time.patch
++ bugfix/x86/0013-i387-move-amd-k7-k8-fpu-fxsave-fxrstor-workaround-fr.patch
++ bugfix/x86/0014-i387-move-ts_usedfpu-flag-from-thread_info-to-task_s.patch
++ bugfix/x86/0015-i387-re-introduce-fpu-state-preloading-at-context-sw.patch
+
++ bugfix/x86/x86-fpu-avoid-abi-change-for-addition-of-has_fpu-fla.patch


