[kernel] r19325 - in dists/squeeze-backports/linux: . debian debian/abi/3.2.0-0.bpo.2 debian/bin debian/config debian/config/armel debian/config/kernelarch-x86 debian/config/powerpc debian/installer debian/lib/python/debian_linux debian/patches debian/patches/bugfix/all debian/patches/debian debian/patches/features/all debian/patches/features/all/cpu-devices debian/patches/features/all/rt debian/patches/features/all/wacom debian/patches/features/arm debian/templates

Ben Hutchings <benh@alioth.debian.org>
Fri Aug 17 02:05:03 UTC 2012


Author: benh
Date: Fri Aug 17 02:04:57 2012
New Revision: 19325

Log:
Merge changes from sid up to 3.2.21-3

Do not include kernel-wedge config changes.
Do not include ABI files for 3.2.0-3.
Bump our ABI to 0.bpo.3 and delete ABI files for 3.2.0-0.bpo.2.

Added:
   dists/squeeze-backports/linux/debian/patches/bugfix/all/apparmor-remove-advertising-the-support-of-network-r.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/bugfix/all/apparmor-remove-advertising-the-support-of-network-r.patch
   dists/squeeze-backports/linux/debian/patches/bugfix/all/ethtool-allow-ETHTOOL_GSSET_INFO-for-users.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/bugfix/all/ethtool-allow-ETHTOOL_GSSET_INFO-for-users.patch
   dists/squeeze-backports/linux/debian/patches/bugfix/all/xen-netfront-teardown-the-device-before-unregistering-it.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/bugfix/all/xen-netfront-teardown-the-device-before-unregistering-it.patch
   dists/squeeze-backports/linux/debian/patches/debian/driver-core-avoid-ABI-change-for-removal-of-__must_check.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/debian/driver-core-avoid-ABI-change-for-removal-of-__must_check.patch
   dists/squeeze-backports/linux/debian/patches/features/all/Input-add-Synaptics-USB-device-driver.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/Input-add-Synaptics-USB-device-driver.patch
   dists/squeeze-backports/linux/debian/patches/features/all/cpu-devices/
      - copied from r19226, dists/sid/linux/debian/patches/features/all/cpu-devices/
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0001-Revert-workqueue-skip-nr_running-sanity-check-in-wor.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0001-Revert-workqueue-skip-nr_running-sanity-check-in-wor.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0002-x86-Call-idle-notifier-after-irq_enter.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0002-x86-Call-idle-notifier-after-irq_enter.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0003-slab-lockdep-Annotate-all-slab-caches.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0003-slab-lockdep-Annotate-all-slab-caches.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0004-x86-kprobes-Remove-remove-bogus-preempt_enable.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0004-x86-kprobes-Remove-remove-bogus-preempt_enable.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0005-x86-hpet-Disable-MSI-on-Lenovo-W510.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0005-x86-hpet-Disable-MSI-on-Lenovo-W510.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0006-block-Shorten-interrupt-disabled-regions.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0006-block-Shorten-interrupt-disabled-regions.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0007-sched-Distangle-worker-accounting-from-rq-3Elock.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0007-sched-Distangle-worker-accounting-from-rq-3Elock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0008-mips-enable-interrupts-in-signal.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0008-mips-enable-interrupts-in-signal.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0009-arm-enable-interrupts-in-signal-code.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0009-arm-enable-interrupts-in-signal-code.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0010-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0010-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0011-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0011-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0012-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0012-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0013-powerpc-Allow-irq-threading.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0013-powerpc-Allow-irq-threading.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0014-sched-Keep-period-timer-ticking-when-throttling-acti.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0014-sched-Keep-period-timer-ticking-when-throttling-acti.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0015-sched-Do-not-throttle-due-to-PI-boosting.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0015-sched-Do-not-throttle-due-to-PI-boosting.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0016-time-Remove-bogus-comments.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0016-time-Remove-bogus-comments.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0017-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0017-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0018-x86-vdso-Use-seqcount-instead-of-seqlock.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0018-x86-vdso-Use-seqcount-instead-of-seqlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0019-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0019-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0020-seqlock-Remove-unused-functions.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0020-seqlock-Remove-unused-functions.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0021-seqlock-Use-seqcount.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0021-seqlock-Use-seqcount.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0022-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0022-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0023-timekeeping-Split-xtime_lock.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0023-timekeeping-Split-xtime_lock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0024-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0024-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0025-mm-memcg-shorten-preempt-disabled-section-around-eve.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0025-mm-memcg-shorten-preempt-disabled-section-around-eve.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0026-tracing-Account-for-preempt-off-in-preempt_schedule.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0026-tracing-Account-for-preempt-off-in-preempt_schedule.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0027-signal-revert-ptrace-preempt-magic.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0027-signal-revert-ptrace-preempt-magic.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0028-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0028-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0029-arm-Allow-forced-irq-threading.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0029-arm-Allow-forced-irq-threading.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0030-preempt-rt-Convert-arm-boot_lock-to-raw.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0030-preempt-rt-Convert-arm-boot_lock-to-raw.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0031-sched-Create-schedule_preempt_disabled.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0031-sched-Create-schedule_preempt_disabled.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0032-sched-Use-schedule_preempt_disabled.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0032-sched-Use-schedule_preempt_disabled.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0033-signals-Do-not-wakeup-self.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0033-signals-Do-not-wakeup-self.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0034-posix-timers-Prevent-broadcast-signals.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0034-posix-timers-Prevent-broadcast-signals.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0035-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0035-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0036-signal-x86-Delay-calling-signals-in-atomic.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0036-signal-x86-Delay-calling-signals-in-atomic.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0037-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0037-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0038-drivers-random-Reduce-preempt-disabled-region.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0038-drivers-random-Reduce-preempt-disabled-region.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0039-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0039-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0040-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0040-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0041-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0041-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0042-drivers-net-Use-disable_irq_nosync-in-8139too.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0042-drivers-net-Use-disable_irq_nosync-in-8139too.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0043-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0043-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0044-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0044-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0045-preempt-mark-legitimated-no-resched-sites.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0045-preempt-mark-legitimated-no-resched-sites.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0046-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0046-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0047-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0047-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0048-mm-pagefault_disabled.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0048-mm-pagefault_disabled.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0049-mm-raw_pagefault_disable.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0049-mm-raw_pagefault_disable.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0050-filemap-fix-up.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0050-filemap-fix-up.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0051-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0051-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0052-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0052-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0053-suspend-Prevent-might-sleep-splats.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0053-suspend-Prevent-might-sleep-splats.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0054-OF-Fixup-resursive-locking-code-paths.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0054-OF-Fixup-resursive-locking-code-paths.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0055-of-convert-devtree-lock.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0055-of-convert-devtree-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0056-list-add-list-last-entry.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0056-list-add-list-last-entry.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0057-mm-page-alloc-use-list-last-entry.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0057-mm-page-alloc-use-list-last-entry.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0058-mm-slab-move-debug-out.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0058-mm-slab-move-debug-out.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0059-rwsem-inlcude-fix.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0059-rwsem-inlcude-fix.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0060-sysctl-include-fix.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0060-sysctl-include-fix.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0061-net-flip-lock-dep-thingy.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0061-net-flip-lock-dep-thingy.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0062-softirq-thread-do-softirq.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0062-softirq-thread-do-softirq.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0063-softirq-split-out-code.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0063-softirq-split-out-code.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0064-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0064-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0065-x86-32-fix-signal-crap.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0065-x86-32-fix-signal-crap.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0066-x86-Do-not-disable-preemption-in-int3-on-32bit.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0066-x86-Do-not-disable-preemption-in-int3-on-32bit.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0067-rcu-Reduce-lock-section.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0067-rcu-Reduce-lock-section.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0068-locking-various-init-fixes.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0068-locking-various-init-fixes.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0069-wait-Provide-__wake_up_all_locked.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0069-wait-Provide-__wake_up_all_locked.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0070-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0070-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0071-latency-hist.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0071-latency-hist.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0072-hwlatdetect.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0072-hwlatdetect.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0073-localversion.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0073-localversion.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0074-early-printk-consolidate.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0074-early-printk-consolidate.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0075-printk-kill.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0075-printk-kill.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0076-printk-force_early_printk-boot-param-to-help-with-de.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0076-printk-force_early_printk-boot-param-to-help-with-de.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0077-rt-preempt-base-config.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0077-rt-preempt-base-config.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0078-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0078-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0079-rt-local_irq_-variants-depending-on-RT-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0079-rt-local_irq_-variants-depending-on-RT-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0080-preempt-Provide-preempt_-_-no-rt-variants.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0080-preempt-Provide-preempt_-_-no-rt-variants.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0081-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0081-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0082-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0082-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0083-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0083-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0084-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0084-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0085-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0085-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0086-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0086-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0087-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0087-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0088-usb-Use-local_irq_-_nort-variants.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0088-usb-Use-local_irq_-_nort-variants.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0089-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0089-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0090-mm-scatterlist-dont-disable-irqs-on-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0090-mm-scatterlist-dont-disable-irqs-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0091-signal-fix-up-rcu-wreckage.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0091-signal-fix-up-rcu-wreckage.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0092-net-wireless-warn-nort.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0092-net-wireless-warn-nort.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0093-mm-Replace-cgroup_page-bit-spinlock.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0093-mm-Replace-cgroup_page-bit-spinlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0094-buffer_head-Replace-bh_uptodate_lock-for-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0094-buffer_head-Replace-bh_uptodate_lock-for-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0095-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0095-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0096-genirq-Disable-DEBUG_SHIRQ-for-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0096-genirq-Disable-DEBUG_SHIRQ-for-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0097-genirq-Disable-random-call-on-preempt-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0097-genirq-Disable-random-call-on-preempt-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0098-genirq-disable-irqpoll-on-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0098-genirq-disable-irqpoll-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0099-genirq-force-threading.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0099-genirq-force-threading.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0100-drivers-net-fix-livelock-issues.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0100-drivers-net-fix-livelock-issues.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0101-drivers-net-vortex-fix-locking-issues.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0101-drivers-net-vortex-fix-locking-issues.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0102-drivers-net-gianfar-Make-RT-aware.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0102-drivers-net-gianfar-Make-RT-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0103-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0103-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0104-local-var.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0104-local-var.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0105-rt-local-irq-lock.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0105-rt-local-irq-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0106-cpu-rt-variants.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0106-cpu-rt-variants.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0107-mm-slab-wrap-functions.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0107-mm-slab-wrap-functions.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0108-slab-Fix-__do_drain-to-use-the-right-array-cache.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0108-slab-Fix-__do_drain-to-use-the-right-array-cache.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0109-mm-More-lock-breaks-in-slab.c.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0109-mm-More-lock-breaks-in-slab.c.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0110-mm-page_alloc-rt-friendly-per-cpu-pages.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0110-mm-page_alloc-rt-friendly-per-cpu-pages.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0111-mm-page_alloc-reduce-lock-sections-further.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0111-mm-page_alloc-reduce-lock-sections-further.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0112-mm-page-alloc-fix.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0112-mm-page-alloc-fix.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0113-mm-convert-swap-to-percpu-locked.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0113-mm-convert-swap-to-percpu-locked.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0114-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0114-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0115-mm-make-vmstat-rt-aware.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0115-mm-make-vmstat-rt-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0116-mm-shrink-the-page-frame-to-rt-size.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0116-mm-shrink-the-page-frame-to-rt-size.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0117-ARM-Initialize-ptl-lock-for-vector-page.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0117-ARM-Initialize-ptl-lock-for-vector-page.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0118-mm-Allow-only-slab-on-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0118-mm-Allow-only-slab-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0119-radix-tree-rt-aware.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0119-radix-tree-rt-aware.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0120-panic-disable-random-on-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0120-panic-disable-random-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0121-ipc-Make-the-ipc-code-rt-aware.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0121-ipc-Make-the-ipc-code-rt-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0122-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0122-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0123-relay-fix-timer-madness.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0123-relay-fix-timer-madness.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0124-net-ipv4-route-use-locks-on-up-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0124-net-ipv4-route-use-locks-on-up-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0125-workqueue-avoid-the-lock-in-cpu-dying.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0125-workqueue-avoid-the-lock-in-cpu-dying.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0126-timers-prepare-for-full-preemption.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0126-timers-prepare-for-full-preemption.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0127-timers-preempt-rt-support.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0127-timers-preempt-rt-support.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0128-timers-fix-timer-hotplug-on-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0128-timers-fix-timer-hotplug-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0129-timers-mov-printk_tick-to-soft-interrupt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0129-timers-mov-printk_tick-to-soft-interrupt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0130-timer-delay-waking-softirqs-from-the-jiffy-tick.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0130-timer-delay-waking-softirqs-from-the-jiffy-tick.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0131-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0131-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0132-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0132-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0133-hrtimers-prepare-full-preemption.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0133-hrtimers-prepare-full-preemption.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0134-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0134-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0135-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0135-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0136-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0136-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0137-hrtimer-fix-reprogram-madness.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0137-hrtimer-fix-reprogram-madness.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0138-timer-fd-Prevent-live-lock.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0138-timer-fd-Prevent-live-lock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0139-posix-timers-thread-posix-cpu-timers-on-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0139-posix-timers-thread-posix-cpu-timers-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0140-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0140-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0141-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0141-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0142-sched-delay-put-task.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0142-sched-delay-put-task.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0143-sched-limit-nr-migrate.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0143-sched-limit-nr-migrate.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0144-sched-mmdrop-delayed.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0144-sched-mmdrop-delayed.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0145-sched-rt-mutex-wakeup.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0145-sched-rt-mutex-wakeup.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0146-sched-prevent-idle-boost.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0146-sched-prevent-idle-boost.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0147-sched-might-sleep-do-not-account-rcu-depth.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0147-sched-might-sleep-do-not-account-rcu-depth.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0148-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0148-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0149-sched-cond-resched.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0149-sched-cond-resched.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0150-cond-resched-softirq-fix.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0150-cond-resched-softirq-fix.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0151-sched-no-work-when-pi-blocked.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0151-sched-no-work-when-pi-blocked.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0152-cond-resched-lock-rt-tweak.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0152-cond-resched-lock-rt-tweak.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0153-sched-disable-ttwu-queue.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0153-sched-disable-ttwu-queue.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0154-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0154-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0155-sched-ttwu-Return-success-when-only-changing-the-sav.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0155-sched-ttwu-Return-success-when-only-changing-the-sav.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0156-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0156-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0157-stomp-machine-mark-stomper-thread.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0157-stomp-machine-mark-stomper-thread.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0158-stomp-machine-raw-lock.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0158-stomp-machine-raw-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0159-hotplug-Lightweight-get-online-cpus.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0159-hotplug-Lightweight-get-online-cpus.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0160-hotplug-sync_unplug-No.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0160-hotplug-sync_unplug-No.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0161-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0161-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0162-sched-migrate-disable.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0162-sched-migrate-disable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0163-hotplug-use-migrate-disable.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0163-hotplug-use-migrate-disable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0164-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0164-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0165-ftrace-migrate-disable-tracing.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0165-ftrace-migrate-disable-tracing.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0166-tracing-Show-padding-as-unsigned-short.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0166-tracing-Show-padding-as-unsigned-short.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0167-migrate-disable-rt-variant.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0167-migrate-disable-rt-variant.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0168-sched-Optimize-migrate_disable.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0168-sched-Optimize-migrate_disable.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0169-sched-Generic-migrate_disable.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0169-sched-Generic-migrate_disable.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0170-sched-rt-Fix-migrate_enable-thinko.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0170-sched-rt-Fix-migrate_enable-thinko.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0171-sched-teach-migrate_disable-about-atomic-contexts.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0171-sched-teach-migrate_disable-about-atomic-contexts.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0172-sched-Postpone-actual-migration-disalbe-to-schedule.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0172-sched-Postpone-actual-migration-disalbe-to-schedule.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0173-sched-Do-not-compare-cpu-masks-in-scheduler.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0173-sched-Do-not-compare-cpu-masks-in-scheduler.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0174-sched-Have-migrate_disable-ignore-bounded-threads.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0174-sched-Have-migrate_disable-ignore-bounded-threads.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0175-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0175-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0176-ftrace-crap.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0176-ftrace-crap.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0177-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0177-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0178-net-netif_rx_ni-migrate-disable.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0178-net-netif_rx_ni-migrate-disable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0179-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0179-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0180-lockdep-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0180-lockdep-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0181-mutex-no-spin-on-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0181-mutex-no-spin-on-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0182-softirq-local-lock.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0182-softirq-local-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0183-softirq-Export-in_serving_softirq.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0183-softirq-Export-in_serving_softirq.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0184-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0184-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0185-softirq-Fix-unplug-deadlock.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0185-softirq-Fix-unplug-deadlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0186-softirq-disable-softirq-stacks-for-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0186-softirq-disable-softirq-stacks-for-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0187-softirq-make-fifo.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0187-softirq-make-fifo.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0188-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0188-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0189-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0189-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0190-local-vars-migrate-disable.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0190-local-vars-migrate-disable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0191-md-raid5-Make-raid5_percpu-handling-RT-aware.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0191-md-raid5-Make-raid5_percpu-handling-RT-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0192-rtmutex-lock-killable.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0192-rtmutex-lock-killable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0193-rtmutex-futex-prepare-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0193-rtmutex-futex-prepare-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0194-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0194-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0195-rt-mutex-add-sleeping-spinlocks-support.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0195-rt-mutex-add-sleeping-spinlocks-support.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0196-spinlock-types-separate-raw.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0196-spinlock-types-separate-raw.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0197-rtmutex-avoid-include-hell.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0197-rtmutex-avoid-include-hell.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0198-rt-add-rt-spinlocks.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0198-rt-add-rt-spinlocks.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0199-rt-add-rt-to-mutex-headers.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0199-rt-add-rt-to-mutex-headers.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0200-rwsem-add-rt-variant.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0200-rwsem-add-rt-variant.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0201-rt-Add-the-preempt-rt-lock-replacement-APIs.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0201-rt-Add-the-preempt-rt-lock-replacement-APIs.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0202-rwlocks-Fix-section-mismatch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0202-rwlocks-Fix-section-mismatch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0203-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0203-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0204-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0204-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0205-rcu-Frob-softirq-test.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0205-rcu-Frob-softirq-test.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0206-rcu-Merge-RCU-bh-into-RCU-preempt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0206-rcu-Merge-RCU-bh-into-RCU-preempt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0207-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0207-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0208-rcu-more-fallout.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0208-rcu-more-fallout.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0209-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0209-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0210-rt-rcutree-Move-misplaced-prototype.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0210-rt-rcutree-Move-misplaced-prototype.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0211-lglocks-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0211-lglocks-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0212-serial-8250-Clean-up-the-locking-for-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0212-serial-8250-Clean-up-the-locking-for-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0213-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0213-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0214-drivers-tty-fix-omap-lock-crap.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0214-drivers-tty-fix-omap-lock-crap.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0215-rt-Improve-the-serial-console-PASS_LIMIT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0215-rt-Improve-the-serial-console-PASS_LIMIT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0216-fs-namespace-preemption-fix.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0216-fs-namespace-preemption-fix.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0217-mm-protect-activate-switch-mm.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0217-mm-protect-activate-switch-mm.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0218-fs-block-rt-support.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0218-fs-block-rt-support.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0219-fs-ntfs-disable-interrupt-only-on-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0219-fs-ntfs-disable-interrupt-only-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0220-x86-Convert-mce-timer-to-hrtimer.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0220-x86-Convert-mce-timer-to-hrtimer.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0221-x86-stackprotector-Avoid-random-pool-on-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0221-x86-stackprotector-Avoid-random-pool-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0222-x86-Use-generic-rwsem_spinlocks-on-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0222-x86-Use-generic-rwsem_spinlocks-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0223-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0223-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0224-workqueue-use-get-cpu-light.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0224-workqueue-use-get-cpu-light.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0225-epoll.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0225-epoll.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0226-mm-vmalloc.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0226-mm-vmalloc.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0227-workqueue-Fix-cpuhotplug-trainwreck.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0227-workqueue-Fix-cpuhotplug-trainwreck.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0228-workqueue-Fix-PF_THREAD_BOUND-abuse.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0228-workqueue-Fix-PF_THREAD_BOUND-abuse.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0229-workqueue-Use-get_cpu_light-in-flush_gcwq.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0229-workqueue-Use-get_cpu_light-in-flush_gcwq.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0230-hotplug-stuff.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0230-hotplug-stuff.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0231-debugobjects-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0231-debugobjects-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0232-jump-label-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0232-jump-label-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0233-skbufhead-raw-lock.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0233-skbufhead-raw-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0234-x86-no-perf-irq-work-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0234-x86-no-perf-irq-work-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0235-console-make-rt-friendly.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0235-console-make-rt-friendly.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0236-printk-Disable-migration-instead-of-preemption.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0236-printk-Disable-migration-instead-of-preemption.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0237-power-use-generic-rwsem-on-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0237-power-use-generic-rwsem-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0238-power-disable-highmem-on-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0238-power-disable-highmem-on-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0239-arm-disable-highmem-on-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0239-arm-disable-highmem-on-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0240-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0240-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0241-mips-disable-highmem-on-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0241-mips-disable-highmem-on-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0242-net-Avoid-livelock-in-net_tx_action-on-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0242-net-Avoid-livelock-in-net_tx_action-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0243-ping-sysrq.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0243-ping-sysrq.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0244-kgdb-serial-Short-term-workaround.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0244-kgdb-serial-Short-term-workaround.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0245-add-sys-kernel-realtime-entry.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0245-add-sys-kernel-realtime-entry.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0246-mm-rt-kmap_atomic-scheduling.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0246-mm-rt-kmap_atomic-scheduling.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0247-ipc-sem-Rework-semaphore-wakeups.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0247-ipc-sem-Rework-semaphore-wakeups.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0248-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0248-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0249-x86-kvm-require-const-tsc-for-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0249-x86-kvm-require-const-tsc-for-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0250-scsi-fcoe-rt-aware.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0250-scsi-fcoe-rt-aware.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0251-x86-crypto-Reduce-preempt-disabled-regions.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0251-x86-crypto-Reduce-preempt-disabled-regions.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0252-dm-Make-rt-aware.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0252-dm-Make-rt-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0253-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0253-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0254-seqlock-Prevent-rt-starvation.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0254-seqlock-Prevent-rt-starvation.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0255-timer-Fix-hotplug-for-rt.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0255-timer-Fix-hotplug-for-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0256-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0256-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0257-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0257-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0258-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0258-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0259-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0259-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0260-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0260-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0261-softirq-Check-preemption-after-reenabling-interrupts.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0261-softirq-Check-preemption-after-reenabling-interrupts.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0262-rt-Introduce-cpu_chill.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0262-rt-Introduce-cpu_chill.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0263-fs-dcache-Use-cpu_chill-in-trylock-loops.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0263-fs-dcache-Use-cpu_chill-in-trylock-loops.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0264-net-Use-cpu_chill-instead-of-cpu_relax.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0264-net-Use-cpu_chill-instead-of-cpu_relax.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0265-kconfig-disable-a-few-options-rt.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0265-kconfig-disable-a-few-options-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0266-kconfig-preempt-rt-full.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0266-kconfig-preempt-rt-full.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0267-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0267-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0268-scsi-qla2xxx-Use-local_irq_save_nort-in-qla2x00_poll.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0268-scsi-qla2xxx-Use-local_irq_save_nort-in-qla2x00_poll.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0269-net-RT-REmove-preemption-disabling-in-netif_rx.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0269-net-RT-REmove-preemption-disabling-in-netif_rx.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0270-mips-remove-smp-reserve-lock.patch.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0270-mips-remove-smp-reserve-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0271-Linux-3.2.20-rt32-REBASE.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/rt/0271-Linux-3.2.20-rt32-REBASE.patch
   dists/squeeze-backports/linux/debian/patches/features/all/wacom/0027-wacom-do-not-crash-when-retrieving-touch_max.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/wacom/0027-wacom-do-not-crash-when-retrieving-touch_max.patch
   dists/squeeze-backports/linux/debian/patches/features/all/wacom/0028-wacom-leave-touch_max-as-is-if-predefined.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/wacom/0028-wacom-leave-touch_max-as-is-if-predefined.patch
   dists/squeeze-backports/linux/debian/patches/features/all/wacom/0029-wacom-do-not-request-tablet-data-on-MT-Tablet-PC-pen.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/wacom/0029-wacom-do-not-request-tablet-data-on-MT-Tablet-PC-pen.patch
   dists/squeeze-backports/linux/debian/patches/features/all/wacom/0030-wacom-ignore-new-style-Wacom-multi-touch-packets-on-.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/all/wacom/0030-wacom-ignore-new-style-Wacom-multi-touch-packets-on-.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/ARM-7259-3-net-JIT-compiler-for-packet-filters.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/ARM-7259-3-net-JIT-compiler-for-packet-filters.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/ARM-fix-Kconfig-warning-for-HAVE_BPF_JIT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/ARM-fix-Kconfig-warning-for-HAVE_BPF_JIT.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-add-configuration-for-mpp12-as-gpio.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-add-configuration-for-mpp12-as-gpio.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-add-dreamplug-fdt-support.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-add-dreamplug-fdt-support.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-add-iconnect-support.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-add-iconnect-support.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-create-a-generic-function-for-gpio-led-blinking.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-create-a-generic-function-for-gpio-led-blinking.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-absorb-kirkwood_init.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-absorb-kirkwood_init.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-convert-uart0-to-devicetree.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-convert-uart0-to-devicetree.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-define-uart01-as-disabled.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-define-uart01-as-disabled.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-facilitate-new-boards-during-fdt-migration.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-facilitate-new-boards-during-fdt-migration.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-use-mrvl-ticker-symbol.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-use-mrvl-ticker-symbol.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fix-orion_gpio_set_blink.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fix-orion_gpio_set_blink.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-rtc-mv-devicetree-bindings.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-rtc-mv-devicetree-bindings.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-use-devicetree-for-rtc-mv.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-use-devicetree-for-rtc-mv.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood_add_missing_kexec_h.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood_add_missing_kexec_h.patch
   dists/squeeze-backports/linux/debian/patches/features/arm/net-drop-NET-dependency-from-HAVE_BPF_JIT.patch
      - copied unchanged from r19226, dists/sid/linux/debian/patches/features/arm/net-drop-NET-dependency-from-HAVE_BPF_JIT.patch
Deleted:
   dists/squeeze-backports/linux/debian/abi/3.2.0-0.bpo.2/
   dists/squeeze-backports/linux/debian/patches/bugfix/all/cpu-Do-not-return-errors-from-cpu_dev_init-which-wil.patch
   dists/squeeze-backports/linux/debian/patches/bugfix/all/cpu-Register-a-generic-CPU-device-on-architectures-t.patch
   dists/squeeze-backports/linux/debian/patches/debian/avoid-ABI-change-for-hidepid.patch
   dists/squeeze-backports/linux/debian/patches/debian/efi-avoid-ABI-change.patch
   dists/squeeze-backports/linux/debian/patches/debian/fork-avoid-ABI-change-in-3.2.18.patch
   dists/squeeze-backports/linux/debian/patches/debian/mmc-Avoid-ABI-change-in-3.2.19.patch
   dists/squeeze-backports/linux/debian/patches/debian/net-restore-skb_set_dev-removed-in-3.2.20.patch
   dists/squeeze-backports/linux/debian/patches/debian/nls-Avoid-ABI-change-from-improvement-to-utf8s_to_ut.patch
   dists/squeeze-backports/linux/debian/patches/debian/revert-rtc-Provide-flag-for-rtc-devices-that-don-t-s.patch
   dists/squeeze-backports/linux/debian/patches/debian/skbuff-avoid-ABI-change-in-3.2.17.patch
   dists/squeeze-backports/linux/debian/patches/debian/usb-hcd-avoid-ABI-change-in-3.2.17.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0001-x86-Call-idle-notifier-after-irq_enter.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0002-slab-lockdep-Annotate-all-slab-caches.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0003-x86-kprobes-Remove-remove-bogus-preempt_enable.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0004-x86-hpet-Disable-MSI-on-Lenovo-W510.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0005-block-Shorten-interrupt-disabled-regions.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0006-sched-Distangle-worker-accounting-from-rq-3Elock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0007-mips-enable-interrupts-in-signal.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0008-arm-enable-interrupts-in-signal-code.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0009-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0010-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0011-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0012-powerpc-Allow-irq-threading.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0013-sched-Keep-period-timer-ticking-when-throttling-acti.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0014-sched-Do-not-throttle-due-to-PI-boosting.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0015-time-Remove-bogus-comments.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0016-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0017-x86-vdso-Use-seqcount-instead-of-seqlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0018-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0019-seqlock-Remove-unused-functions.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0020-seqlock-Use-seqcount.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0021-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0022-timekeeping-Split-xtime_lock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0023-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0024-mm-memcg-shorten-preempt-disabled-section-around-eve.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0025-tracing-Account-for-preempt-off-in-preempt_schedule.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0026-signal-revert-ptrace-preempt-magic.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0027-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0028-arm-Allow-forced-irq-threading.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0029-preempt-rt-Convert-arm-boot_lock-to-raw.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0030-sched-Create-schedule_preempt_disabled.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0031-sched-Use-schedule_preempt_disabled.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0032-signals-Do-not-wakeup-self.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0033-posix-timers-Prevent-broadcast-signals.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0034-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0035-signal-x86-Delay-calling-signals-in-atomic.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0036-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0037-drivers-random-Reduce-preempt-disabled-region.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0038-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0039-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0040-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0041-drivers-net-Use-disable_irq_nosync-in-8139too.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0042-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0043-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0044-preempt-mark-legitimated-no-resched-sites.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0045-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0046-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0047-mm-pagefault_disabled.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0048-mm-raw_pagefault_disable.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0049-filemap-fix-up.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0050-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0051-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0052-suspend-Prevent-might-sleep-splats.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0053-OF-Fixup-resursive-locking-code-paths.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0054-of-convert-devtree-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0055-list-add-list-last-entry.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0056-mm-page-alloc-use-list-last-entry.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0057-mm-slab-move-debug-out.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0058-rwsem-inlcude-fix.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0059-sysctl-include-fix.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0060-net-flip-lock-dep-thingy.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0061-softirq-thread-do-softirq.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0062-softirq-split-out-code.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0063-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0064-x86-32-fix-signal-crap.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0065-x86-Do-not-disable-preemption-in-int3-on-32bit.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0066-rcu-Reduce-lock-section.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0067-locking-various-init-fixes.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0068-wait-Provide-__wake_up_all_locked.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0069-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0070-latency-hist.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0071-hwlatdetect.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0072-localversion.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0073-early-printk-consolidate.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0074-printk-kill.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0075-printk-force_early_printk-boot-param-to-help-with-de.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0076-rt-preempt-base-config.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0077-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0078-rt-local_irq_-variants-depending-on-RT-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0079-preempt-Provide-preempt_-_-no-rt-variants.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0080-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0081-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0082-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0083-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0084-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0085-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0086-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0087-usb-Use-local_irq_-_nort-variants.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0088-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0089-mm-scatterlist-dont-disable-irqs-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0090-signal-fix-up-rcu-wreckage.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0091-net-wireless-warn-nort.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0092-mm-Replace-cgroup_page-bit-spinlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0093-buffer_head-Replace-bh_uptodate_lock-for-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0094-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0095-genirq-Disable-DEBUG_SHIRQ-for-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0096-genirq-Disable-random-call-on-preempt-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0097-genirq-disable-irqpoll-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0098-genirq-force-threading.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0099-drivers-net-fix-livelock-issues.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0100-drivers-net-vortex-fix-locking-issues.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0101-drivers-net-gianfar-Make-RT-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0102-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0103-local-var.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0104-rt-local-irq-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0105-cpu-rt-variants.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0106-mm-slab-wrap-functions.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0107-slab-Fix-__do_drain-to-use-the-right-array-cache.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0108-mm-More-lock-breaks-in-slab.c.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0109-mm-page_alloc-rt-friendly-per-cpu-pages.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0110-mm-page_alloc-reduce-lock-sections-further.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0111-mm-page-alloc-fix.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0112-mm-convert-swap-to-percpu-locked.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0113-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0114-mm-make-vmstat-rt-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0115-mm-shrink-the-page-frame-to-rt-size.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0116-ARM-Initialize-ptl-lock-for-vector-page.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0117-mm-Allow-only-slab-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0118-radix-tree-rt-aware.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0119-panic-disable-random-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0120-ipc-Make-the-ipc-code-rt-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0121-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0122-relay-fix-timer-madness.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0123-net-ipv4-route-use-locks-on-up-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0124-workqueue-avoid-the-lock-in-cpu-dying.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0125-timers-prepare-for-full-preemption.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0126-timers-preempt-rt-support.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0127-timers-fix-timer-hotplug-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0128-timers-mov-printk_tick-to-soft-interrupt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0129-timer-delay-waking-softirqs-from-the-jiffy-tick.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0130-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0131-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0132-hrtimers-prepare-full-preemption.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0133-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0134-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0135-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0136-hrtimer-fix-reprogram-madness.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0137-timer-fd-Prevent-live-lock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0138-posix-timers-thread-posix-cpu-timers-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0139-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0140-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0141-sched-delay-put-task.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0142-sched-limit-nr-migrate.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0143-sched-mmdrop-delayed.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0144-sched-rt-mutex-wakeup.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0145-sched-prevent-idle-boost.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0146-sched-might-sleep-do-not-account-rcu-depth.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0147-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0148-sched-cond-resched.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0149-cond-resched-softirq-fix.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0150-sched-no-work-when-pi-blocked.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0151-cond-resched-lock-rt-tweak.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0152-sched-disable-ttwu-queue.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0153-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0154-sched-ttwu-Return-success-when-only-changing-the-sav.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0155-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0156-stomp-machine-mark-stomper-thread.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0157-stomp-machine-raw-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0158-hotplug-Lightweight-get-online-cpus.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0159-hotplug-sync_unplug-No.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0160-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0161-sched-migrate-disable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0162-hotplug-use-migrate-disable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0163-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0164-ftrace-migrate-disable-tracing.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0165-tracing-Show-padding-as-unsigned-short.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0166-migrate-disable-rt-variant.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0167-sched-Optimize-migrate_disable.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0168-sched-Generic-migrate_disable.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0169-sched-rt-Fix-migrate_enable-thinko.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0170-sched-teach-migrate_disable-about-atomic-contexts.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0171-sched-Postpone-actual-migration-disalbe-to-schedule.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0172-sched-Do-not-compare-cpu-masks-in-scheduler.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0173-sched-Have-migrate_disable-ignore-bounded-threads.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0174-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0175-ftrace-crap.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0176-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0177-net-netif_rx_ni-migrate-disable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0178-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0179-lockdep-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0180-mutex-no-spin-on-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0181-softirq-local-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0182-softirq-Export-in_serving_softirq.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0183-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0184-softirq-Fix-unplug-deadlock.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0185-softirq-disable-softirq-stacks-for-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0186-softirq-make-fifo.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0187-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0188-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0189-local-vars-migrate-disable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0190-md-raid5-Make-raid5_percpu-handling-RT-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0191-rtmutex-lock-killable.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0192-rtmutex-futex-prepare-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0193-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0194-rt-mutex-add-sleeping-spinlocks-support.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0195-spinlock-types-separate-raw.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0196-rtmutex-avoid-include-hell.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0197-rt-add-rt-spinlocks.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0198-rt-add-rt-to-mutex-headers.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0199-rwsem-add-rt-variant.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0200-rt-Add-the-preempt-rt-lock-replacement-APIs.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0201-rwlocks-Fix-section-mismatch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0202-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0203-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0204-rcu-Frob-softirq-test.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0205-rcu-Merge-RCU-bh-into-RCU-preempt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0206-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0207-rcu-more-fallout.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0208-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0209-rt-rcutree-Move-misplaced-prototype.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0210-lglocks-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0211-serial-8250-Clean-up-the-locking-for-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0212-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0213-drivers-tty-fix-omap-lock-crap.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0214-rt-Improve-the-serial-console-PASS_LIMIT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0215-fs-namespace-preemption-fix.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0216-mm-protect-activate-switch-mm.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0217-fs-block-rt-support.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0218-fs-ntfs-disable-interrupt-only-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0219-x86-Convert-mce-timer-to-hrtimer.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0220-x86-stackprotector-Avoid-random-pool-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0221-x86-Use-generic-rwsem_spinlocks-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0222-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0223-workqueue-use-get-cpu-light.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0224-epoll.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0225-mm-vmalloc.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0226-workqueue-Fix-cpuhotplug-trainwreck.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0227-workqueue-Fix-PF_THREAD_BOUND-abuse.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0228-workqueue-Use-get_cpu_light-in-flush_gcwq.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0229-hotplug-stuff.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0230-debugobjects-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0231-jump-label-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0232-skbufhead-raw-lock.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0233-x86-no-perf-irq-work-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0234-console-make-rt-friendly.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0235-printk-Disable-migration-instead-of-preemption.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0236-power-use-generic-rwsem-on-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0237-power-disable-highmem-on-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0238-arm-disable-highmem-on-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0239-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0240-mips-disable-highmem-on-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0241-net-Avoid-livelock-in-net_tx_action-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0242-ping-sysrq.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0243-kgdb-serial-Short-term-workaround.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0244-add-sys-kernel-realtime-entry.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0245-mm-rt-kmap_atomic-scheduling.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0246-ipc-sem-Rework-semaphore-wakeups.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0247-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0248-x86-kvm-require-const-tsc-for-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0249-scsi-fcoe-rt-aware.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0250-x86-crypto-Reduce-preempt-disabled-regions.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0251-dm-Make-rt-aware.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0252-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0253-seqlock-Prevent-rt-starvation.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0254-timer-Fix-hotplug-for-rt.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0255-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0256-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0257-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0258-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0259-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0260-softirq-Check-preemption-after-reenabling-interrupts.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0261-rt-Introduce-cpu_chill.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0262-fs-dcache-Use-cpu_chill-in-trylock-loops.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0263-net-Use-cpu_chill-instead-of-cpu_relax.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0264-kconfig-disable-a-few-options-rt.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0265-kconfig-preempt-rt-full.patch.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0266-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/0267-Linux-3.2.16-rt27-REBASE.patch
   dists/squeeze-backports/linux/debian/patches/features/all/rt/revert-workqueue-skip-nr_running-sanity-check-in-wor.patch
   dists/squeeze-backports/linux/debian/patches/features/all/topology-Provide-CPU-topology-in-sysfs-in-SMP-configura.patch
Modified:
   dists/squeeze-backports/linux/   (props changed)
   dists/squeeze-backports/linux/debian/README.Debian
   dists/squeeze-backports/linux/debian/bin/gencontrol.py
   dists/squeeze-backports/linux/debian/changelog
   dists/squeeze-backports/linux/debian/config/armel/config.kirkwood
   dists/squeeze-backports/linux/debian/config/config
   dists/squeeze-backports/linux/debian/config/defines
   dists/squeeze-backports/linux/debian/config/kernelarch-x86/config-arch-32
   dists/squeeze-backports/linux/debian/config/powerpc/config
   dists/squeeze-backports/linux/debian/installer/package-list
   dists/squeeze-backports/linux/debian/lib/python/debian_linux/gencontrol.py
   dists/squeeze-backports/linux/debian/patches/features/all/rt/series
   dists/squeeze-backports/linux/debian/patches/features/all/wacom/0026-Input-wacom-return-proper-error-if-usb_get_extra_des.patch
   dists/squeeze-backports/linux/debian/patches/series
   dists/squeeze-backports/linux/debian/patches/series-rt
   dists/squeeze-backports/linux/debian/rules
   dists/squeeze-backports/linux/debian/rules.real
   dists/squeeze-backports/linux/debian/templates/control.image.type-plain.in
   dists/squeeze-backports/linux/debian/templates/control.libc-dev.in
   dists/squeeze-backports/linux/debian/templates/control.main.in

Modified: dists/squeeze-backports/linux/debian/README.Debian
==============================================================================
--- dists/squeeze-backports/linux/debian/README.Debian	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/README.Debian	Fri Aug 17 02:04:57 2012	(r19325)
@@ -52,4 +52,5 @@
 Further information
 -------------------
 Debian Linux Kernel Handbook: http://kernel-handbook.alioth.debian.org
+                              or debian-kernel-handbook package
 Debian Wiki: http://wiki.debian.org/DebianKernel

Modified: dists/squeeze-backports/linux/debian/bin/gencontrol.py
==============================================================================
--- dists/squeeze-backports/linux/debian/bin/gencontrol.py	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/bin/gencontrol.py	Fri Aug 17 02:04:57 2012	(r19325)
@@ -60,11 +60,7 @@
             'SOURCEVERSION': self.version.complete,
         })
 
-    def do_main_packages(self, packages, vars, makeflags, extra):
-        packages.extend(self.process_packages(self.templates["control.main"], self.vars))
-
-    def do_main_recurse(self, packages, makefile, vars, makeflags, extra):
-        # Add featureset source rules
+    def do_main_makefile(self, makefile, makeflags, extra):
         for featureset in iter(self.config['base', ]['featuresets']):
             makeflags_featureset = makeflags.copy()
             makeflags_featureset['FEATURESET'] = featureset
@@ -75,7 +71,12 @@
                          ['source_%s_real' % featureset])
             makefile.add('source', ['source_%s' % featureset])
 
-        super(Gencontrol, self).do_main_recurse(packages, makefile, vars, makeflags, extra)
+        makeflags = makeflags.copy()
+        makeflags['ALL_FEATURESETS'] = ' '.join(self.config['base', ]['featuresets'])
+        super(Gencontrol, self).do_main_makefile(makefile, makeflags, extra)
+
+    def do_main_packages(self, packages, vars, makeflags, extra):
+        packages.extend(self.process_packages(self.templates["control.main"], self.vars))
 
     arch_makeflags = (
         ('kernel-arch', 'KERNEL_ARCH', False),
@@ -113,34 +114,38 @@
                      ["$(MAKE) -f debian/rules.real install-libc-dev_%s %s" %
                       (arch, makeflags)])
 
-        # Add udebs using kernel-wedge
-        installer_def_dir = 'debian/installer'
-        installer_arch_dir = os.path.join(installer_def_dir, arch)
-        if os.path.isdir(installer_arch_dir):
-            kw_env = os.environ.copy()
-            kw_env['KW_DEFCONFIG_DIR'] = installer_def_dir
-            kw_env['KW_CONFIG_DIR'] = installer_arch_dir
-            kw_proc = subprocess.Popen(
-                ['kernel-wedge', 'gen-control',
-                 self.abiname],
-                stdout=subprocess.PIPE,
-                env=kw_env)
-            udeb_packages = read_control(kw_proc.stdout)
-            kw_proc.wait()
-            if kw_proc.returncode != 0:
-                raise RuntimeError('kernel-wedge exited with code %d' %
-                                   kw_proc.returncode)
-
-            self.merge_packages(packages, udeb_packages, arch)
-
-            # These packages must be built after the per-flavour/
-            # per-featureset packages.
-            makefile.add(
-                'binary-arch_%s' % arch,
-                cmds=["$(MAKE) -f debian/rules.real install-udeb_%s %s "
-                        "PACKAGE_NAMES='%s'" %
-                        (arch, makeflags,
-                         ' '.join(p['Package'] for p in udeb_packages))])
+        if self.changelog[0].distribution == 'UNRELEASED' and os.getenv('DEBIAN_KERNEL_DISABLE_INSTALLER'):
+            import warnings
+            warnings.warn(u'Disable building of debug infos on request (DEBIAN_KERNEL_DISABLE_INSTALLER set)')
+        else:
+            # Add udebs using kernel-wedge
+            installer_def_dir = 'debian/installer'
+            installer_arch_dir = os.path.join(installer_def_dir, arch)
+            if os.path.isdir(installer_arch_dir):
+                kw_env = os.environ.copy()
+                kw_env['KW_DEFCONFIG_DIR'] = installer_def_dir
+                kw_env['KW_CONFIG_DIR'] = installer_arch_dir
+                kw_proc = subprocess.Popen(
+                    ['kernel-wedge', 'gen-control',
+                     self.abiname],
+                    stdout=subprocess.PIPE,
+                    env=kw_env)
+                udeb_packages = read_control(kw_proc.stdout)
+                kw_proc.wait()
+                if kw_proc.returncode != 0:
+                    raise RuntimeError('kernel-wedge exited with code %d' %
+                                       kw_proc.returncode)
+
+                self.merge_packages(packages, udeb_packages, arch)
+
+                # These packages must be built after the per-flavour/
+                # per-featureset packages.
+                makefile.add(
+                    'binary-arch_%s' % arch,
+                    cmds=["$(MAKE) -f debian/rules.real install-udeb_%s %s "
+                            "PACKAGE_NAMES='%s'" %
+                            (arch, makeflags,
+                             ' '.join(p['Package'] for p in udeb_packages))])
 
     def do_featureset_setup(self, vars, makeflags, arch, featureset, extra):
         config_base = self.config.merge('base', arch, featureset)
@@ -281,7 +286,7 @@
 
         if build_debug and self.changelog[0].distribution == 'UNRELEASED' and os.getenv('DEBIAN_KERNEL_DISABLE_DEBUG'):
             import warnings
-            warnings.warn(u'Disable building of debug infos on request (DEBIAN_KERNEL_DISABLE_DEBUG)')
+            warnings.warn(u'Disable building of debug infos on request (DEBIAN_KERNEL_DISABLE_DEBUG set)')
             build_debug = False
 
         if build_debug:
@@ -337,10 +342,10 @@
         cmds_binary_arch = ["$(MAKE) -f debian/rules.real binary-arch-flavour %s" % makeflags]
         if packages_dummy:
             cmds_binary_arch.append("$(MAKE) -f debian/rules.real install-dummy DH_OPTIONS='%s' %s" % (' '.join(["-p%s" % i['Package'] for i in packages_dummy]), makeflags))
-        cmds_build = ["$(MAKE) -f debian/rules.real build %s" % makeflags]
+        cmds_build = ["$(MAKE) -f debian/rules.real build-arch %s" % makeflags]
         cmds_setup = ["$(MAKE) -f debian/rules.real setup-flavour %s" % makeflags]
         makefile.add('binary-arch_%s_%s_%s_real' % (arch, featureset, flavour), cmds=cmds_binary_arch)
-        makefile.add('build_%s_%s_%s_real' % (arch, featureset, flavour), cmds=cmds_build)
+        makefile.add('build-arch_%s_%s_%s_real' % (arch, featureset, flavour), cmds=cmds_build)
         makefile.add('setup_%s_%s_%s_real' % (arch, featureset, flavour), cmds=cmds_setup)
 
     def merge_packages(self, packages, new, arch):
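The hunk above keeps the existing pattern for collecting udeb stanzas: spawn `kernel-wedge gen-control` with `KW_DEFCONFIG_DIR`/`KW_CONFIG_DIR` in the environment, read its stdout, and raise if it exits non-zero. A minimal runnable sketch of that pattern follows; the real helper is replaced by a portable stand-in shell command, and `gen_control()` is a hypothetical name, not part of the Debian build scripts.

```python
# Sketch of the subprocess pattern used in gencontrol.py: run a helper
# with an adjusted environment, capture its stdout (parsed by
# read_control() in the real code), and fail loudly on non-zero exit.
import os
import subprocess

def gen_control(cmd, extra_env=None):
    env = os.environ.copy()
    env.update(extra_env or {})   # e.g. KW_DEFCONFIG_DIR, KW_CONFIG_DIR
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env)
    out = proc.stdout.read()
    proc.wait()
    if proc.returncode != 0:
        raise RuntimeError('%s exited with code %d' % (cmd[0], proc.returncode))
    return out

# Stand-in for "kernel-wedge gen-control": echo one control stanza line.
stanzas = gen_control(['sh', '-c', 'echo "Package: nic-modules"'])
```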

Modified: dists/squeeze-backports/linux/debian/changelog
==============================================================================
--- dists/squeeze-backports/linux/debian/changelog	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/changelog	Fri Aug 17 02:04:57 2012	(r19325)
@@ -1,15 +1,97 @@
-linux (3.2.20-1~bpo60+1) squeeze-backports; urgency=low
+linux (3.2.21-3~bpo60+1) squeeze-backports; urgency=low
 
   * Rebuild for squeeze:
     - Use gcc-4.4 for all architectures
     - Disable building of udebs
-    - Change ABI number to 0.bpo.2
+    - Change ABI number to 0.bpo.3
     - Monkey-patch Python collections module to add OrderedDict if necessary
     - [armel] Disable CRYPTO_FIPS, VGA_ARB, FTRACE on iop32x and ixp4xx to
       reduce kernel size (as suggested by Arnaud Patard)
     - Use QUILT_PATCH_OPTS instead of missing quilt patch --fuzz option
 
- -- Ben Hutchings <ben at decadent.org.uk>  Fri, 29 Jun 2012 03:14:54 +0100
+ -- Ben Hutchings <ben at decadent.org.uk>  Fri, 17 Aug 2012 02:57:53 +0100
+
+linux (3.2.21-3) unstable; urgency=low
+
+  * driver core: remove __must_check from device_create_file
+    (fixes FTBFS on sparc)
+  * i2400m: Disable I2400M_SDIO; hardware did not reach production
+  * apparmor: remove advertising the support of network rules from
+    compat iface (Closes: #676515)
+  * xen/netfront: teardown the device before unregistering it (Closes: #675190)
+  * linux-{doc,manual,source,support}: Mark as capable of satisfying
+    relations from foreign packages (Multi-Arch: foreign) (Closes: #679202)
+
+ -- Ben Hutchings <ben at decadent.org.uk>  Thu, 28 Jun 2012 04:58:18 +0100
+
+linux (3.2.21-2) unstable; urgency=low
+
+  * [i386] cpufreq/gx: Fix the compile error
+  * [powerpc] Enable PPC_DISABLE_WERROR (fixes FTBFS)
+  * tracing/mm: Move include of trace/events/kmem.h out of header into slab.c
+    (fixes FTBFS on sparc)
+  * [i386] Disable incomplete lguest support
+  * udeb: Add missing dependencies for various modules (see #678587)
+    - [armel/kirkwood] fb-modules depends on kernel-image
+    - [ia64] nic-usb-modules depends on kernel-image, nic-shared-modules,
+      usb-modules
+    - [ia64] sata-modules depends on kernel-image, scsi-core-modules
+    - [ia64] scsi-modules depends on scsi-core-modules
+    - [ia64,powerpc,ppc64] pcmcia-modules depends on kernel-image
+    - [powerpc,ppc64] nic-pcmcia-modules depends on kernel-image,
+      nic-shared-modules, pcmcia-modules
+    - [powerpc,ppc64,x86] scsi-modules depends on ata-modules
+    - [x86] nic-extra-modules depends on i2c-modules
+  * wacom: do not crash when retrieving touch_max (Closes: #678798)
+  * wacom: Revert unintended changes to handling of Tablet PCs
+    (Closes: #677164)
+  * linux-image, README.Debian: Suggest debian-kernel-handbook package
+
+  [ Arnaud Patard ]
+  * [armel, armhf] backport BPF JIT support
+
+ -- Ben Hutchings <ben at decadent.org.uk>  Tue, 26 Jun 2012 01:56:42 +0100
+
+linux (3.2.21-1) unstable; urgency=low
+
+  * New upstream stable update:
+    http://www.kernel.org/pub/linux/kernel/v3.x/ChangeLog-3.2.21
+    - NFSv4.1: Fix a request leak on the back channel
+    - target: Return error to initiator if SET TARGET PORT GROUPS emulation
+      fails
+    - USB: add NO_D3_DURING_SLEEP flag and revert 151b61284776be2
+    - USB: fix gathering of interface associations
+
+  [ Ben Hutchings ]
+  * [ia64,powerpc] udeb: Add crc-itu-t to crc-modules; make
+    firewire-core-modules depend on it (fixes FTBFS)
+  * [arm,m68k,sh4] udeb: Build ipv6-modules
+  * ethtool: allow ETHTOOL_GSSET_INFO for users
+  * [rt] bump version to 3.2.20-rt32
+  * cpu: Convert 'cpu' and 'machinecheck' sysdev_class to a regular subsystem
+  * [x86] Add driver auto probing for x86 features
+    - crypto: Add support for x86 cpuid auto loading for x86 crypto drivers
+      (Closes: #568008)
+    - intel-idle: convert to x86_cpu_id auto probing
+    - HWMON: Convert coretemp to x86 cpuid autoprobing
+    - HWMON: Convert via-cputemp to x86 cpuid autoprobing
+    - cpufreq: Add support for x86 cpuinfo auto loading (Closes: #664813)
+  * [x86] ACPI: Load acpi-cpufreq from processor driver automatically
+  * Bump ABI to 3
+  * input: Add Synaptics USB device driver (Closes: #678071)
+  * [x86] udeb: Fix dependencies for nic-wireless-modules
+
+  [ Aurelien Jarno ]
+  * [mips,mipsel] udeb: Remove rivafb and nvidiafb.
+  * [ppc64]: add udebs, based on powerpc/powerpc64.
+
+  [ Bastian Blank ]
+  * Support build-arch and build-indep make targets.
+
+  [ Arnaud Patard ]
+  * [armel/kirkwood] Add dreamplug and iconnect support (Closes: #675922)
+
+ -- Ben Hutchings <ben at decadent.org.uk>  Fri, 22 Jun 2012 13:54:15 +0100
 
 linux (3.2.20-1) unstable; urgency=low
 

Modified: dists/squeeze-backports/linux/debian/config/armel/config.kirkwood
==============================================================================
--- dists/squeeze-backports/linux/debian/config/armel/config.kirkwood	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/config/armel/config.kirkwood	Fri Aug 17 02:04:57 2012	(r19325)
@@ -37,6 +37,8 @@
 CONFIG_UACCESS_WITH_MEMCPY=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_ARM_APPENDED_DTB=y
+CONFIG_ARM_ATAG_DTB_COMPAT=y
 CONFIG_CMDLINE=""
 # CONFIG_XIP_KERNEL is not set
 CONFIG_KEXEC=y
@@ -57,6 +59,9 @@
 CONFIG_MACH_SHEEVAPLUG=y
 CONFIG_MACH_ESATA_SHEEVAPLUG=y
 CONFIG_MACH_GURUPLUG=y
+CONFIG_ARCH_KIRKWOOD_DT=y
+CONFIG_MACH_DREAMPLUG_DT=y
+CONFIG_MACH_ICONNECT_DT=y
 CONFIG_MACH_TS219=y
 CONFIG_MACH_TS41X=y
 CONFIG_MACH_DOCKSTAR=y
@@ -287,6 +292,7 @@
 CONFIG_MTD=y
 # CONFIG_MTD_REDBOOT_PARTS is not set
 CONFIG_MTD_CMDLINE_PARTS=y
+CONFIG_MTD_OF_PARTS=y
 # CONFIG_MTD_AFS_PARTS is not set
 CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
@@ -349,6 +355,7 @@
 CONFIG_MTD_PHYSMAP_START=0x0
 CONFIG_MTD_PHYSMAP_LEN=0x0
 CONFIG_MTD_PHYSMAP_BANKWIDTH=0
+CONFIG_MTD_PHYSMAP_OF=y
 # CONFIG_MTD_IMPA7 is not set
 # CONFIG_MTD_INTEL_VR_NOR is not set
 # CONFIG_MTD_PLATRAM is not set
@@ -461,6 +468,11 @@
 CONFIG_MWIFIEX_SDIO=m
 
 ##
+## file: drivers/of/Kconfig
+##
+CONFIG_PROC_DEVICETREE=y
+
+##
 ## file: drivers/pcmcia/Kconfig
 ##
 # CONFIG_PCCARD is not set
@@ -558,6 +570,8 @@
 CONFIG_SERIAL_8250_NR_UARTS=4
 CONFIG_SERIAL_8250_RUNTIME_UARTS=2
 # CONFIG_SERIAL_8250_EXTENDED is not set
+# CONFIG_SERIAL_8250_DW is not set
+CONFIG_SERIAL_OF_PLATFORM=y
 
 ##
 ## file: drivers/usb/Kconfig

Modified: dists/squeeze-backports/linux/debian/config/config
==============================================================================
--- dists/squeeze-backports/linux/debian/config/config	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/config/config	Fri Aug 17 02:04:57 2012	(r19325)
@@ -738,6 +738,7 @@
 # CONFIG_MOUSE_PS2_TOUCHKIT is not set
 # CONFIG_MOUSE_GPIO is not set
 CONFIG_MOUSE_SYNAPTICS_I2C=m
+CONFIG_MOUSE_SYNAPTICS_USB=m
 
 ##
 ## file: drivers/input/serio/Kconfig
@@ -1545,8 +1546,7 @@
 ##
 ## file: drivers/misc/iwmc3200top/Kconfig
 ##
-# CONFIG_IWMC3200TOP_DEBUG is not set
-# CONFIG_IWMC3200TOP_DEBUGFS is not set
+# CONFIG_IWMC3200TOP is not set
 
 ##
 ## file: drivers/misc/lis3lv02d/Kconfig
@@ -2159,8 +2159,7 @@
 ## file: drivers/net/wimax/i2400m/Kconfig
 ##
 CONFIG_WIMAX_I2400M_USB=m
-CONFIG_WIMAX_I2400M_SDIO=m
-CONFIG_WIMAX_IWMC3200_SDIO=y
+# CONFIG_WIMAX_I2400M_SDIO is not set
 CONFIG_WIMAX_I2400M_DEBUG_LEVEL=8
 
 ##

Modified: dists/squeeze-backports/linux/debian/config/defines
==============================================================================
--- dists/squeeze-backports/linux/debian/config/defines	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/config/defines	Fri Aug 17 02:04:57 2012	(r19325)
@@ -1,7 +1,5 @@
 [abi]
-abiname: 0.bpo.2
-ignore-changes: module:drivers/net/wireless/ath/ath9k/*
- module:drivers/hv/*
+abiname: 0.bpo.3
 
 [base]
 arches:
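The defines file edited above follows an INI-like layout (`[abi]`, `[base]` sections with `key: value` entries and indented continuation lines), so the abiname bump is mechanically readable. A small sketch, with the file content inlined here rather than read from debian/config/defines, and assuming the stdlib parser's default handling of `:` delimiters is close enough to the real parser in debian_linux:

```python
# Read the bumped abiname back out of an INI-like defines fragment.
import configparser

defines = """\
[abi]
abiname: 0.bpo.3

[base]
arches:
 armel
"""

cfg = configparser.ConfigParser()
cfg.read_string(defines)
abiname = cfg['abi']['abiname']
```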

Modified: dists/squeeze-backports/linux/debian/config/kernelarch-x86/config-arch-32
==============================================================================
--- dists/squeeze-backports/linux/debian/config/kernelarch-x86/config-arch-32	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/config/kernelarch-x86/config-arch-32	Fri Aug 17 02:04:57 2012	(r19325)
@@ -45,7 +45,7 @@
 ##
 ## file: arch/x86/lguest/Kconfig
 ##
-CONFIG_LGUEST_GUEST=y
+# CONFIG_LGUEST_GUEST is not set
 
 ##
 ## file: crypto/Kconfig
@@ -188,7 +188,7 @@
 ##
 ## file: drivers/lguest/Kconfig
 ##
-CONFIG_LGUEST=m
+# CONFIG_LGUEST is not set
 
 ##
 ## file: drivers/macintosh/Kconfig

Modified: dists/squeeze-backports/linux/debian/config/powerpc/config
==============================================================================
--- dists/squeeze-backports/linux/debian/config/powerpc/config	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/config/powerpc/config	Fri Aug 17 02:04:57 2012	(r19325)
@@ -19,6 +19,7 @@
 ##
 ## file: arch/powerpc/Kconfig.debug
 ##
+CONFIG_PPC_DISABLE_WERROR=y
 # CONFIG_DEBUG_STACKOVERFLOW is not set
 # CONFIG_CODE_PATCHING_SELFTEST is not set
 # CONFIG_FTR_FIXUP_SELFTEST is not set

Modified: dists/squeeze-backports/linux/debian/installer/package-list
==============================================================================
--- dists/squeeze-backports/linux/debian/installer/package-list	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/installer/package-list	Fri Aug 17 02:04:57 2012	(r19325)
@@ -22,7 +22,7 @@
  This package contains rare NIC drivers for the kernel.
 
 Package: nic-wireless-modules
-Depends: kernel-image, nic-shared-modules, core-modules, usb-modules, mmc-modules, pcmcia-modules
+Depends: kernel-image, nic-shared-modules, core-modules, usb-modules, mmc-modules, pcmcia-modules, crypto-core-modules, crc-modules
 Priority: standard
 Description: Wireless NIC drivers
  This package contains wireless NIC drivers for the kernel.
@@ -77,14 +77,8 @@
 Description: CDROM support
  This package contains core CDROM support for the kernel.
 
-Package: cdrom-modules
-Depends: kernel-image, ide-modules, cdrom-core-modules
-Priority: optional
-Description: Esoteric CDROM drivers
- This package contains esoteric CDROM drivers for the kernel.
-
 Package: firewire-core-modules
-Depends: kernel-image, scsi-core-modules
+Depends: kernel-image, scsi-core-modules, crc-modules
 Priority: standard
 Description: Core FireWire drivers
  This package contains core FireWire drivers for the kernel.
@@ -139,12 +133,6 @@
 Description: IPv6 driver
  This package contains the IPv6 driver for the kernel.
 
-Package: nls-core-modules
-Depends: kernel-image
-Priority: extra
-Description: Core NLS support
- This package contains basic NLS support modules for the kernel.
-
 Package: btrfs-modules
 Depends: kernel-image, core-modules, crc-modules, zlib-modules, lzo-modules
 Priority: extra
@@ -229,12 +217,6 @@
 Description: UFS filesystem support
  This package contains the UFS filesystem module for the kernel.
 
-Package: zfs-modules
-Depends: kernel-image
-Priority: extra
-Description: ZFS filesystem support
- This package contains the ZFS filesystem module for the kernel.
-
 Package: qnx4-modules
 Depends: kernel-image
 Priority: extra
@@ -253,12 +235,6 @@
 Description: NFS filesystem support
  This package contains the NFS filesystem module for the kernel.
 
-Package: nullfs-modules
-Depends: kernel-image
-Priority: standard
-Description: nullfs filesystem support
- This package contains the nullfs filesystem module for the kernel.
-
 Package: md-modules
 Depends: kernel-image
 Priority: extra
@@ -419,12 +395,6 @@
  This package contains the modules required for support of the Network Block
  Device
 
-Package: loop-aes-modules
-Depends: kernel-image!
-Priority: extra
-Description: loop-AES crypto modules
- This package contains loop-AES crypto modules.
-
 Package: squashfs-modules
 Depends: kernel-image
 Priority: extra

Modified: dists/squeeze-backports/linux/debian/lib/python/debian_linux/gencontrol.py
==============================================================================
--- dists/squeeze-backports/linux/debian/lib/python/debian_linux/gencontrol.py	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/lib/python/debian_linux/gencontrol.py	Fri Aug 17 02:04:57 2012	(r19325)
@@ -77,7 +77,7 @@
 
 
 class Gencontrol(object):
-    makefile_targets = ('binary-arch', 'build', 'setup')
+    makefile_targets = ('binary-arch', 'build-arch', 'setup')
 
     def __init__(self, config, templates, version=Version):
         self.config, self.templates = config, templates
@@ -114,8 +114,7 @@
         pass
 
     def do_main_makefile(self, makefile, makeflags, extra):
-        makeflags = makeflags.copy()
-        makeflags['ALL_FEATURESETS'] = ' '.join(self.config['base', ]['featuresets'])
+        makefile.add('build-indep', cmds=["$(MAKE) -f debian/rules.real build-indep %s" % makeflags])
         makefile.add('binary-indep', cmds=["$(MAKE) -f debian/rules.real binary-indep %s" % makeflags])
 
     def do_main_packages(self, packages, vars, makeflags, extra):
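The change above splits the generated `build` target into `build-arch` and `build-indep` entry points, each added to the makefile via `makefile.add(target, cmds=[...])`. A minimal sketch of that accumulator pattern follows; this `Makefile` class is a hypothetical stand-in for illustration, not the debian_linux implementation.

```python
# Accumulate makefile rules per target, then emit them; mirrors the
# makefile.add() calls in do_main_makefile() after the build-arch /
# build-indep split.
class Makefile(object):
    def __init__(self):
        self.rules = {}   # target -> (deps, cmds)

    def add(self, target, deps=None, cmds=None):
        old_deps, old_cmds = self.rules.get(target, ([], []))
        self.rules[target] = (old_deps + (deps or []),
                              old_cmds + (cmds or []))

    def write(self):
        lines = []
        for target in sorted(self.rules):
            deps, cmds = self.rules[target]
            lines.append(('%s: %s' % (target, ' '.join(deps))).rstrip())
            lines.extend('\t' + c for c in cmds)
        return '\n'.join(lines)

m = Makefile()
m.add('build-indep', cmds=['$(MAKE) -f debian/rules.real build-indep'])
m.add('binary-indep', cmds=['$(MAKE) -f debian/rules.real binary-indep'])
```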

Copied: dists/squeeze-backports/linux/debian/patches/bugfix/all/apparmor-remove-advertising-the-support-of-network-r.patch (from r19226, dists/sid/linux/debian/patches/bugfix/all/apparmor-remove-advertising-the-support-of-network-r.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/bugfix/all/apparmor-remove-advertising-the-support-of-network-r.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/bugfix/all/apparmor-remove-advertising-the-support-of-network-r.patch)
@@ -0,0 +1,32 @@
+From 873143ceca69a2e54e7face1be49ad6b5514525d Mon Sep 17 00:00:00 2001
+From: John Johansen <john.johansen at canonical.com>
+Date: Tue, 26 Jun 2012 02:12:10 -0700
+Subject: [PATCH 1/4] apparmor: remove advertising the support of network
+ rules from compat iface
+
+The interface compatibility patch was advertising support of network rules,
+however this is not true if the networking patch is not applied. Move
+advertising of network rules into a third patch that can be applied if
+both the compatibility and network patches are applied.
+
+Signed-off-by: John Johansen <john.johansen at canonical.com>
+---
+ security/apparmor/apparmorfs-24.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/security/apparmor/apparmorfs-24.c b/security/apparmor/apparmorfs-24.c
+index dc8c744..367c7ea 100644
+--- a/security/apparmor/apparmorfs-24.c
++++ b/security/apparmor/apparmorfs-24.c
+@@ -49,7 +49,7 @@ const struct file_operations aa_fs_matching_fops = {
+ static ssize_t aa_features_read(struct file *file, char __user *buf,
+ 				size_t size, loff_t *ppos)
+ {
+-	const char features[] = "file=3.1 capability=2.0 network=1.0 "
++	const char features[] = "file=3.1 capability=2.0 "
+ 	    "change_hat=1.5 change_profile=1.1 " "aanamespaces=1.1 rlimit=1.1";
+ 
+ 	return simple_read_from_buffer(buf, size, ppos, features,
+-- 
+1.7.9.5
+

Copied: dists/squeeze-backports/linux/debian/patches/bugfix/all/ethtool-allow-ETHTOOL_GSSET_INFO-for-users.patch (from r19226, dists/sid/linux/debian/patches/bugfix/all/ethtool-allow-ETHTOOL_GSSET_INFO-for-users.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/bugfix/all/ethtool-allow-ETHTOOL_GSSET_INFO-for-users.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/bugfix/all/ethtool-allow-ETHTOOL_GSSET_INFO-for-users.patch)
@@ -0,0 +1,31 @@
+From: =?UTF-8?q?Micha=C5=82=20Miros=C5=82aw?= <mirq-linux at rere.qmqm.pl>
+Date: Sun, 22 Jan 2012 00:20:40 +0000
+Subject: ethtool: allow ETHTOOL_GSSET_INFO for users
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+commit f80400a26a2e8bff541de12834a1134358bb6642 upstream.
+
+Allow ETHTOOL_GSSET_INFO ethtool ioctl() for unprivileged users.
+ETHTOOL_GSTRINGS is already allowed, but is unusable without this one.
+
+Signed-off-by: Michał Mirosław <mirq-linux at rere.qmqm.pl>
+Acked-by: Ben Hutchings <bhutchings at solarflare.com>
+Signed-off-by: David S. Miller <davem at davemloft.net>
+---
+ net/core/ethtool.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 921aa2b..369b418 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -1311,6 +1311,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
+ 	case ETHTOOL_GRXCSUM:
+ 	case ETHTOOL_GTXCSUM:
+ 	case ETHTOOL_GSG:
++	case ETHTOOL_GSSET_INFO:
+ 	case ETHTOOL_GSTRINGS:
+ 	case ETHTOOL_GTSO:
+ 	case ETHTOOL_GPERMADDR:

Copied: dists/squeeze-backports/linux/debian/patches/bugfix/all/xen-netfront-teardown-the-device-before-unregistering-it.patch (from r19226, dists/sid/linux/debian/patches/bugfix/all/xen-netfront-teardown-the-device-before-unregistering-it.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/bugfix/all/xen-netfront-teardown-the-device-before-unregistering-it.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/bugfix/all/xen-netfront-teardown-the-device-before-unregistering-it.patch)
@@ -0,0 +1,49 @@
+From: Ian Campbell <ian.campbell at citrix.com>
+Date: Tue, 26 Jun 2012 09:48:41 +0100
+Subject: xen/netfront: teardown the device before unregistering it.
+Bug-Debian: http://bugs.debian.org/675190
+
+Fixes:
+[   15.470311] WARNING: at /local/scratch/ianc/devel/kernels/linux/fs/sysfs/file.c:498 sysfs_attr_ns+0x95/0xa0()
+[   15.470326] sysfs: kobject eth0 without dirent
+[   15.470333] Modules linked in:
+[   15.470342] Pid: 12, comm: xenwatch Not tainted 3.4.0-x86_32p-xenU #93
+and
+[    9.150554] BUG: unable to handle kernel paging request at 2b359000
+[    9.150577] IP: [<c1279561>] linkwatch_do_dev+0x81/0xc0
+[    9.150592] *pdpt = 000000002c3c9027 *pde = 0000000000000000
+[    9.150604] Oops: 0002 [#1] SMP
+[    9.150613] Modules linked in:
+
+This is http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=675190
+
+Reported-by: George Shuklin <george.shuklin at gmail.com>
+Signed-off-by: Ian Campbell <ian.campbell at citrix.com>
+Tested-by: William Dauchy <wdauchy at gmail.com>
+Cc: stable at kernel.org
+Cc: 675190 at bugs.debian.org
+---
+ drivers/net/xen-netfront.c |    8 ++++----
+ 1 files changed, 4 insertions(+), 4 deletions(-)
+
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -1922,14 +1922,14 @@
+ 
+ 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+ 
+-	unregister_netdev(info->netdev);
+-
+ 	xennet_disconnect_backend(info);
+ 
+-	del_timer_sync(&info->rx_refill_timer);
+-
+ 	xennet_sysfs_delif(info->netdev);
+ 
++	unregister_netdev(info->netdev);
++
++	del_timer_sync(&info->rx_refill_timer);
++
+ 	free_percpu(info->stats);
+ 
+ 	free_netdev(info->netdev);

Copied: dists/squeeze-backports/linux/debian/patches/debian/driver-core-avoid-ABI-change-for-removal-of-__must_check.patch (from r19226, dists/sid/linux/debian/patches/debian/driver-core-avoid-ABI-change-for-removal-of-__must_check.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/debian/driver-core-avoid-ABI-change-for-removal-of-__must_check.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/debian/driver-core-avoid-ABI-change-for-removal-of-__must_check.patch)
@@ -0,0 +1,22 @@
+From: Ben Hutchings <ben at decadent.org.uk>
+Subject: driver core: Avoid ABI change for removal of __must_check
+
+Surprisingly, __must_check contributes to the symbol version hash
+despite making no real difference to the function's ABI.
+
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -510,8 +510,13 @@
+ 	struct dev_ext_attribute dev_attr_##_name = \
+ 		{ __ATTR(_name, _mode, device_show_ulong, device_store_ulong), &(_var) }
+ 
++#ifdef __GENKSYMS__
++extern int __must_check device_create_file(struct device *device,
++					const struct device_attribute *entry);
++#else
+ extern int device_create_file(struct device *device,
+ 			      const struct device_attribute *entry);
++#endif
+ extern void device_remove_file(struct device *dev,
+ 			       const struct device_attribute *attr);
+ extern int __must_check device_create_bin_file(struct device *dev,

Copied: dists/squeeze-backports/linux/debian/patches/features/all/Input-add-Synaptics-USB-device-driver.patch (from r19226, dists/sid/linux/debian/patches/features/all/Input-add-Synaptics-USB-device-driver.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/Input-add-Synaptics-USB-device-driver.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/Input-add-Synaptics-USB-device-driver.patch)
@@ -0,0 +1,683 @@
+From: Jan Steinhoff <mail at jan-steinhoff.de>
+Date: Fri, 3 Feb 2012 00:21:31 -0800
+Subject: Input: add Synaptics USB device driver
+
+commit 8491ee1093c476ea3a9a19ab8593d8531cab40f7 upstream.
+
+This patch adds a driver for Synaptics USB touchpad and pointing stick
+devices. These USB devices emulate a USB mouse by default, so one can
+also use the usbhid driver. However, in combination with special
+user-space drivers, this kernel driver allows one to customize the
+behaviour of the device.
+
+An extended version of this driver with support for the cPad background
+display can be found at
+<http://jan-steinhoff.de/linux/synaptics-usb.html>.
+
+Signed-off-by: Jan Steinhoff <mail at jan-steinhoff.de>
+Acked-by: Jiri Kosina <jkosina at suse.cz>
+Signed-off-by: Dmitry Torokhov <dtor at mail.ru>
+---
+ drivers/hid/hid-core.c              |   10 +
+ drivers/hid/hid-ids.h               |   11 +
+ drivers/input/mouse/Kconfig         |   17 ++
+ drivers/input/mouse/Makefile        |    1 +
+ drivers/input/mouse/synaptics_usb.c |  568 +++++++++++++++++++++++++++++++++++
+ 5 files changed, 607 insertions(+)
+ create mode 100644 drivers/input/mouse/synaptics_usb.c
+
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 848a56c..b639855 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1892,6 +1892,16 @@ static const struct hid_device_id hid_ignore_list[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0004) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PHILIPS, USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_POWERCOM, USB_DEVICE_ID_POWERCOM_UPS) },
++#if defined(CONFIG_MOUSE_SYNAPTICS_USB) || defined(CONFIG_MOUSE_SYNAPTICS_USB_MODULE)
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_TP) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_INT_TP) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_CPAD) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_STICK) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_WP) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_COMP_TP) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_WTP) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DPAD) },
++#endif
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_LABPRO) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_GOTEMP) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_SKIP) },
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 06ce996..3b68343 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -633,6 +633,17 @@
+ #define USB_DEVICE_ID_SYMBOL_SCANNER_1	0x0800
+ #define USB_DEVICE_ID_SYMBOL_SCANNER_2	0x1300
+ 
++#define USB_VENDOR_ID_SYNAPTICS		0x06cb
++#define USB_DEVICE_ID_SYNAPTICS_TP	0x0001
++#define USB_DEVICE_ID_SYNAPTICS_INT_TP	0x0002
++#define USB_DEVICE_ID_SYNAPTICS_CPAD	0x0003
++#define USB_DEVICE_ID_SYNAPTICS_TS	0x0006
++#define USB_DEVICE_ID_SYNAPTICS_STICK	0x0007
++#define USB_DEVICE_ID_SYNAPTICS_WP	0x0008
++#define USB_DEVICE_ID_SYNAPTICS_COMP_TP	0x0009
++#define USB_DEVICE_ID_SYNAPTICS_WTP	0x0010
++#define USB_DEVICE_ID_SYNAPTICS_DPAD	0x0013
++
+ #define USB_VENDOR_ID_THRUSTMASTER	0x044f
+ 
+ #define USB_VENDOR_ID_TOPSEED		0x0766
+diff --git a/drivers/input/mouse/Kconfig b/drivers/input/mouse/Kconfig
+index 9c1e6ee..9b8db82 100644
+--- a/drivers/input/mouse/Kconfig
++++ b/drivers/input/mouse/Kconfig
+@@ -322,4 +322,21 @@ config MOUSE_SYNAPTICS_I2C
+ 	  To compile this driver as a module, choose M here: the
+ 	  module will be called synaptics_i2c.
+ 
++config MOUSE_SYNAPTICS_USB
++	tristate "Synaptics USB device support"
++	depends on USB_ARCH_HAS_HCD
++	select USB
++	help
++	  Say Y here if you want to use a Synaptics USB touchpad or pointing
++	  stick.
++
++	  While these devices emulate a USB mouse by default and can be used
++	  with the standard usbhid driver, this driver, together with its X.Org
++	  counterpart, allows you to fully utilize the capabilities of the device.
++	  More information can be found at:
++	  <http://jan-steinhoff.de/linux/synaptics-usb.html>
++
++	  To compile this driver as a module, choose M here: the
++	  module will be called synaptics_usb.
++
+ endif
+diff --git a/drivers/input/mouse/Makefile b/drivers/input/mouse/Makefile
+index 570c84a4..4718eff 100644
+--- a/drivers/input/mouse/Makefile
++++ b/drivers/input/mouse/Makefile
+@@ -18,6 +18,7 @@ obj-$(CONFIG_MOUSE_PXA930_TRKBALL)	+= pxa930_trkball.o
+ obj-$(CONFIG_MOUSE_RISCPC)		+= rpcmouse.o
+ obj-$(CONFIG_MOUSE_SERIAL)		+= sermouse.o
+ obj-$(CONFIG_MOUSE_SYNAPTICS_I2C)	+= synaptics_i2c.o
++obj-$(CONFIG_MOUSE_SYNAPTICS_USB)	+= synaptics_usb.o
+ obj-$(CONFIG_MOUSE_VSXXXAA)		+= vsxxxaa.o
+ 
+ psmouse-objs := psmouse-base.o synaptics.o
+diff --git a/drivers/input/mouse/synaptics_usb.c b/drivers/input/mouse/synaptics_usb.c
+new file mode 100644
+index 0000000..e559a94
+--- /dev/null
++++ b/drivers/input/mouse/synaptics_usb.c
+@@ -0,0 +1,568 @@
++/*
++ * USB Synaptics device driver
++ *
++ *  Copyright (c) 2002 Rob Miller (rob at inpharmatica . co . uk)
++ *  Copyright (c) 2003 Ron Lee (ron at debian.org)
++ *	cPad driver for kernel 2.4
++ *
++ *  Copyright (c) 2004 Jan Steinhoff (cpad at jan-steinhoff . de)
++ *  Copyright (c) 2004 Ron Lee (ron at debian.org)
++ *	rewritten for kernel 2.6
++ *
++ *  cPad display character device part is not included. It can be found at
++ *  http://jan-steinhoff.de/linux/synaptics-usb.html
++ *
++ * Based on:	usb_skeleton.c v2.2 by Greg Kroah-Hartman
++ *		drivers/hid/usbhid/usbmouse.c by Vojtech Pavlik
++ *		drivers/input/mouse/synaptics.c by Peter Osterlund
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the Free
++ * Software Foundation; either version 2 of the License, or (at your option)
++ * any later version.
++ *
++ * Trademarks are the property of their respective owners.
++ */
++
++/*
++ * There are three different types of Synaptics USB devices: Touchpads,
++ * touchsticks (or trackpoints), and touchscreens. Touchpads are well supported
++ * by this driver, touchstick support has not been tested much yet, and
++ * touchscreens have not been tested at all.
++ *
++ * Up to three alternate settings are possible:
++ *	setting 0: one int endpoint for relative movement (used by usbhid.ko)
++ *	setting 1: one int endpoint for absolute finger position
++ *	setting 2 (cPad only): one int endpoint for absolute finger position and
++ *		   two bulk endpoints for the display (in/out)
++ * This driver uses setting 1.
++ */
++
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/slab.h>
++#include <linux/module.h>
++#include <linux/moduleparam.h>
++#include <linux/usb.h>
++#include <linux/input.h>
++#include <linux/usb/input.h>
++
++#define USB_VENDOR_ID_SYNAPTICS	0x06cb
++#define USB_DEVICE_ID_SYNAPTICS_TP	0x0001	/* Synaptics USB TouchPad */
++#define USB_DEVICE_ID_SYNAPTICS_INT_TP	0x0002	/* Integrated USB TouchPad */
++#define USB_DEVICE_ID_SYNAPTICS_CPAD	0x0003	/* Synaptics cPad */
++#define USB_DEVICE_ID_SYNAPTICS_TS	0x0006	/* Synaptics TouchScreen */
++#define USB_DEVICE_ID_SYNAPTICS_STICK	0x0007	/* Synaptics USB Styk */
++#define USB_DEVICE_ID_SYNAPTICS_WP	0x0008	/* Synaptics USB WheelPad */
++#define USB_DEVICE_ID_SYNAPTICS_COMP_TP	0x0009	/* Composite USB TouchPad */
++#define USB_DEVICE_ID_SYNAPTICS_WTP	0x0010	/* Wireless TouchPad */
++#define USB_DEVICE_ID_SYNAPTICS_DPAD	0x0013	/* DisplayPad */
++
++#define SYNUSB_TOUCHPAD			(1 << 0)
++#define SYNUSB_STICK			(1 << 1)
++#define SYNUSB_TOUCHSCREEN		(1 << 2)
++#define SYNUSB_AUXDISPLAY		(1 << 3) /* For cPad */
++#define SYNUSB_COMBO			(1 << 4) /* Composite device (TP + stick) */
++#define SYNUSB_IO_ALWAYS		(1 << 5)
++
++#define USB_DEVICE_SYNAPTICS(prod, kind)		\
++	USB_DEVICE(USB_VENDOR_ID_SYNAPTICS,		\
++		   USB_DEVICE_ID_SYNAPTICS_##prod),	\
++	.driver_info = (kind),
++
++#define SYNUSB_RECV_SIZE	8
++
++#define XMIN_NOMINAL		1472
++#define XMAX_NOMINAL		5472
++#define YMIN_NOMINAL		1408
++#define YMAX_NOMINAL		4448
++
++struct synusb {
++	struct usb_device *udev;
++	struct usb_interface *intf;
++	struct urb *urb;
++	unsigned char *data;
++
++	/* input device related data structures */
++	struct input_dev *input;
++	char name[128];
++	char phys[64];
++
++	/* characteristics of the device */
++	unsigned long flags;
++};
++
++static void synusb_report_buttons(struct synusb *synusb)
++{
++	struct input_dev *input_dev = synusb->input;
++
++	input_report_key(input_dev, BTN_LEFT, synusb->data[1] & 0x04);
++	input_report_key(input_dev, BTN_RIGHT, synusb->data[1] & 0x01);
++	input_report_key(input_dev, BTN_MIDDLE, synusb->data[1] & 0x02);
++}
++
++static void synusb_report_stick(struct synusb *synusb)
++{
++	struct input_dev *input_dev = synusb->input;
++	int x, y;
++	unsigned int pressure;
++
++	pressure = synusb->data[6];
++	x = (s16)(be16_to_cpup((__be16 *)&synusb->data[2]) << 3) >> 7;
++	y = (s16)(be16_to_cpup((__be16 *)&synusb->data[4]) << 3) >> 7;
++
++	if (pressure > 0) {
++		input_report_rel(input_dev, REL_X, x);
++		input_report_rel(input_dev, REL_Y, -y);
++	}
++
++	input_report_abs(input_dev, ABS_PRESSURE, pressure);
++
++	synusb_report_buttons(synusb);
++
++	input_sync(input_dev);
++}
++
++static void synusb_report_touchpad(struct synusb *synusb)
++{
++	struct input_dev *input_dev = synusb->input;
++	unsigned int num_fingers, tool_width;
++	unsigned int x, y;
++	unsigned int pressure, w;
++
++	pressure = synusb->data[6];
++	x = be16_to_cpup((__be16 *)&synusb->data[2]);
++	y = be16_to_cpup((__be16 *)&synusb->data[4]);
++	w = synusb->data[0] & 0x0f;
++
++	if (pressure > 0) {
++		num_fingers = 1;
++		tool_width = 5;
++		switch (w) {
++		case 0 ... 1:
++			num_fingers = 2 + w;
++			break;
++
++		case 2:	                /* pen, pretend it's a finger */
++			break;
++
++		case 4 ... 15:
++			tool_width = w;
++			break;
++		}
++	} else {
++		num_fingers = 0;
++		tool_width = 0;
++	}
++
++	/*
++	 * Post events
++	 * BTN_TOUCH has to be first as mousedev relies on it when doing
++	 * absolute -> relative conversion
++	 */
++
++	if (pressure > 30)
++		input_report_key(input_dev, BTN_TOUCH, 1);
++	if (pressure < 25)
++		input_report_key(input_dev, BTN_TOUCH, 0);
++
++	if (num_fingers > 0) {
++		input_report_abs(input_dev, ABS_X, x);
++		input_report_abs(input_dev, ABS_Y,
++				 YMAX_NOMINAL + YMIN_NOMINAL - y);
++	}
++
++	input_report_abs(input_dev, ABS_PRESSURE, pressure);
++	input_report_abs(input_dev, ABS_TOOL_WIDTH, tool_width);
++
++	input_report_key(input_dev, BTN_TOOL_FINGER, num_fingers == 1);
++	input_report_key(input_dev, BTN_TOOL_DOUBLETAP, num_fingers == 2);
++	input_report_key(input_dev, BTN_TOOL_TRIPLETAP, num_fingers == 3);
++
++	synusb_report_buttons(synusb);
++	if (synusb->flags & SYNUSB_AUXDISPLAY)
++		input_report_key(input_dev, BTN_MIDDLE, synusb->data[1] & 0x08);
++
++	input_sync(input_dev);
++}
++
++static void synusb_irq(struct urb *urb)
++{
++	struct synusb *synusb = urb->context;
++	int error;
++
++	/* Check our status in case we need to bail out early. */
++	switch (urb->status) {
++	case 0:
++		usb_mark_last_busy(synusb->udev);
++		break;
++
++	/* Device went away so don't keep trying to read from it. */
++	case -ECONNRESET:
++	case -ENOENT:
++	case -ESHUTDOWN:
++		return;
++
++	default:
++		goto resubmit;
++		break;
++	}
++
++	if (synusb->flags & SYNUSB_STICK)
++		synusb_report_stick(synusb);
++	else
++		synusb_report_touchpad(synusb);
++
++resubmit:
++	error = usb_submit_urb(urb, GFP_ATOMIC);
++	if (error && error != -EPERM)
++		dev_err(&synusb->intf->dev,
++			"%s - usb_submit_urb failed with result: %d",
++			__func__, error);
++}
++
++static struct usb_endpoint_descriptor *
++synusb_get_in_endpoint(struct usb_host_interface *iface)
++{
++
++	struct usb_endpoint_descriptor *endpoint;
++	int i;
++
++	for (i = 0; i < iface->desc.bNumEndpoints; ++i) {
++		endpoint = &iface->endpoint[i].desc;
++
++		if (usb_endpoint_is_int_in(endpoint)) {
++			/* we found our interrupt in endpoint */
++			return endpoint;
++		}
++	}
++
++	return NULL;
++}
++
++static int synusb_open(struct input_dev *dev)
++{
++	struct synusb *synusb = input_get_drvdata(dev);
++	int retval;
++
++	retval = usb_autopm_get_interface(synusb->intf);
++	if (retval) {
++		dev_err(&synusb->intf->dev,
++			"%s - usb_autopm_get_interface failed, error: %d\n",
++			__func__, retval);
++		return retval;
++	}
++
++	retval = usb_submit_urb(synusb->urb, GFP_KERNEL);
++	if (retval) {
++		dev_err(&synusb->intf->dev,
++			"%s - usb_submit_urb failed, error: %d\n",
++			__func__, retval);
++		retval = -EIO;
++		goto out;
++	}
++
++	synusb->intf->needs_remote_wakeup = 1;
++
++out:
++	usb_autopm_put_interface(synusb->intf);
++	return retval;
++}
++
++static void synusb_close(struct input_dev *dev)
++{
++	struct synusb *synusb = input_get_drvdata(dev);
++	int autopm_error;
++
++	autopm_error = usb_autopm_get_interface(synusb->intf);
++
++	usb_kill_urb(synusb->urb);
++	synusb->intf->needs_remote_wakeup = 0;
++
++	if (!autopm_error)
++		usb_autopm_put_interface(synusb->intf);
++}
++
++static int synusb_probe(struct usb_interface *intf,
++			const struct usb_device_id *id)
++{
++	struct usb_device *udev = interface_to_usbdev(intf);
++	struct usb_endpoint_descriptor *ep;
++	struct synusb *synusb;
++	struct input_dev *input_dev;
++	unsigned int intf_num = intf->cur_altsetting->desc.bInterfaceNumber;
++	unsigned int altsetting = min(intf->num_altsetting, 1U);
++	int error;
++
++	error = usb_set_interface(udev, intf_num, altsetting);
++	if (error) {
++		dev_err(&udev->dev,
++			"Can not set alternate setting to %i, error: %i",
++			altsetting, error);
++		return error;
++	}
++
++	ep = synusb_get_in_endpoint(intf->cur_altsetting);
++	if (!ep)
++		return -ENODEV;
++
++	synusb = kzalloc(sizeof(*synusb), GFP_KERNEL);
++	input_dev = input_allocate_device();
++	if (!synusb || !input_dev) {
++		error = -ENOMEM;
++		goto err_free_mem;
++	}
++
++	synusb->udev = udev;
++	synusb->intf = intf;
++	synusb->input = input_dev;
++
++	synusb->flags = id->driver_info;
++	if (synusb->flags & SYNUSB_COMBO) {
++		/*
++		 * This is a combo device, we need to set proper
++		 * capability, depending on the interface.
++		 */
++		synusb->flags |= intf_num == 1 ?
++					SYNUSB_STICK : SYNUSB_TOUCHPAD;
++	}
++
++	synusb->urb = usb_alloc_urb(0, GFP_KERNEL);
++	if (!synusb->urb) {
++		error = -ENOMEM;
++		goto err_free_mem;
++	}
++
++	synusb->data = usb_alloc_coherent(udev, SYNUSB_RECV_SIZE, GFP_KERNEL,
++					  &synusb->urb->transfer_dma);
++	if (!synusb->data) {
++		error = -ENOMEM;
++		goto err_free_urb;
++	}
++
++	usb_fill_int_urb(synusb->urb, udev,
++			 usb_rcvintpipe(udev, ep->bEndpointAddress),
++			 synusb->data, SYNUSB_RECV_SIZE,
++			 synusb_irq, synusb,
++			 ep->bInterval);
++	synusb->urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
++
++	if (udev->manufacturer)
++		strlcpy(synusb->name, udev->manufacturer,
++			sizeof(synusb->name));
++
++	if (udev->product) {
++		if (udev->manufacturer)
++			strlcat(synusb->name, " ", sizeof(synusb->name));
++		strlcat(synusb->name, udev->product, sizeof(synusb->name));
++	}
++
++	if (!strlen(synusb->name))
++		snprintf(synusb->name, sizeof(synusb->name),
++			 "USB Synaptics Device %04x:%04x",
++			 le16_to_cpu(udev->descriptor.idVendor),
++			 le16_to_cpu(udev->descriptor.idProduct));
++
++	if (synusb->flags & SYNUSB_STICK)
++		strlcat(synusb->name, " (Stick) ", sizeof(synusb->name));
++
++	usb_make_path(udev, synusb->phys, sizeof(synusb->phys));
++	strlcat(synusb->phys, "/input0", sizeof(synusb->phys));
++
++	input_dev->name = synusb->name;
++	input_dev->phys = synusb->phys;
++	usb_to_input_id(udev, &input_dev->id);
++	input_dev->dev.parent = &synusb->intf->dev;
++
++	if (!(synusb->flags & SYNUSB_IO_ALWAYS)) {
++		input_dev->open = synusb_open;
++		input_dev->close = synusb_close;
++	}
++
++	input_set_drvdata(input_dev, synusb);
++
++	__set_bit(EV_ABS, input_dev->evbit);
++	__set_bit(EV_KEY, input_dev->evbit);
++
++	if (synusb->flags & SYNUSB_STICK) {
++		__set_bit(EV_REL, input_dev->evbit);
++		__set_bit(REL_X, input_dev->relbit);
++		__set_bit(REL_Y, input_dev->relbit);
++		input_set_abs_params(input_dev, ABS_PRESSURE, 0, 127, 0, 0);
++	} else {
++		input_set_abs_params(input_dev, ABS_X,
++				     XMIN_NOMINAL, XMAX_NOMINAL, 0, 0);
++		input_set_abs_params(input_dev, ABS_Y,
++				     YMIN_NOMINAL, YMAX_NOMINAL, 0, 0);
++		input_set_abs_params(input_dev, ABS_PRESSURE, 0, 255, 0, 0);
++		input_set_abs_params(input_dev, ABS_TOOL_WIDTH, 0, 15, 0, 0);
++		__set_bit(BTN_TOUCH, input_dev->keybit);
++		__set_bit(BTN_TOOL_FINGER, input_dev->keybit);
++		__set_bit(BTN_TOOL_DOUBLETAP, input_dev->keybit);
++		__set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit);
++	}
++
++	__set_bit(BTN_LEFT, input_dev->keybit);
++	__set_bit(BTN_RIGHT, input_dev->keybit);
++	__set_bit(BTN_MIDDLE, input_dev->keybit);
++
++	usb_set_intfdata(intf, synusb);
++
++	if (synusb->flags & SYNUSB_IO_ALWAYS) {
++		error = synusb_open(input_dev);
++		if (error)
++			goto err_free_dma;
++	}
++
++	error = input_register_device(input_dev);
++	if (error) {
++		dev_err(&udev->dev,
++			"Failed to register input device, error %d\n",
++			error);
++		goto err_stop_io;
++	}
++
++	return 0;
++
++err_stop_io:
++	if (synusb->flags & SYNUSB_IO_ALWAYS)
++		synusb_close(synusb->input);
++err_free_dma:
++	usb_free_coherent(udev, SYNUSB_RECV_SIZE, synusb->data,
++			  synusb->urb->transfer_dma);
++err_free_urb:
++	usb_free_urb(synusb->urb);
++err_free_mem:
++	input_free_device(input_dev);
++	kfree(synusb);
++	usb_set_intfdata(intf, NULL);
++
++	return error;
++}
++
++static void synusb_disconnect(struct usb_interface *intf)
++{
++	struct synusb *synusb = usb_get_intfdata(intf);
++	struct usb_device *udev = interface_to_usbdev(intf);
++
++	if (synusb->flags & SYNUSB_IO_ALWAYS)
++		synusb_close(synusb->input);
++
++	input_unregister_device(synusb->input);
++
++	usb_free_coherent(udev, SYNUSB_RECV_SIZE, synusb->data,
++			  synusb->urb->transfer_dma);
++	usb_free_urb(synusb->urb);
++	kfree(synusb);
++
++	usb_set_intfdata(intf, NULL);
++}
++
++static int synusb_suspend(struct usb_interface *intf, pm_message_t message)
++{
++	struct synusb *synusb = usb_get_intfdata(intf);
++	struct input_dev *input_dev = synusb->input;
++
++	mutex_lock(&input_dev->mutex);
++	usb_kill_urb(synusb->urb);
++	mutex_unlock(&input_dev->mutex);
++
++	return 0;
++}
++
++static int synusb_resume(struct usb_interface *intf)
++{
++	struct synusb *synusb = usb_get_intfdata(intf);
++	struct input_dev *input_dev = synusb->input;
++	int retval = 0;
++
++	mutex_lock(&input_dev->mutex);
++
++	if ((input_dev->users || (synusb->flags & SYNUSB_IO_ALWAYS)) &&
++	    usb_submit_urb(synusb->urb, GFP_NOIO) < 0) {
++		retval = -EIO;
++	}
++
++	mutex_unlock(&input_dev->mutex);
++
++	return retval;
++}
++
++static int synusb_pre_reset(struct usb_interface *intf)
++{
++	struct synusb *synusb = usb_get_intfdata(intf);
++	struct input_dev *input_dev = synusb->input;
++
++	mutex_lock(&input_dev->mutex);
++	usb_kill_urb(synusb->urb);
++
++	return 0;
++}
++
++static int synusb_post_reset(struct usb_interface *intf)
++{
++	struct synusb *synusb = usb_get_intfdata(intf);
++	struct input_dev *input_dev = synusb->input;
++	int retval = 0;
++
++	if ((input_dev->users || (synusb->flags & SYNUSB_IO_ALWAYS)) &&
++	    usb_submit_urb(synusb->urb, GFP_NOIO) < 0) {
++		retval = -EIO;
++	}
++
++	mutex_unlock(&input_dev->mutex);
++
++	return retval;
++}
++
++static int synusb_reset_resume(struct usb_interface *intf)
++{
++	return synusb_resume(intf);
++}
++
++static struct usb_device_id synusb_idtable[] = {
++	{ USB_DEVICE_SYNAPTICS(TP, SYNUSB_TOUCHPAD) },
++	{ USB_DEVICE_SYNAPTICS(INT_TP, SYNUSB_TOUCHPAD) },
++	{ USB_DEVICE_SYNAPTICS(CPAD,
++		SYNUSB_TOUCHPAD | SYNUSB_AUXDISPLAY | SYNUSB_IO_ALWAYS) },
++	{ USB_DEVICE_SYNAPTICS(TS, SYNUSB_TOUCHSCREEN) },
++	{ USB_DEVICE_SYNAPTICS(STICK, SYNUSB_STICK) },
++	{ USB_DEVICE_SYNAPTICS(WP, SYNUSB_TOUCHPAD) },
++	{ USB_DEVICE_SYNAPTICS(COMP_TP, SYNUSB_COMBO) },
++	{ USB_DEVICE_SYNAPTICS(WTP, SYNUSB_TOUCHPAD) },
++	{ USB_DEVICE_SYNAPTICS(DPAD, SYNUSB_TOUCHPAD) },
++	{ }
++};
++MODULE_DEVICE_TABLE(usb, synusb_idtable);
++
++static struct usb_driver synusb_driver = {
++	.name		= "synaptics_usb",
++	.probe		= synusb_probe,
++	.disconnect	= synusb_disconnect,
++	.id_table	= synusb_idtable,
++	.suspend	= synusb_suspend,
++	.resume		= synusb_resume,
++	.pre_reset	= synusb_pre_reset,
++	.post_reset	= synusb_post_reset,
++	.reset_resume	= synusb_reset_resume,
++	.supports_autosuspend = 1,
++};
++
++static int __init synusb_init(void)
++{
++	return usb_register(&synusb_driver);
++}
++
++static void __exit synusb_exit(void)
++{
++	usb_deregister(&synusb_driver);
++}
++
++module_init(synusb_init);
++module_exit(synusb_exit);
++
++MODULE_AUTHOR("Rob Miller <rob at inpharmatica.co.uk>, "
++              "Ron Lee <ron at debian.org>, "
++              "Jan Steinhoff <cpad at jan-steinhoff.de>");
++MODULE_DESCRIPTION("Synaptics USB device driver");
++MODULE_LICENSE("GPL");

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0001-Revert-workqueue-skip-nr_running-sanity-check-in-wor.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0001-Revert-workqueue-skip-nr_running-sanity-check-in-wor.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0001-Revert-workqueue-skip-nr_running-sanity-check-in-wor.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0001-Revert-workqueue-skip-nr_running-sanity-check-in-wor.patch)
@@ -0,0 +1,38 @@
+From 4c22f54ece56578f383a339303b225f12c174d7b Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <srostedt at redhat.com>
+Date: Wed, 6 Jun 2012 17:07:34 -0400
+Subject: [PATCH 001/271] Revert "workqueue: skip nr_running sanity check in
+ worker_enter_idle() if trustee is active"
+
+This reverts commit 5d79c6f64a904afc92a329f80abe693e3ae105fe.
+
+Reported-by: Ibrahim Umar <iambaim at gmail.com>
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+---
+ kernel/workqueue.c |    9 ++-------
+ 1 file changed, 2 insertions(+), 7 deletions(-)
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 7947e16..bb425b1 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1215,13 +1215,8 @@ static void worker_enter_idle(struct worker *worker)
+ 	} else
+ 		wake_up_all(&gcwq->trustee_wait);
+ 
+-	/*
+-	 * Sanity check nr_running.  Because trustee releases gcwq->lock
+-	 * between setting %WORKER_ROGUE and zapping nr_running, the
+-	 * warning may trigger spuriously.  Check iff trustee is idle.
+-	 */
+-	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
+-		     gcwq->nr_workers == gcwq->nr_idle &&
++	/* sanity check nr_running */
++	WARN_ON_ONCE(gcwq->nr_workers == gcwq->nr_idle &&
+ 		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0002-x86-Call-idle-notifier-after-irq_enter.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0002-x86-Call-idle-notifier-after-irq_enter.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0002-x86-Call-idle-notifier-after-irq_enter.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0002-x86-Call-idle-notifier-after-irq_enter.patch)
@@ -0,0 +1,155 @@
+From 900585567b315bf91186129c62f925f889e01697 Mon Sep 17 00:00:00 2001
+From: Frederic Weisbecker <fweisbec at gmail.com>
+Date: Mon, 26 Sep 2011 12:19:11 +0200
+Subject: [PATCH 002/271] x86: Call idle notifier after irq_enter()
+
+Interrupts notify the idle exit state before calling irq_enter(). But
+the notifier code calls rcu_read_lock() and this is not allowed while
+rcu is in an extended quiescent state. We need to wait for
+rcu_irq_enter() to be called before doing so; otherwise this results in
+a grumpy RCU:
+
+[    0.099991] WARNING: at include/linux/rcupdate.h:194 __atomic_notifier_call_chain+0xd2/0x110()
+[    0.099991] Hardware name: AMD690VM-FMH
+[    0.099991] Modules linked in:
+[    0.099991] Pid: 0, comm: swapper Not tainted 3.0.0-rc6+ #255
+[    0.099991] Call Trace:
+[    0.099991]  <IRQ>  [<ffffffff81051c8a>] warn_slowpath_common+0x7a/0xb0
+[    0.099991]  [<ffffffff81051cd5>] warn_slowpath_null+0x15/0x20
+[    0.099991]  [<ffffffff817d6fa2>] __atomic_notifier_call_chain+0xd2/0x110
+[    0.099991]  [<ffffffff817d6ff1>] atomic_notifier_call_chain+0x11/0x20
+[    0.099991]  [<ffffffff81001873>] exit_idle+0x43/0x50
+[    0.099991]  [<ffffffff81020439>] smp_apic_timer_interrupt+0x39/0xa0
+[    0.099991]  [<ffffffff817da253>] apic_timer_interrupt+0x13/0x20
+[    0.099991]  <EOI>  [<ffffffff8100ae67>] ? default_idle+0xa7/0x350
+[    0.099991]  [<ffffffff8100ae65>] ? default_idle+0xa5/0x350
+[    0.099991]  [<ffffffff8100b19b>] amd_e400_idle+0x8b/0x110
+[    0.099991]  [<ffffffff810cb01f>] ? rcu_enter_nohz+0x8f/0x160
+[    0.099991]  [<ffffffff810019a0>] cpu_idle+0xb0/0x110
+[    0.099991]  [<ffffffff817a7505>] rest_init+0xe5/0x140
+[    0.099991]  [<ffffffff817a7468>] ? rest_init+0x48/0x140
+[    0.099991]  [<ffffffff81cc5ca3>] start_kernel+0x3d1/0x3dc
+[    0.099991]  [<ffffffff81cc5321>] x86_64_start_reservations+0x131/0x135
+[    0.099991]  [<ffffffff81cc5412>] x86_64_start_kernel+0xed/0xf4
+
+Signed-off-by: Frederic Weisbecker <fweisbec at gmail.com>
+Link: http://lkml.kernel.org/r/20110929194047.GA10247@linux.vnet.ibm.com
+Cc: Ingo Molnar <mingo at redhat.com>
+Cc: H. Peter Anvin <hpa at zytor.com>
+Cc: Andy Henroid <andrew.d.henroid at intel.com>
+Signed-off-by: Paul E. McKenney <paulmck at linux.vnet.ibm.com>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kernel/apic/apic.c              |    6 +++---
+ arch/x86/kernel/apic/io_apic.c           |    2 +-
+ arch/x86/kernel/cpu/mcheck/therm_throt.c |    2 +-
+ arch/x86/kernel/cpu/mcheck/threshold.c   |    2 +-
+ arch/x86/kernel/irq.c                    |    6 +++---
+ 5 files changed, 9 insertions(+), 9 deletions(-)
+
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index c4e3581..c2beffe 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -876,8 +876,8 @@ void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs)
+ 	 * Besides, if we don't timer interrupts ignore the global
+ 	 * interrupt lock, which is the WrongThing (tm) to do.
+ 	 */
+-	exit_idle();
+ 	irq_enter();
++	exit_idle();
+ 	local_apic_timer_interrupt();
+ 	irq_exit();
+ 
+@@ -1813,8 +1813,8 @@ void smp_spurious_interrupt(struct pt_regs *regs)
+ {
+ 	u32 v;
+ 
+-	exit_idle();
+ 	irq_enter();
++	exit_idle();
+ 	/*
+ 	 * Check if this really is a spurious interrupt and ACK it
+ 	 * if it is a vectored one.  Just in case...
+@@ -1850,8 +1850,8 @@ void smp_error_interrupt(struct pt_regs *regs)
+ 		"Illegal register address",	/* APIC Error Bit 7 */
+ 	};
+ 
+-	exit_idle();
+ 	irq_enter();
++	exit_idle();
+ 	/* First tickle the hardware, only then report what went on. -- REW */
+ 	v0 = apic_read(APIC_ESR);
+ 	apic_write(APIC_ESR, 0);
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 6d939d7..8980555 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -2421,8 +2421,8 @@ asmlinkage void smp_irq_move_cleanup_interrupt(void)
+ 	unsigned vector, me;
+ 
+ 	ack_APIC_irq();
+-	exit_idle();
+ 	irq_enter();
++	exit_idle();
+ 
+ 	me = smp_processor_id();
+ 	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
+diff --git a/arch/x86/kernel/cpu/mcheck/therm_throt.c b/arch/x86/kernel/cpu/mcheck/therm_throt.c
+index 787e06c..ce21561 100644
+--- a/arch/x86/kernel/cpu/mcheck/therm_throt.c
++++ b/arch/x86/kernel/cpu/mcheck/therm_throt.c
+@@ -397,8 +397,8 @@ static void (*smp_thermal_vector)(void) = unexpected_thermal_interrupt;
+ 
+ asmlinkage void smp_thermal_interrupt(struct pt_regs *regs)
+ {
+-	exit_idle();
+ 	irq_enter();
++	exit_idle();
+ 	inc_irq_stat(irq_thermal_count);
+ 	smp_thermal_vector();
+ 	irq_exit();
+diff --git a/arch/x86/kernel/cpu/mcheck/threshold.c b/arch/x86/kernel/cpu/mcheck/threshold.c
+index d746df2..aa578ca 100644
+--- a/arch/x86/kernel/cpu/mcheck/threshold.c
++++ b/arch/x86/kernel/cpu/mcheck/threshold.c
+@@ -19,8 +19,8 @@ void (*mce_threshold_vector)(void) = default_threshold_interrupt;
+ 
+ asmlinkage void smp_threshold_interrupt(void)
+ {
+-	exit_idle();
+ 	irq_enter();
++	exit_idle();
+ 	inc_irq_stat(irq_threshold_count);
+ 	mce_threshold_vector();
+ 	irq_exit();
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 429e0c9..5d31e5b 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -181,8 +181,8 @@ unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
+ 	unsigned vector = ~regs->orig_ax;
+ 	unsigned irq;
+ 
+-	exit_idle();
+ 	irq_enter();
++	exit_idle();
+ 
+ 	irq = __this_cpu_read(vector_irq[vector]);
+ 
+@@ -209,10 +209,10 @@ void smp_x86_platform_ipi(struct pt_regs *regs)
+ 
+ 	ack_APIC_irq();
+ 
+-	exit_idle();
+-
+ 	irq_enter();
+ 
++	exit_idle();
++
+ 	inc_irq_stat(x86_platform_ipis);
+ 
+ 	if (x86_platform_ipi_callback)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0003-slab-lockdep-Annotate-all-slab-caches.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0003-slab-lockdep-Annotate-all-slab-caches.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0003-slab-lockdep-Annotate-all-slab-caches.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0003-slab-lockdep-Annotate-all-slab-caches.patch)
@@ -0,0 +1,137 @@
+From 9046045fb37e8eb8a4d897ff2fec5a7a7dc4d72c Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Mon, 28 Nov 2011 19:51:51 +0100
+Subject: [PATCH 003/271] slab, lockdep: Annotate all slab caches
+
+Currently we only annotate the kmalloc caches, annotate all of them.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Cc: Hans Schillstrom <hans at schillstrom.com>
+Cc: Christoph Lameter <cl at gentwo.org>
+Cc: Pekka Enberg <penberg at cs.helsinki.fi>
+Cc: Matt Mackall <mpm at selenic.com>
+Cc: Sitsofe Wheeler <sitsofe at yahoo.com>
+Cc: linux-mm at kvack.org
+Cc: David Rientjes <rientjes at google.com>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Link: http://lkml.kernel.org/n/tip-10bey2cgpcvtbdkgigaoab8w@git.kernel.org
+---
+ mm/slab.c |   52 ++++++++++++++++++++++++++++------------------------
+ 1 file changed, 28 insertions(+), 24 deletions(-)
+
+diff --git a/mm/slab.c b/mm/slab.c
+index 83311c9a..b76905e 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -607,6 +607,12 @@ int slab_is_available(void)
+ 	return g_cpucache_up >= EARLY;
+ }
+ 
++/*
++ * Guard access to the cache-chain.
++ */
++static DEFINE_MUTEX(cache_chain_mutex);
++static struct list_head cache_chain;
++
+ #ifdef CONFIG_LOCKDEP
+ 
+ /*
+@@ -668,38 +674,41 @@ static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
+ 		slab_set_debugobj_lock_classes_node(cachep, node);
+ }
+ 
+-static void init_node_lock_keys(int q)
++static void init_lock_keys(struct kmem_cache *cachep, int node)
+ {
+-	struct cache_sizes *s = malloc_sizes;
++	struct kmem_list3 *l3;
+ 
+ 	if (g_cpucache_up < LATE)
+ 		return;
+ 
+-	for (s = malloc_sizes; s->cs_size != ULONG_MAX; s++) {
+-		struct kmem_list3 *l3;
++	l3 = cachep->nodelists[node];
++	if (!l3 || OFF_SLAB(cachep))
++		return;
+ 
+-		l3 = s->cs_cachep->nodelists[q];
+-		if (!l3 || OFF_SLAB(s->cs_cachep))
+-			continue;
++	slab_set_lock_classes(cachep, &on_slab_l3_key, &on_slab_alc_key, node);
++}
+ 
+-		slab_set_lock_classes(s->cs_cachep, &on_slab_l3_key,
+-				&on_slab_alc_key, q);
+-	}
++static void init_node_lock_keys(int node)
++{
++	struct kmem_cache *cachep;
++
++	list_for_each_entry(cachep, &cache_chain, next)
++		init_lock_keys(cachep, node);
+ }
+ 
+-static inline void init_lock_keys(void)
++static inline void init_cachep_lock_keys(struct kmem_cache *cachep)
+ {
+ 	int node;
+ 
+ 	for_each_node(node)
+-		init_node_lock_keys(node);
++		init_lock_keys(cachep, node);
+ }
+ #else
+-static void init_node_lock_keys(int q)
++static void init_node_lock_keys(int node)
+ {
+ }
+ 
+-static inline void init_lock_keys(void)
++static void init_cachep_lock_keys(struct kmem_cache *cachep)
+ {
+ }
+ 
+@@ -712,12 +721,6 @@ static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
+ }
+ #endif
+ 
+-/*
+- * Guard access to the cache-chain.
+- */
+-static DEFINE_MUTEX(cache_chain_mutex);
+-static struct list_head cache_chain;
+-
+ static DEFINE_PER_CPU(struct delayed_work, slab_reap_work);
+ 
+ static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
+@@ -1669,14 +1672,13 @@ void __init kmem_cache_init_late(void)
+ 
+ 	g_cpucache_up = LATE;
+ 
+-	/* Annotate slab for lockdep -- annotate the malloc caches */
+-	init_lock_keys();
+-
+ 	/* 6) resize the head arrays to their final sizes */
+ 	mutex_lock(&cache_chain_mutex);
+-	list_for_each_entry(cachep, &cache_chain, next)
++	list_for_each_entry(cachep, &cache_chain, next) {
++		init_cachep_lock_keys(cachep);
+ 		if (enable_cpucache(cachep, GFP_NOWAIT))
+ 			BUG();
++	}
+ 	mutex_unlock(&cache_chain_mutex);
+ 
+ 	/* Done! */
+@@ -2479,6 +2481,8 @@ kmem_cache_create (const char *name, size_t size, size_t align,
+ 		slab_set_debugobj_lock_classes(cachep);
+ 	}
+ 
++	init_cachep_lock_keys(cachep);
++
+ 	/* cache setup completed, link it into the list */
+ 	list_add(&cachep->next, &cache_chain);
+ oops:
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0004-x86-kprobes-Remove-remove-bogus-preempt_enable.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0004-x86-kprobes-Remove-remove-bogus-preempt_enable.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0004-x86-kprobes-Remove-remove-bogus-preempt_enable.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0004-x86-kprobes-Remove-remove-bogus-preempt_enable.patch)
@@ -0,0 +1,33 @@
+From 3695633b62fc9f84b159e9d6012b864a0c7ef1f0 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 17 Mar 2011 11:02:15 +0100
+Subject: [PATCH 004/271] x86: kprobes: Remove remove bogus preempt_enable
+
+The CONFIG_PREEMPT=n section of setup_singlestep() contains:
+
+    preempt_enable_no_resched();
+
+That's bogus as it is asymmetric - no preempt_disable() - and it just
+never blew up because preempt_enable_no_resched() is a NOP when
+CONFIG_PREEMPT=n. Remove it.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kernel/kprobes.c |    1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
+index 7da647d..5604455 100644
+--- a/arch/x86/kernel/kprobes.c
++++ b/arch/x86/kernel/kprobes.c
+@@ -478,7 +478,6 @@ static void __kprobes setup_singlestep(struct kprobe *p, struct pt_regs *regs,
+ 		 * stepping.
+ 		 */
+ 		regs->ip = (unsigned long)p->ainsn.insn;
+-		preempt_enable_no_resched();
+ 		return;
+ 	}
+ #endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0005-x86-hpet-Disable-MSI-on-Lenovo-W510.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0005-x86-hpet-Disable-MSI-on-Lenovo-W510.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0005-x86-hpet-Disable-MSI-on-Lenovo-W510.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0005-x86-hpet-Disable-MSI-on-Lenovo-W510.patch)
@@ -0,0 +1,70 @@
+From 2d135294fadff1032a91fdddeb9873411e52d183 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 30 Sep 2011 20:03:37 +0200
+Subject: [PATCH 005/271] x86: hpet: Disable MSI on Lenovo W510
+
+MSI based per cpu timers lose interrupts when intel_idle() is enabled
+- independent of the c-state. With idle=poll the problem cannot be
+observed. We have no idea yet, whether this is a W510 specific issue
+or a general chipset oddity. Blacklist the known problem machine.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kernel/hpet.c |   27 +++++++++++++++++++++++++++
+ 1 file changed, 27 insertions(+)
+
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 1bb0bf4..d86b821 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -9,6 +9,7 @@
+ #include <linux/slab.h>
+ #include <linux/hpet.h>
+ #include <linux/init.h>
++#include <linux/dmi.h>
+ #include <linux/cpu.h>
+ #include <linux/pm.h>
+ #include <linux/io.h>
+@@ -568,6 +569,30 @@ static void init_one_hpet_msi_clockevent(struct hpet_dev *hdev, int cpu)
+ #define RESERVE_TIMERS 0
+ #endif
+ 
++static int __init dmi_disable_hpet_msi(const struct dmi_system_id *d)
++{
++	hpet_msi_disable = 1;
++	return 0;
++}
++
++static struct dmi_system_id __initdata dmi_hpet_table[] = {
++	/*
++	 * MSI based per cpu timers lose interrupts when intel_idle()
++	 * is enabled - independent of the c-state. With idle=poll the
++	 * problem cannot be observed. We have no idea yet, whether
++	 * this is a W510 specific issue or a general chipset oddity.
++	 */
++	{
++	 .callback = dmi_disable_hpet_msi,
++	 .ident = "Lenovo W510",
++	 .matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++		     DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W510"),
++		     },
++	 },
++	{}
++};
++
+ static void hpet_msi_capability_lookup(unsigned int start_timer)
+ {
+ 	unsigned int id;
+@@ -575,6 +600,8 @@ static void hpet_msi_capability_lookup(unsigned int start_timer)
+ 	unsigned int num_timers_used = 0;
+ 	int i;
+ 
++	dmi_check_system(dmi_hpet_table);
++
+ 	if (hpet_msi_disable)
+ 		return;
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0006-block-Shorten-interrupt-disabled-regions.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0006-block-Shorten-interrupt-disabled-regions.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0006-block-Shorten-interrupt-disabled-regions.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0006-block-Shorten-interrupt-disabled-regions.patch)
@@ -0,0 +1,121 @@
+From 40a6cbf1c96ee87bba70d50f356b37983c3902ff Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 22 Jun 2011 19:47:02 +0200
+Subject: [PATCH 006/271] block: Shorten interrupt disabled regions
+
+Moving the blk_sched_flush_plug() call out of the interrupt/preempt
+disabled region in the scheduler allows us to replace
+local_irq_save/restore(flags) by local_irq_disable/enable() in
+blk_flush_plug().
+
+Now instead of doing this we disable interrupts explicitly when we
+lock the request_queue and reenable them when we drop the lock. That
+allows interrupts to be handled when the plug list contains requests
+for more than one queue.
+
+Aside of that this change makes the scope of the irq disabled region
+more obvious. The current code confused the hell out of me when
+looking at:
+
+ local_irq_save(flags);
+   spin_lock(q->queue_lock);
+   ...
+   queue_unplugged(q...);
+     scsi_request_fn();
+       spin_unlock(q->queue_lock);
+       spin_lock(shost->host_lock);
+       spin_unlock_irq(shost->host_lock);
+
+-------------------^^^ ????
+
+       spin_lock_irq(q->queue_lock);
+       spin_unlock(q->lock);
+ local_irq_restore(flags);
+
+Also add a comment to __blk_run_queue() documenting that
+q->request_fn() can drop q->queue_lock and reenable interrupts, but
+must return with q->queue_lock held and interrupts disabled.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Tejun Heo <tj at kernel.org>
+Cc: Jens Axboe <axboe at kernel.dk>
+Cc: Linus Torvalds <torvalds at linux-foundation.org>
+Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ block/blk-core.c |   20 ++++++++------------
+ 1 file changed, 8 insertions(+), 12 deletions(-)
+
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 15de223..7366ad4 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -300,7 +300,11 @@ void __blk_run_queue(struct request_queue *q)
+ {
+ 	if (unlikely(blk_queue_stopped(q)))
+ 		return;
+-
++	/*
++	 * q->request_fn() can drop q->queue_lock and reenable
++	 * interrupts, but must return with q->queue_lock held and
++	 * interrupts disabled.
++	 */
+ 	q->request_fn(q);
+ }
+ EXPORT_SYMBOL(__blk_run_queue);
+@@ -2745,11 +2749,11 @@ static void queue_unplugged(struct request_queue *q, unsigned int depth,
+ 	 * this lock).
+ 	 */
+ 	if (from_schedule) {
+-		spin_unlock(q->queue_lock);
++		spin_unlock_irq(q->queue_lock);
+ 		blk_run_queue_async(q);
+ 	} else {
+ 		__blk_run_queue(q);
+-		spin_unlock(q->queue_lock);
++		spin_unlock_irq(q->queue_lock);
+ 	}
+ 
+ }
+@@ -2775,7 +2779,6 @@ static void flush_plug_callbacks(struct blk_plug *plug)
+ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ {
+ 	struct request_queue *q;
+-	unsigned long flags;
+ 	struct request *rq;
+ 	LIST_HEAD(list);
+ 	unsigned int depth;
+@@ -2796,11 +2799,6 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 	q = NULL;
+ 	depth = 0;
+ 
+-	/*
+-	 * Save and disable interrupts here, to avoid doing it for every
+-	 * queue lock we have to take.
+-	 */
+-	local_irq_save(flags);
+ 	while (!list_empty(&list)) {
+ 		rq = list_entry_rq(list.next);
+ 		list_del_init(&rq->queuelist);
+@@ -2813,7 +2811,7 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 				queue_unplugged(q, depth, from_schedule);
+ 			q = rq->q;
+ 			depth = 0;
+-			spin_lock(q->queue_lock);
++			spin_lock_irq(q->queue_lock);
+ 		}
+ 		/*
+ 		 * rq is already accounted, so use raw insert
+@@ -2831,8 +2829,6 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 	 */
+ 	if (q)
+ 		queue_unplugged(q, depth, from_schedule);
+-
+-	local_irq_restore(flags);
+ }
+ 
+ void blk_finish_plug(struct blk_plug *plug)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0007-sched-Distangle-worker-accounting-from-rq-3Elock.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0007-sched-Distangle-worker-accounting-from-rq-3Elock.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0007-sched-Distangle-worker-accounting-from-rq-3Elock.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0007-sched-Distangle-worker-accounting-from-rq-3Elock.patch)
@@ -0,0 +1,266 @@
+From 900d25e3ff2c56a0d9c1d3261ea34fa0c0e7a5e5 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 22 Jun 2011 19:47:03 +0200
+Subject: [PATCH 007/271] sched: Distangle worker accounting from rq-%3Elock
+
+The worker accounting for cpu bound workers is plugged into the core
+scheduler code and the wakeup code. This is not a hard requirement and
+can be avoided by keeping track of the state in the workqueue code
+itself.
+
+Keep track of the sleeping state in the worker itself and call the
+notifier before entering the core scheduler. There might be false
+positives when the task is woken between that call and actually
+scheduling, but that's not really different from scheduling and being
+woken immediately after switching away. There is also no harm from
+updating nr_running when the task returns from scheduling instead of
+accounting it in the wakeup code.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Tejun Heo <tj at kernel.org>
+Cc: Jens Axboe <axboe at kernel.dk>
+Cc: Linus Torvalds <torvalds at linux-foundation.org>
+Link: http://lkml.kernel.org/r/20110622174919.135236139@linutronix.de
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c           |   66 +++++++++++----------------------------------
+ kernel/workqueue.c       |   67 +++++++++++++++++++++-------------------------
+ kernel/workqueue_sched.h |    5 ++--
+ 3 files changed, 47 insertions(+), 91 deletions(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 299f55c..1ae1cab 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -2644,10 +2644,6 @@ static void ttwu_activate(struct rq *rq, struct task_struct *p, int en_flags)
+ {
+ 	activate_task(rq, p, en_flags);
+ 	p->on_rq = 1;
+-
+-	/* if a worker is waking up, notify workqueue */
+-	if (p->flags & PF_WQ_WORKER)
+-		wq_worker_waking_up(p, cpu_of(rq));
+ }
+ 
+ /*
+@@ -2882,40 +2878,6 @@ out:
+ }
+ 
+ /**
+- * try_to_wake_up_local - try to wake up a local task with rq lock held
+- * @p: the thread to be awakened
+- *
+- * Put @p on the run-queue if it's not already there. The caller must
+- * ensure that this_rq() is locked, @p is bound to this_rq() and not
+- * the current task.
+- */
+-static void try_to_wake_up_local(struct task_struct *p)
+-{
+-	struct rq *rq = task_rq(p);
+-
+-	BUG_ON(rq != this_rq());
+-	BUG_ON(p == current);
+-	lockdep_assert_held(&rq->lock);
+-
+-	if (!raw_spin_trylock(&p->pi_lock)) {
+-		raw_spin_unlock(&rq->lock);
+-		raw_spin_lock(&p->pi_lock);
+-		raw_spin_lock(&rq->lock);
+-	}
+-
+-	if (!(p->state & TASK_NORMAL))
+-		goto out;
+-
+-	if (!p->on_rq)
+-		ttwu_activate(rq, p, ENQUEUE_WAKEUP);
+-
+-	ttwu_do_wakeup(rq, p, 0);
+-	ttwu_stat(p, smp_processor_id(), 0);
+-out:
+-	raw_spin_unlock(&p->pi_lock);
+-}
+-
+-/**
+  * wake_up_process - Wake up a specific process
+  * @p: The process to be woken up.
+  *
+@@ -4419,19 +4381,6 @@ need_resched:
+ 		} else {
+ 			deactivate_task(rq, prev, DEQUEUE_SLEEP);
+ 			prev->on_rq = 0;
+-
+-			/*
+-			 * If a worker went to sleep, notify and ask workqueue
+-			 * whether it wants to wake up a task to maintain
+-			 * concurrency.
+-			 */
+-			if (prev->flags & PF_WQ_WORKER) {
+-				struct task_struct *to_wakeup;
+-
+-				to_wakeup = wq_worker_sleeping(prev, cpu);
+-				if (to_wakeup)
+-					try_to_wake_up_local(to_wakeup);
+-			}
+ 		}
+ 		switch_count = &prev->nvcsw;
+ 	}
+@@ -4474,6 +4423,14 @@ static inline void sched_submit_work(struct task_struct *tsk)
+ {
+ 	if (!tsk->state)
+ 		return;
++
++	/*
++	 * If a worker went to sleep, notify and ask workqueue whether
++	 * it wants to wake up a task to maintain concurrency.
++	 */
++	if (tsk->flags & PF_WQ_WORKER)
++		wq_worker_sleeping(tsk);
++
+ 	/*
+ 	 * If we are going to sleep and we have plugged IO queued,
+ 	 * make sure to submit it to avoid deadlocks.
+@@ -4482,12 +4439,19 @@ static inline void sched_submit_work(struct task_struct *tsk)
+ 		blk_schedule_flush_plug(tsk);
+ }
+ 
++static inline void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & PF_WQ_WORKER)
++		wq_worker_running(tsk);
++}
++
+ asmlinkage void __sched schedule(void)
+ {
+ 	struct task_struct *tsk = current;
+ 
+ 	sched_submit_work(tsk);
+ 	__schedule();
++	sched_update_worker(tsk);
+ }
+ EXPORT_SYMBOL(schedule);
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index bb425b1..4b4421d 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -137,6 +137,7 @@ struct worker {
+ 	unsigned int		flags;		/* X: flags */
+ 	int			id;		/* I: worker id */
+ 	struct work_struct	rebind_work;	/* L: rebind worker to cpu */
++	int			sleeping;	/* None */
+ };
+ 
+ /*
+@@ -660,66 +661,58 @@ static void wake_up_worker(struct global_cwq *gcwq)
+ }
+ 
+ /**
+- * wq_worker_waking_up - a worker is waking up
+- * @task: task waking up
+- * @cpu: CPU @task is waking up to
++ * wq_worker_running - a worker is running again
++ * @task: task returning from sleep
+  *
+- * This function is called during try_to_wake_up() when a worker is
+- * being awoken.
+- *
+- * CONTEXT:
+- * spin_lock_irq(rq->lock)
++ * This function is called when a worker returns from schedule()
+  */
+-void wq_worker_waking_up(struct task_struct *task, unsigned int cpu)
++void wq_worker_running(struct task_struct *task)
+ {
+ 	struct worker *worker = kthread_data(task);
+ 
++	if (!worker->sleeping)
++		return;
+ 	if (!(worker->flags & WORKER_NOT_RUNNING))
+-		atomic_inc(get_gcwq_nr_running(cpu));
++		atomic_inc(get_gcwq_nr_running(smp_processor_id()));
++	worker->sleeping = 0;
+ }
+ 
+ /**
+  * wq_worker_sleeping - a worker is going to sleep
+  * @task: task going to sleep
+- * @cpu: CPU in question, must be the current CPU number
+- *
+- * This function is called during schedule() when a busy worker is
+- * going to sleep.  Worker on the same cpu can be woken up by
+- * returning pointer to its task.
+- *
+- * CONTEXT:
+- * spin_lock_irq(rq->lock)
+  *
+- * RETURNS:
+- * Worker task on @cpu to wake up, %NULL if none.
++ * This function is called from schedule() when a busy worker is
++ * going to sleep.
+  */
+-struct task_struct *wq_worker_sleeping(struct task_struct *task,
+-				       unsigned int cpu)
++void wq_worker_sleeping(struct task_struct *task)
+ {
+-	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
+-	struct global_cwq *gcwq = get_gcwq(cpu);
+-	atomic_t *nr_running = get_gcwq_nr_running(cpu);
++	struct worker *worker = kthread_data(task);
++	struct global_cwq *gcwq;
++	int cpu;
+ 
+ 	if (worker->flags & WORKER_NOT_RUNNING)
+-		return NULL;
++		return;
++
++	if (WARN_ON_ONCE(worker->sleeping))
++		return;
+ 
+-	/* this can only happen on the local cpu */
+-	BUG_ON(cpu != raw_smp_processor_id());
++	worker->sleeping = 1;
+ 
++	cpu = smp_processor_id();
++	gcwq = get_gcwq(cpu);
++	spin_lock_irq(&gcwq->lock);
+ 	/*
+ 	 * The counterpart of the following dec_and_test, implied mb,
+ 	 * worklist not empty test sequence is in insert_work().
+ 	 * Please read comment there.
+-	 *
+-	 * NOT_RUNNING is clear.  This means that trustee is not in
+-	 * charge and we're running on the local cpu w/ rq lock held
+-	 * and preemption disabled, which in turn means that none else
+-	 * could be manipulating idle_list, so dereferencing idle_list
+-	 * without gcwq lock is safe.
+ 	 */
+-	if (atomic_dec_and_test(nr_running) && !list_empty(&gcwq->worklist))
+-		to_wakeup = first_worker(gcwq);
+-	return to_wakeup ? to_wakeup->task : NULL;
++	if (atomic_dec_and_test(get_gcwq_nr_running(cpu)) &&
++	    !list_empty(&gcwq->worklist)) {
++		worker = first_worker(gcwq);
++		if (worker)
++			wake_up_process(worker->task);
++	}
++	spin_unlock_irq(&gcwq->lock);
+ }
+ 
+ /**
+diff --git a/kernel/workqueue_sched.h b/kernel/workqueue_sched.h
+index 2d10fc9..3bf73e2 100644
+--- a/kernel/workqueue_sched.h
++++ b/kernel/workqueue_sched.h
+@@ -4,6 +4,5 @@
+  * Scheduler hooks for concurrency managed workqueue.  Only to be
+  * included from sched.c and workqueue.c.
+  */
+-void wq_worker_waking_up(struct task_struct *task, unsigned int cpu);
+-struct task_struct *wq_worker_sleeping(struct task_struct *task,
+-				       unsigned int cpu);
++void wq_worker_running(struct task_struct *task);
++void wq_worker_sleeping(struct task_struct *task);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0008-mips-enable-interrupts-in-signal.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0008-mips-enable-interrupts-in-signal.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0008-mips-enable-interrupts-in-signal.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0008-mips-enable-interrupts-in-signal.patch.patch)
@@ -0,0 +1,27 @@
+From 60b31cf01f01e833d9d13f82b0410011c3137e52 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 18 Jul 2011 21:32:10 +0200
+Subject: [PATCH 008/271] mips-enable-interrupts-in-signal.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/mips/kernel/signal.c |    3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c
+index f852400..1c8f34c 100644
+--- a/arch/mips/kernel/signal.c
++++ b/arch/mips/kernel/signal.c
+@@ -604,6 +604,9 @@ static void do_signal(struct pt_regs *regs)
+ 	if (!user_mode(regs))
+ 		return;
+ 
++	local_irq_enable();
++	preempt_check_resched();
++
+ 	if (test_thread_flag(TIF_RESTORE_SIGMASK))
+ 		oldset = &current->saved_sigmask;
+ 	else
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0009-arm-enable-interrupts-in-signal-code.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0009-arm-enable-interrupts-in-signal-code.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0009-arm-enable-interrupts-in-signal-code.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0009-arm-enable-interrupts-in-signal-code.patch.patch)
@@ -0,0 +1,27 @@
+From 64fa2ef832c394f9b6ea62be1c307b0950f12917 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 16 Jul 2011 16:27:13 +0200
+Subject: [PATCH 009/271] arm-enable-interrupts-in-signal-code.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/arm/kernel/signal.c |    3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
+index 9e617bd..c7001bc 100644
+--- a/arch/arm/kernel/signal.c
++++ b/arch/arm/kernel/signal.c
+@@ -672,6 +672,9 @@ static void do_signal(struct pt_regs *regs, int syscall)
+ 	if (!user_mode(regs))
+ 		return;
+ 
++	local_irq_enable();
++	preempt_check_resched();
++
+ 	/*
+ 	 * If we were from a system call, check for system call restarting...
+ 	 */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0010-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0010-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0010-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0010-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch)
@@ -0,0 +1,28 @@
+From a601bdd4056e53aca545ae3e402ed894ca8efc84 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 16 Jul 2011 12:09:54 +0200
+Subject: [PATCH 010/271] powerpc: 85xx: Mark cascade irq IRQF_NO_THREAD
+
+Cascade interrupt must run in hard interrupt context.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/powerpc/platforms/85xx/mpc85xx_cds.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/powerpc/platforms/85xx/mpc85xx_cds.c b/arch/powerpc/platforms/85xx/mpc85xx_cds.c
+index 66cb8d6..7e2a4d2 100644
+--- a/arch/powerpc/platforms/85xx/mpc85xx_cds.c
++++ b/arch/powerpc/platforms/85xx/mpc85xx_cds.c
+@@ -177,7 +177,7 @@ static irqreturn_t mpc85xx_8259_cascade_action(int irq, void *dev_id)
+ 
+ static struct irqaction mpc85xxcds_8259_irqaction = {
+ 	.handler = mpc85xx_8259_cascade_action,
+-	.flags = IRQF_SHARED,
++	.flags = IRQF_SHARED | IRQF_NO_THREAD,
+ 	.name = "8259 cascade",
+ };
+ #endif /* PPC_I8259 */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0011-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0011-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0011-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0011-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch)
@@ -0,0 +1,30 @@
+From 9b13e1a92f24640cd12d93825ce6fbba59fd281e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 5 Oct 2011 14:11:24 +0200
+Subject: [PATCH 011/271] powerpc: wsp: Mark opb cascade handler
+ IRQF_NO_THREAD
+
+Cascade handlers must run in hard interrupt context.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/powerpc/platforms/wsp/opb_pic.c |    3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/arch/powerpc/platforms/wsp/opb_pic.c b/arch/powerpc/platforms/wsp/opb_pic.c
+index be05631..19f353d 100644
+--- a/arch/powerpc/platforms/wsp/opb_pic.c
++++ b/arch/powerpc/platforms/wsp/opb_pic.c
+@@ -320,7 +320,8 @@ void __init opb_pic_init(void)
+ 		}
+ 
+ 		/* Attach opb interrupt handler to new virtual IRQ */
+-		rc = request_irq(virq, opb_irq_handler, 0, "OPB LS Cascade", opb);
++		rc = request_irq(virq, opb_irq_handler, IRQF_NO_THREAD,
++				 "OPB LS Cascade", opb);
+ 		if (rc) {
+ 			printk("opb: request_irq failed: %d\n", rc);
+ 			continue;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0012-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0012-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0012-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0012-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch)
@@ -0,0 +1,73 @@
+From 97692017f0414cc04e79d3e2b83f0066f8d2abbe Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 5 Oct 2011 14:00:26 +0200
+Subject: [PATCH 012/271] powerpc: Mark IPI interrupts IRQF_NO_THREAD
+
+IPI handlers cannot be threaded. Remove the obsolete IRQF_DISABLED
+flag (see commit e58aa3d2) while at it.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/powerpc/kernel/smp.c              |    4 ++--
+ arch/powerpc/platforms/powermac/smp.c  |    4 ++--
+ arch/powerpc/sysdev/xics/xics-common.c |    5 +++--
+ 3 files changed, 7 insertions(+), 6 deletions(-)
+
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 6df7090..abdedd3 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -187,8 +187,8 @@ int smp_request_message_ipi(int virq, int msg)
+ 		return 1;
+ 	}
+ #endif
+-	err = request_irq(virq, smp_ipi_action[msg], IRQF_PERCPU,
+-			  smp_ipi_name[msg], 0);
++	err = request_irq(virq, smp_ipi_action[msg],
++			  IRQF_PERCPU | IRQF_NO_THREAD, smp_ipi_name[msg], 0);
+ 	WARN(err < 0, "unable to request_irq %d for %s (rc %d)\n",
+ 		virq, smp_ipi_name[msg], err);
+ 
+diff --git a/arch/powerpc/platforms/powermac/smp.c b/arch/powerpc/platforms/powermac/smp.c
+index 3394254..8d75ac8 100644
+--- a/arch/powerpc/platforms/powermac/smp.c
++++ b/arch/powerpc/platforms/powermac/smp.c
+@@ -200,7 +200,7 @@ static int psurge_secondary_ipi_init(void)
+ 
+ 	if (psurge_secondary_virq)
+ 		rc = request_irq(psurge_secondary_virq, psurge_ipi_intr,
+-			IRQF_PERCPU, "IPI", NULL);
++				 IRQF_NO_THREAD | IRQF_PERCPU, "IPI", NULL);
+ 
+ 	if (rc)
+ 		pr_err("Failed to setup secondary cpu IPI\n");
+@@ -408,7 +408,7 @@ static int __init smp_psurge_kick_cpu(int nr)
+ 
+ static struct irqaction psurge_irqaction = {
+ 	.handler = psurge_ipi_intr,
+-	.flags = IRQF_PERCPU,
++	.flags = IRQF_PERCPU | IRQF_NO_THREAD,
+ 	.name = "primary IPI",
+ };
+ 
+diff --git a/arch/powerpc/sysdev/xics/xics-common.c b/arch/powerpc/sysdev/xics/xics-common.c
+index 63762c6..4ba6194 100644
+--- a/arch/powerpc/sysdev/xics/xics-common.c
++++ b/arch/powerpc/sysdev/xics/xics-common.c
+@@ -134,10 +134,11 @@ static void xics_request_ipi(void)
+ 	BUG_ON(ipi == NO_IRQ);
+ 
+ 	/*
+-	 * IPIs are marked IRQF_PERCPU. The handler was set in map.
++	 * IPIs are marked PERCPU and also IRQF_NO_THREAD as they must
++	 * run in hard interrupt context. The handler was set in map.
+ 	 */
+ 	BUG_ON(request_irq(ipi, icp_ops->ipi_action,
+-			   IRQF_PERCPU, "IPI", NULL));
++			   IRQF_NO_THREAD|IRQF_PERCPU, "IPI", NULL));
+ }
+ 
+ int __init xics_smp_probe(void)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0013-powerpc-Allow-irq-threading.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0013-powerpc-Allow-irq-threading.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0013-powerpc-Allow-irq-threading.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0013-powerpc-Allow-irq-threading.patch)
@@ -0,0 +1,23 @@
+From 52877abae7b0b37f748c73b43445201e969a9d16 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 16 Jul 2011 13:16:24 +0200
+Subject: [PATCH 013/271] powerpc: Allow irq threading
+
+All interrupts which must be non threaded are marked
+IRQF_NO_THREAD. So it's safe to allow force threaded handlers.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/powerpc/Kconfig |    1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -132,6 +132,7 @@
+ 	select IRQ_PER_CPU
+ 	select GENERIC_IRQ_SHOW
+ 	select GENERIC_IRQ_SHOW_LEVEL
++	select IRQ_FORCED_THREADING
+ 	select HAVE_RCU_TABLE_FREE if SMP
+ 	select HAVE_SYSCALL_TRACEPOINTS
+ 	select HAVE_BPF_JIT if PPC64

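Patch 0013 above only flips the switch; what keeps the handlers marked IRQF_NO_THREAD in patches 0010-0012 out of threads is the forced-threading gate in the genirq core. A minimal userspace sketch of that gate follows — the flag values mirror include/linux/interrupt.h of this era, but `can_be_force_threaded` is an illustrative name (in 3.2-era kernels the real check sits in irq_setup_forced_threading()):

```c
#include <assert.h>

/* Illustrative userspace model, not kernel code. Flag values mirror
 * include/linux/interrupt.h from the 3.2 era. */
#define IRQF_SHARED    0x00000080UL
#define IRQF_PERCPU    0x00000400UL
#define IRQF_ONESHOT   0x00002000UL
#define IRQF_NO_THREAD 0x00010000UL

/* With CONFIG_IRQ_FORCED_THREADING selected (as patch 0013 does for
 * powerpc) and forced threading active, the core moves a handler into
 * a thread unless one of these flags forbids it. */
static int can_be_force_threaded(unsigned long flags)
{
	return !(flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT));
}
```

This is why the cascade and IPI actions above gain IRQF_NO_THREAD first: once the Kconfig select lands, anything without the flag is fair game for threading.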
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0014-sched-Keep-period-timer-ticking-when-throttling-acti.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0014-sched-Keep-period-timer-ticking-when-throttling-acti.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0014-sched-Keep-period-timer-ticking-when-throttling-acti.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0014-sched-Keep-period-timer-ticking-when-throttling-acti.patch)
@@ -0,0 +1,68 @@
+From 685c46e56e613212e1d7ded498903481e67fea27 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Tue, 18 Oct 2011 22:03:48 +0200
+Subject: [PATCH 014/271] sched: Keep period timer ticking when throttling
+ active
+
+When a runqueue is throttled we cannot disable the period timer
+because that timer is the only way to undo the throttling.
+
+We got stale throttling entries when a rq was throttled and then the
+global sysctl was disabled, which stopped the timer.
+
+[ tglx: Preliminary changelog ]
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched_rt.c |   13 ++++++++-----
+ 1 file changed, 8 insertions(+), 5 deletions(-)
+
+diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
+index 78fcacf..40d97e1 100644
+--- a/kernel/sched_rt.c
++++ b/kernel/sched_rt.c
+@@ -580,12 +580,9 @@ static inline int balance_runtime(struct rt_rq *rt_rq)
+ 
+ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
+ {
+-	int i, idle = 1;
++	int i, idle = 1, throttled = 0;
+ 	const struct cpumask *span;
+ 
+-	if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
+-		return 1;
+-
+ 	span = sched_rt_period_mask();
+ 	for_each_cpu(i, span) {
+ 		int enqueue = 0;
+@@ -620,12 +617,17 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
+ 			if (!rt_rq_throttled(rt_rq))
+ 				enqueue = 1;
+ 		}
++		if (rt_rq->rt_throttled)
++			throttled = 1;
+ 
+ 		if (enqueue)
+ 			sched_rt_rq_enqueue(rt_rq);
+ 		raw_spin_unlock(&rq->lock);
+ 	}
+ 
++	if (!throttled && (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF))
++		return 1;
++
+ 	return idle;
+ }
+ 
+@@ -686,7 +688,8 @@ static void update_curr_rt(struct rq *rq)
+ 	if (unlikely((s64)delta_exec < 0))
+ 		delta_exec = 0;
+ 
+-	schedstat_set(curr->se.statistics.exec_max, max(curr->se.statistics.exec_max, delta_exec));
++	schedstat_set(curr->se.statistics.exec_max,
++		      max(curr->se.statistics.exec_max, delta_exec));
+ 
+ 	curr->se.sum_exec_runtime += delta_exec;
+ 	account_group_exec_runtime(curr, delta_exec);
+-- 
+1.7.10
+

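The hunk above moves the bandwidth-disabled bail-out below the per-runqueue loop, so the timer keeps ticking while any runqueue is still throttled. A hedged model of the resulting decision — `period_timer_result` and its flat signature are illustrative only; the real do_sched_rt_period_timer() computes `idle` and `throttled` per runqueue inside its loop:

```c
#include <assert.h>

/* Hedged model of the reordered return logic in
 * do_sched_rt_period_timer(). Returning 1 lets the period timer stop;
 * returning 0 keeps it ticking. */
static int period_timer_result(int any_rq_throttled, int bandwidth_enabled,
			       int idle)
{
	/* Only when no runqueue is still throttled may a disabled
	 * bandwidth control stop the timer early; the timer is the
	 * only way to undo throttling. */
	if (!any_rq_throttled && !bandwidth_enabled)
		return 1;
	return idle;
}
```

Before the patch, the bandwidth check ran first and returned 1 unconditionally, which is exactly the stale-throttle case described in the changelog.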
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0015-sched-Do-not-throttle-due-to-PI-boosting.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0015-sched-Do-not-throttle-due-to-PI-boosting.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0015-sched-Do-not-throttle-due-to-PI-boosting.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0015-sched-Do-not-throttle-due-to-PI-boosting.patch)
@@ -0,0 +1,52 @@
+From 660afa7a661711f93bd763d76c19950ad2fca2c7 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Tue, 18 Oct 2011 22:03:48 +0200
+Subject: [PATCH 015/271] sched: Do not throttle due to PI boosting
+
+When a runqueue has rt_runtime_us = 0 then the only way it can
+accumulate rt_time is via PI boosting. Though that causes the runqueue
+to be throttled and replenishing does not change anything due to
+rt_runtime_us = 0. So avoid that situation by clearing rt_time and
+skip the throttling altogether.
+
+[ tglx: Preliminary changelog ]
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched_rt.c |   20 ++++++++++++++++++--
+ 1 file changed, 18 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
+index 40d97e1..c108b9c 100644
+--- a/kernel/sched_rt.c
++++ b/kernel/sched_rt.c
+@@ -659,8 +659,24 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
+ 		return 0;
+ 
+ 	if (rt_rq->rt_time > runtime) {
+-		rt_rq->rt_throttled = 1;
+-		printk_once(KERN_WARNING "sched: RT throttling activated\n");
++		struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
++
++		/*
++		 * Don't actually throttle groups that have no runtime assigned
++		 * but accrue some time due to boosting.
++		 */
++		if (likely(rt_b->rt_runtime)) {
++			rt_rq->rt_throttled = 1;
++			printk_once(KERN_WARNING "sched: RT throttling activated\n");
++		} else {
++			/*
++			 * In case we did anyway, make it go away,
++			 * replenishment is a joke, since it will replenish us
++			 * with exactly 0 ns.
++			 */
++			rt_rq->rt_time = 0;
++		}
++
+ 		if (rt_rq_throttled(rt_rq)) {
+ 			sched_rt_rq_dequeue(rt_rq);
+ 			return 1;
+-- 
+1.7.10
+

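The new branch above distinguishes two over-budget cases. A hedged sketch of that decision — the struct and function names are illustrative, not kernel symbols, and the real sched_rt_runtime_exceeded() does more bookkeeping around this branch:

```c
#include <assert.h>

/* Hedged model of the patched branch in sched_rt_runtime_exceeded():
 * a group with a zero runtime budget can only have accrued rt_time
 * through PI boosting, so that time is discarded instead of
 * throttling (replenishment would add exactly 0 ns anyway). */
struct rt_rq_model {
	unsigned long long rt_time;	/* accrued runtime, ns */
	int rt_throttled;
};

static void model_over_budget(struct rt_rq_model *rq,
			      unsigned long long group_runtime)
{
	if (group_runtime)
		rq->rt_throttled = 1;	/* normal throttle path */
	else
		rq->rt_time = 0;	/* boosted-only time: drop it */
}
```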
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0016-time-Remove-bogus-comments.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0016-time-Remove-bogus-comments.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0016-time-Remove-bogus-comments.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0016-time-Remove-bogus-comments.patch)
@@ -0,0 +1,42 @@
+From 438377b97518d805a37f8ea78ea79fa46a6cc2fb Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 28 Feb 2012 19:06:50 +0100
+Subject: [PATCH 016/271] time: Remove bogus comments
+
+There is no global irq lock which makes a syscall magically SMP
+safe. Remove the outdated comment concerning do_settimeofday() as
+well.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/time.c |    6 ------
+ 1 file changed, 6 deletions(-)
+
+diff --git a/kernel/time.c b/kernel/time.c
+index 73e416d..ba744cf 100644
+--- a/kernel/time.c
++++ b/kernel/time.c
+@@ -163,7 +163,6 @@ int do_sys_settimeofday(const struct timespec *tv, const struct timezone *tz)
+ 		return error;
+ 
+ 	if (tz) {
+-		/* SMP safe, global irq locking makes it work. */
+ 		sys_tz = *tz;
+ 		update_vsyscall_tz();
+ 		if (firsttime) {
+@@ -173,12 +172,7 @@ int do_sys_settimeofday(const struct timespec *tv, const struct timezone *tz)
+ 		}
+ 	}
+ 	if (tv)
+-	{
+-		/* SMP safe, again the code in arch/foo/time.c should
+-		 * globally block out interrupts when it runs.
+-		 */
+ 		return do_settimeofday(tv);
+-	}
+ 	return 0;
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0017-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0017-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0017-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0017-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch)
@@ -0,0 +1,37 @@
+From 84abd0341c534a1e3d63580128b588da87b035ab Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 28 Feb 2012 19:10:46 +0100
+Subject: [PATCH 017/271] x86: vdso: Remove bogus locking in
+ update_vsyscall_tz()
+
+Changing the sequence count in update_vsyscall_tz() is completely
+pointless.
+
+The vdso code copies the data unprotected. There is no point to change
+this as sys_tz is nowhere protected at all. See sys_gettimeofday().
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kernel/vsyscall_64.c |    5 -----
+ 1 file changed, 5 deletions(-)
+
+diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
+index e4d4a22..f04adbd 100644
+--- a/arch/x86/kernel/vsyscall_64.c
++++ b/arch/x86/kernel/vsyscall_64.c
+@@ -80,12 +80,7 @@ early_param("vsyscall", vsyscall_setup);
+ 
+ void update_vsyscall_tz(void)
+ {
+-	unsigned long flags;
+-
+-	write_seqlock_irqsave(&vsyscall_gtod_data.lock, flags);
+-	/* sys_tz has changed */
+ 	vsyscall_gtod_data.sys_tz = sys_tz;
+-	write_sequnlock_irqrestore(&vsyscall_gtod_data.lock, flags);
+ }
+ 
+ void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0018-x86-vdso-Use-seqcount-instead-of-seqlock.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0018-x86-vdso-Use-seqcount-instead-of-seqlock.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0018-x86-vdso-Use-seqcount-instead-of-seqlock.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0018-x86-vdso-Use-seqcount-instead-of-seqlock.patch)
@@ -0,0 +1,128 @@
+From 5a913c66115a6890982a59ee0c90da82acb1e8cd Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 28 Feb 2012 18:24:07 +0100
+Subject: [PATCH 018/271] x86: vdso: Use seqcount instead of seqlock
+
+The update of the vdso data happens under xtime_lock, so adding a
+nested lock is pointless. Just use a seqcount to sync the readers.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/include/asm/vgtod.h   |    2 +-
+ arch/x86/kernel/vsyscall_64.c  |   11 +++--------
+ arch/x86/vdso/vclock_gettime.c |   16 ++++++++--------
+ 3 files changed, 12 insertions(+), 17 deletions(-)
+
+diff --git a/arch/x86/include/asm/vgtod.h b/arch/x86/include/asm/vgtod.h
+index 815285b..1f00717 100644
+--- a/arch/x86/include/asm/vgtod.h
++++ b/arch/x86/include/asm/vgtod.h
+@@ -5,7 +5,7 @@
+ #include <linux/clocksource.h>
+ 
+ struct vsyscall_gtod_data {
+-	seqlock_t	lock;
++	seqcount_t	seq;
+ 
+ 	/* open coded 'struct timespec' */
+ 	time_t		wall_time_sec;
+diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
+index f04adbd..50392ee 100644
+--- a/arch/x86/kernel/vsyscall_64.c
++++ b/arch/x86/kernel/vsyscall_64.c
+@@ -52,10 +52,7 @@
+ #include "vsyscall_trace.h"
+ 
+ DEFINE_VVAR(int, vgetcpu_mode);
+-DEFINE_VVAR(struct vsyscall_gtod_data, vsyscall_gtod_data) =
+-{
+-	.lock = __SEQLOCK_UNLOCKED(__vsyscall_gtod_data.lock),
+-};
++DEFINE_VVAR(struct vsyscall_gtod_data, vsyscall_gtod_data);
+ 
+ static enum { EMULATE, NATIVE, NONE } vsyscall_mode = NATIVE;
+ 
+@@ -86,9 +83,7 @@ void update_vsyscall_tz(void)
+ void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
+ 			struct clocksource *clock, u32 mult)
+ {
+-	unsigned long flags;
+-
+-	write_seqlock_irqsave(&vsyscall_gtod_data.lock, flags);
++	write_seqcount_begin(&vsyscall_gtod_data.seq);
+ 
+ 	/* copy vsyscall data */
+ 	vsyscall_gtod_data.clock.vclock_mode	= clock->archdata.vclock_mode;
+@@ -101,7 +96,7 @@ void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
+ 	vsyscall_gtod_data.wall_to_monotonic	= *wtm;
+ 	vsyscall_gtod_data.wall_time_coarse	= __current_kernel_time();
+ 
+-	write_sequnlock_irqrestore(&vsyscall_gtod_data.lock, flags);
++	write_seqcount_end(&vsyscall_gtod_data.seq);
+ }
+ 
+ static void warn_bad_vsyscall(const char *level, struct pt_regs *regs,
+diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
+index 6bc0e72..d8511fb 100644
+--- a/arch/x86/vdso/vclock_gettime.c
++++ b/arch/x86/vdso/vclock_gettime.c
+@@ -86,11 +86,11 @@ notrace static noinline int do_realtime(struct timespec *ts)
+ {
+ 	unsigned long seq, ns;
+ 	do {
+-		seq = read_seqbegin(&gtod->lock);
++		seq = read_seqcount_begin(&gtod->seq);
+ 		ts->tv_sec = gtod->wall_time_sec;
+ 		ts->tv_nsec = gtod->wall_time_nsec;
+ 		ns = vgetns();
+-	} while (unlikely(read_seqretry(&gtod->lock, seq)));
++	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
+ 	timespec_add_ns(ts, ns);
+ 	return 0;
+ }
+@@ -99,12 +99,12 @@ notrace static noinline int do_monotonic(struct timespec *ts)
+ {
+ 	unsigned long seq, ns, secs;
+ 	do {
+-		seq = read_seqbegin(&gtod->lock);
++		seq = read_seqcount_begin(&gtod->seq);
+ 		secs = gtod->wall_time_sec;
+ 		ns = gtod->wall_time_nsec + vgetns();
+ 		secs += gtod->wall_to_monotonic.tv_sec;
+ 		ns += gtod->wall_to_monotonic.tv_nsec;
+-	} while (unlikely(read_seqretry(&gtod->lock, seq)));
++	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
+ 
+ 	/* wall_time_nsec, vgetns(), and wall_to_monotonic.tv_nsec
+ 	 * are all guaranteed to be nonnegative.
+@@ -123,10 +123,10 @@ notrace static noinline int do_realtime_coarse(struct timespec *ts)
+ {
+ 	unsigned long seq;
+ 	do {
+-		seq = read_seqbegin(&gtod->lock);
++		seq = read_seqcount_begin(&gtod->seq);
+ 		ts->tv_sec = gtod->wall_time_coarse.tv_sec;
+ 		ts->tv_nsec = gtod->wall_time_coarse.tv_nsec;
+-	} while (unlikely(read_seqretry(&gtod->lock, seq)));
++	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
+ 	return 0;
+ }
+ 
+@@ -134,12 +134,12 @@ notrace static noinline int do_monotonic_coarse(struct timespec *ts)
+ {
+ 	unsigned long seq, ns, secs;
+ 	do {
+-		seq = read_seqbegin(&gtod->lock);
++		seq = read_seqcount_begin(&gtod->seq);
+ 		secs = gtod->wall_time_coarse.tv_sec;
+ 		ns = gtod->wall_time_coarse.tv_nsec;
+ 		secs += gtod->wall_to_monotonic.tv_sec;
+ 		ns += gtod->wall_to_monotonic.tv_nsec;
+-	} while (unlikely(read_seqretry(&gtod->lock, seq)));
++	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
+ 
+ 	/* wall_time_nsec and wall_to_monotonic.tv_nsec are
+ 	 * guaranteed to be between 0 and NSEC_PER_SEC.
+-- 
+1.7.10
+

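The read-side pattern these vdso functions switch to can be sketched in plain C. This is a single-threaded illustration only: the real primitives include the smp_rmb()/smp_wmb() barriers that make the pattern safe on SMP, which are omitted here, and the `model_` names are made up:

```c
#include <assert.h>

/* Single-threaded model of the seqcount read/retry pattern. */
typedef struct { unsigned sequence; } seqcount_model_t;

static unsigned model_read_begin(const seqcount_model_t *s)
{
	unsigned ret;

	while ((ret = s->sequence) & 1)
		;		/* odd count: writer in progress, spin */
	return ret;
}

static int model_read_retry(const seqcount_model_t *s, unsigned start)
{
	return s->sequence != start;	/* count moved: data may be torn */
}

static void model_write_begin(seqcount_model_t *s) { s->sequence++; }
static void model_write_end(seqcount_model_t *s)   { s->sequence++; }
```

The do/while loops in vclock_gettime.c above are exactly this: begin, copy the fields, retry if the count moved.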
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0019-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0019-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0019-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0019-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch)
@@ -0,0 +1,95 @@
+From 1057b762d0b9296e9df793ce4801082b7aa52c42 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 28 Feb 2012 18:33:08 +0100
+Subject: [PATCH 019/271] ia64: vsyscall: Use seqcount instead of seqlock
+
+The update of the vdso data happens under xtime_lock, so adding a
+nested lock is pointless. Just use a seqcount to sync the readers.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: Tony Luck <tony.luck at intel.com>
+---
+ arch/ia64/kernel/asm-offsets.c        |    4 ++--
+ arch/ia64/kernel/fsys.S               |    2 +-
+ arch/ia64/kernel/fsyscall_gtod_data.h |    2 +-
+ arch/ia64/kernel/time.c               |   10 +++-------
+ 4 files changed, 7 insertions(+), 11 deletions(-)
+
+diff --git a/arch/ia64/kernel/asm-offsets.c b/arch/ia64/kernel/asm-offsets.c
+index af56501..106aeb6 100644
+--- a/arch/ia64/kernel/asm-offsets.c
++++ b/arch/ia64/kernel/asm-offsets.c
+@@ -269,8 +269,8 @@ void foo(void)
+ 	BLANK();
+ 
+ 	/* used by fsys_gettimeofday in arch/ia64/kernel/fsys.S */
+-	DEFINE(IA64_GTOD_LOCK_OFFSET,
+-		offsetof (struct fsyscall_gtod_data_t, lock));
++	DEFINE(IA64_GTOD_SEQ_OFFSET,
+		offsetof (struct fsyscall_gtod_data_t, seq));
+ 	DEFINE(IA64_GTOD_WALL_TIME_OFFSET,
+ 		offsetof (struct fsyscall_gtod_data_t, wall_time));
+ 	DEFINE(IA64_GTOD_MONO_TIME_OFFSET,
+diff --git a/arch/ia64/kernel/fsys.S b/arch/ia64/kernel/fsys.S
+index 331d42b..fa77de7 100644
+--- a/arch/ia64/kernel/fsys.S
++++ b/arch/ia64/kernel/fsys.S
+@@ -174,7 +174,7 @@ ENTRY(fsys_set_tid_address)
+ 	FSYS_RETURN
+ END(fsys_set_tid_address)
+ 
+-#if IA64_GTOD_LOCK_OFFSET !=0
++#if IA64_GTOD_SEQ_OFFSET !=0
+ #error fsys_gettimeofday incompatible with changes to struct fsyscall_gtod_data_t
+ #endif
+ #if IA64_ITC_JITTER_OFFSET !=0
+diff --git a/arch/ia64/kernel/fsyscall_gtod_data.h b/arch/ia64/kernel/fsyscall_gtod_data.h
+index 57d2ee6..146b15b 100644
+--- a/arch/ia64/kernel/fsyscall_gtod_data.h
++++ b/arch/ia64/kernel/fsyscall_gtod_data.h
+@@ -6,7 +6,7 @@
+  */
+ 
+ struct fsyscall_gtod_data_t {
+-	seqlock_t	lock;
++	seqcount_t	seq;
+ 	struct timespec	wall_time;
+ 	struct timespec monotonic_time;
+ 	cycle_t		clk_mask;
+diff --git a/arch/ia64/kernel/time.c b/arch/ia64/kernel/time.c
+index 43920de..8e991a0 100644
+--- a/arch/ia64/kernel/time.c
++++ b/arch/ia64/kernel/time.c
+@@ -35,9 +35,7 @@
+ 
+ static cycle_t itc_get_cycles(struct clocksource *cs);
+ 
+-struct fsyscall_gtod_data_t fsyscall_gtod_data = {
+-	.lock = __SEQLOCK_UNLOCKED(fsyscall_gtod_data.lock),
+-};
++struct fsyscall_gtod_data_t fsyscall_gtod_data;
+ 
+ struct itc_jitter_data_t itc_jitter_data;
+ 
+@@ -460,9 +458,7 @@ void update_vsyscall_tz(void)
+ void update_vsyscall(struct timespec *wall, struct timespec *wtm,
+ 			struct clocksource *c, u32 mult)
+ {
+-        unsigned long flags;
+-
+-        write_seqlock_irqsave(&fsyscall_gtod_data.lock, flags);
++	write_seqcount_begin(&fsyscall_gtod_data.seq);
+ 
+         /* copy fsyscall clock data */
+         fsyscall_gtod_data.clk_mask = c->mask;
+@@ -485,6 +481,6 @@ void update_vsyscall(struct timespec *wall, struct timespec *wtm,
+ 		fsyscall_gtod_data.monotonic_time.tv_sec++;
+ 	}
+ 
+-        write_sequnlock_irqrestore(&fsyscall_gtod_data.lock, flags);
++	write_seqcount_end(&fsyscall_gtod_data.seq);
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0020-seqlock-Remove-unused-functions.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0020-seqlock-Remove-unused-functions.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0020-seqlock-Remove-unused-functions.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0020-seqlock-Remove-unused-functions.patch)
@@ -0,0 +1,50 @@
+From ec65872a2ba08b65113cde1a9c65f3243ca6a37f Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 16 Jul 2011 18:38:22 +0200
+Subject: [PATCH 020/271] seqlock: Remove unused functions
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/seqlock.h |   21 ---------------------
+ 1 file changed, 21 deletions(-)
+
+diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
+index bb1fac5..f12fc43 100644
+--- a/include/linux/seqlock.h
++++ b/include/linux/seqlock.h
+@@ -69,17 +69,6 @@ static inline void write_sequnlock(seqlock_t *sl)
+ 	spin_unlock(&sl->lock);
+ }
+ 
+-static inline int write_tryseqlock(seqlock_t *sl)
+-{
+-	int ret = spin_trylock(&sl->lock);
+-
+-	if (ret) {
+-		++sl->sequence;
+-		smp_wmb();
+-	}
+-	return ret;
+-}
+-
+ /* Start of read calculation -- fetch last complete writer token */
+ static __always_inline unsigned read_seqbegin(const seqlock_t *sl)
+ {
+@@ -248,14 +237,4 @@ static inline void write_seqcount_barrier(seqcount_t *s)
+ #define write_sequnlock_bh(lock)					\
+ 	do { write_sequnlock(lock); local_bh_enable(); } while(0)
+ 
+-#define read_seqbegin_irqsave(lock, flags)				\
+-	({ local_irq_save(flags);   read_seqbegin(lock); })
+-
+-#define read_seqretry_irqrestore(lock, iv, flags)			\
+-	({								\
+-		int ret = read_seqretry(lock, iv);			\
+-		local_irq_restore(flags);				\
+-		ret;							\
+-	})
+-
+ #endif /* __LINUX_SEQLOCK_H */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0021-seqlock-Use-seqcount.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0021-seqlock-Use-seqcount.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0021-seqlock-Use-seqcount.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0021-seqlock-Use-seqcount.patch)
@@ -0,0 +1,224 @@
+From f6397ccda02ee89e9aeb1abb6f0324a7021c127b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 16 Jul 2011 18:40:26 +0200
+Subject: [PATCH 021/271] seqlock: Use seqcount
+
+No point in having different implementations for the same thing.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/seqlock.h |  176 +++++++++++++++++++++++++----------------------
+ 1 file changed, 93 insertions(+), 83 deletions(-)
+
+diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
+index f12fc43..cc7b65d 100644
+--- a/include/linux/seqlock.h
++++ b/include/linux/seqlock.h
+@@ -30,81 +30,12 @@
+ #include <linux/preempt.h>
+ #include <asm/processor.h>
+ 
+-typedef struct {
+-	unsigned sequence;
+-	spinlock_t lock;
+-} seqlock_t;
+-
+-/*
+- * These macros triggered gcc-3.x compile-time problems.  We think these are
+- * OK now.  Be cautious.
+- */
+-#define __SEQLOCK_UNLOCKED(lockname) \
+-		 { 0, __SPIN_LOCK_UNLOCKED(lockname) }
+-
+-#define seqlock_init(x)					\
+-	do {						\
+-		(x)->sequence = 0;			\
+-		spin_lock_init(&(x)->lock);		\
+-	} while (0)
+-
+-#define DEFINE_SEQLOCK(x) \
+-		seqlock_t x = __SEQLOCK_UNLOCKED(x)
+-
+-/* Lock out other writers and update the count.
+- * Acts like a normal spin_lock/unlock.
+- * Don't need preempt_disable() because that is in the spin_lock already.
+- */
+-static inline void write_seqlock(seqlock_t *sl)
+-{
+-	spin_lock(&sl->lock);
+-	++sl->sequence;
+-	smp_wmb();
+-}
+-
+-static inline void write_sequnlock(seqlock_t *sl)
+-{
+-	smp_wmb();
+-	sl->sequence++;
+-	spin_unlock(&sl->lock);
+-}
+-
+-/* Start of read calculation -- fetch last complete writer token */
+-static __always_inline unsigned read_seqbegin(const seqlock_t *sl)
+-{
+-	unsigned ret;
+-
+-repeat:
+-	ret = ACCESS_ONCE(sl->sequence);
+-	if (unlikely(ret & 1)) {
+-		cpu_relax();
+-		goto repeat;
+-	}
+-	smp_rmb();
+-
+-	return ret;
+-}
+-
+-/*
+- * Test if reader processed invalid data.
+- *
+- * If sequence value changed then writer changed data while in section.
+- */
+-static __always_inline int read_seqretry(const seqlock_t *sl, unsigned start)
+-{
+-	smp_rmb();
+-
+-	return unlikely(sl->sequence != start);
+-}
+-
+-
+ /*
+  * Version using sequence counter only.
+  * This can be used when code has its own mutex protecting the
+  * updating starting before the write_seqcountbeqin() and ending
+  * after the write_seqcount_end().
+  */
+-
+ typedef struct seqcount {
+ 	unsigned sequence;
+ } seqcount_t;
+@@ -186,7 +117,6 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
+ static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+ {
+ 	smp_rmb();
+-
+ 	return __read_seqcount_retry(s, start);
+ }
+ 
+@@ -220,21 +150,101 @@ static inline void write_seqcount_barrier(seqcount_t *s)
+ 	s->sequence+=2;
+ }
+ 
++typedef struct {
++	struct seqcount seqcount;
++	spinlock_t lock;
++} seqlock_t;
++
+ /*
+- * Possible sw/hw IRQ protected versions of the interfaces.
++ * These macros triggered gcc-3.x compile-time problems.  We think these are
++ * OK now.  Be cautious.
+  */
++#define __SEQLOCK_UNLOCKED(lockname)			\
++	{						\
++		.seqcount = SEQCNT_ZERO,		\
++		.lock =	__SPIN_LOCK_UNLOCKED(lockname)	\
++	}
++
++#define seqlock_init(x)					\
++	do {						\
++		seqcount_init(&(x)->seqcount);		\
++		spin_lock_init(&(x)->lock);		\
++	} while (0)
++
++#define DEFINE_SEQLOCK(x) \
++		seqlock_t x = __SEQLOCK_UNLOCKED(x)
++
++/*
++ * Read side functions for starting and finalizing a read side section.
++ */
++static inline unsigned read_seqbegin(const seqlock_t *sl)
++{
++	return read_seqcount_begin(&sl->seqcount);
++}
++
++static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
++{
++	return read_seqcount_retry(&sl->seqcount, start);
++}
++
++/*
++ * Lock out other writers and update the count.
++ * Acts like a normal spin_lock/unlock.
++ * Don't need preempt_disable() because that is in the spin_lock already.
++ */
++static inline void write_seqlock(seqlock_t *sl)
++{
++	spin_lock(&sl->lock);
++	write_seqcount_begin(&sl->seqcount);
++}
++
++static inline void write_sequnlock(seqlock_t *sl)
++{
++	write_seqcount_end(&sl->seqcount);
++	spin_unlock(&sl->lock);
++}
++
++static inline void write_seqlock_bh(seqlock_t *sl)
++{
++	spin_lock_bh(&sl->lock);
++	write_seqcount_begin(&sl->seqcount);
++}
++
++static inline void write_sequnlock_bh(seqlock_t *sl)
++{
++	write_seqcount_end(&sl->seqcount);
++	spin_unlock_bh(&sl->lock);
++}
++
++static inline void write_seqlock_irq(seqlock_t *sl)
++{
++	spin_lock_irq(&sl->lock);
++	write_seqcount_begin(&sl->seqcount);
++}
++
++static inline void write_sequnlock_irq(seqlock_t *sl)
++{
++	write_seqcount_end(&sl->seqcount);
++	spin_unlock_irq(&sl->lock);
++}
++
++static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&sl->lock, flags);
++	write_seqcount_begin(&sl->seqcount);
++	return flags;
++}
++
+ #define write_seqlock_irqsave(lock, flags)				\
+-	do { local_irq_save(flags); write_seqlock(lock); } while (0)
+-#define write_seqlock_irq(lock)						\
+-	do { local_irq_disable();   write_seqlock(lock); } while (0)
+-#define write_seqlock_bh(lock)						\
+-        do { local_bh_disable();    write_seqlock(lock); } while (0)
+-
+-#define write_sequnlock_irqrestore(lock, flags)				\
+-	do { write_sequnlock(lock); local_irq_restore(flags); } while(0)
+-#define write_sequnlock_irq(lock)					\
+-	do { write_sequnlock(lock); local_irq_enable(); } while(0)
+-#define write_sequnlock_bh(lock)					\
+-	do { write_sequnlock(lock); local_bh_enable(); } while(0)
++	do { flags = __write_seqlock_irqsave(lock); } while (0)
++
++static inline void
++write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
++{
++	write_seqcount_end(&sl->seqcount);
++	spin_unlock_irqrestore(&sl->lock, flags);
++}
+ 
+ #endif /* __LINUX_SEQLOCK_H */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0022-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0022-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0022-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0022-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch)
@@ -0,0 +1,91 @@
+From 44ba97108b169858d0a5d8e2090a5a600786e5e1 Mon Sep 17 00:00:00 2001
+From: Al Viro <viro at ZenIV.linux.org.uk>
+Date: Thu, 15 Mar 2012 18:39:40 +0000
+Subject: [PATCH 022/271] vfs: fs_struct: Move code out of seqcount write
+ sections
+
+RT cannot disable preemption in the seqcount write sections because
+functions called inside them take "sleeping" spinlocks.
+
+Move the code out of those sections. It does not need to be there.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ fs/fs_struct.c |   27 +++++++++++++++------------
+ 1 file changed, 15 insertions(+), 12 deletions(-)
+
+diff --git a/fs/fs_struct.c b/fs/fs_struct.c
+index 78b519c..f5818c4 100644
+--- a/fs/fs_struct.c
++++ b/fs/fs_struct.c
+@@ -26,11 +26,11 @@ void set_fs_root(struct fs_struct *fs, struct path *path)
+ {
+ 	struct path old_root;
+ 
++	path_get_longterm(path);
+ 	spin_lock(&fs->lock);
+ 	write_seqcount_begin(&fs->seq);
+ 	old_root = fs->root;
+ 	fs->root = *path;
+-	path_get_longterm(path);
+ 	write_seqcount_end(&fs->seq);
+ 	spin_unlock(&fs->lock);
+ 	if (old_root.dentry)
+@@ -45,11 +45,11 @@ void set_fs_pwd(struct fs_struct *fs, struct path *path)
+ {
+ 	struct path old_pwd;
+ 
++	path_get_longterm(path);
+ 	spin_lock(&fs->lock);
+ 	write_seqcount_begin(&fs->seq);
+ 	old_pwd = fs->pwd;
+ 	fs->pwd = *path;
+-	path_get_longterm(path);
+ 	write_seqcount_end(&fs->seq);
+ 	spin_unlock(&fs->lock);
+ 
+@@ -57,6 +57,14 @@ void set_fs_pwd(struct fs_struct *fs, struct path *path)
+ 		path_put_longterm(&old_pwd);
+ }
+ 
++static inline int replace_path(struct path *p, const struct path *old, const struct path *new)
++{
++	if (likely(p->dentry != old->dentry || p->mnt != old->mnt))
++		return 0;
++	*p = *new;
++	return 1;
++}
++
+ void chroot_fs_refs(struct path *old_root, struct path *new_root)
+ {
+ 	struct task_struct *g, *p;
+@@ -68,21 +76,16 @@ void chroot_fs_refs(struct path *old_root, struct path *new_root)
+ 		task_lock(p);
+ 		fs = p->fs;
+ 		if (fs) {
++			int hits = 0;
+ 			spin_lock(&fs->lock);
+ 			write_seqcount_begin(&fs->seq);
+-			if (fs->root.dentry == old_root->dentry
+-			    && fs->root.mnt == old_root->mnt) {
+-				path_get_longterm(new_root);
+-				fs->root = *new_root;
++			hits += replace_path(&fs->root, old_root, new_root);
++			hits += replace_path(&fs->pwd, old_root, new_root);
++			write_seqcount_end(&fs->seq);
++			while (hits--) {
+ 				count++;
+-			}
+-			if (fs->pwd.dentry == old_root->dentry
+-			    && fs->pwd.mnt == old_root->mnt) {
+ 				path_get_longterm(new_root);
+-				fs->pwd = *new_root;
+-				count++;
+ 			}
+-			write_seqcount_end(&fs->seq);
+ 			spin_unlock(&fs->lock);
+ 		}
+ 		task_unlock(p);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0023-timekeeping-Split-xtime_lock.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0023-timekeeping-Split-xtime_lock.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0023-timekeeping-Split-xtime_lock.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0023-timekeeping-Split-xtime_lock.patch)
@@ -0,0 +1,537 @@
+From 115138258d9efb007f1092d313c3717ec63363d1 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 1 Mar 2012 15:14:06 +0100
+Subject: [PATCH 023/271] timekeeping: Split xtime_lock
+
+xtime_lock is going to be split apart in mainline, so we can shorten
+the seqcount protected regions and avoid updating seqcount in some
+code paths. This is a straightforward split, so we can avoid the
+whole mess with raw seqlocks for RT.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/time/jiffies.c       |    4 +-
+ kernel/time/ntp.c           |   24 ++++++++----
+ kernel/time/tick-common.c   |   10 +++--
+ kernel/time/tick-internal.h |    3 +-
+ kernel/time/tick-sched.c    |   16 +++++---
+ kernel/time/timekeeping.c   |   90 +++++++++++++++++++++++++------------------
+ 6 files changed, 88 insertions(+), 59 deletions(-)
+
+diff --git a/kernel/time/jiffies.c b/kernel/time/jiffies.c
+index a470154..21940eb 100644
+--- a/kernel/time/jiffies.c
++++ b/kernel/time/jiffies.c
+@@ -74,9 +74,9 @@ u64 get_jiffies_64(void)
+ 	u64 ret;
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 		ret = jiffies_64;
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 	return ret;
+ }
+ EXPORT_SYMBOL(get_jiffies_64);
+diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
+index 4b85a7a..419cbaa 100644
+--- a/kernel/time/ntp.c
++++ b/kernel/time/ntp.c
+@@ -358,7 +358,8 @@ static enum hrtimer_restart ntp_leap_second(struct hrtimer *timer)
+ {
+ 	enum hrtimer_restart res = HRTIMER_NORESTART;
+ 
+-	write_seqlock(&xtime_lock);
++	raw_spin_lock(&xtime_lock);
++	write_seqcount_begin(&xtime_seq);
+ 
+ 	switch (time_state) {
+ 	case TIME_OK:
+@@ -388,7 +389,8 @@ static enum hrtimer_restart ntp_leap_second(struct hrtimer *timer)
+ 		break;
+ 	}
+ 
+-	write_sequnlock(&xtime_lock);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock(&xtime_lock);
+ 
+ 	return res;
+ }
+@@ -663,7 +665,8 @@ int do_adjtimex(struct timex *txc)
+ 
+ 	getnstimeofday(&ts);
+ 
+-	write_seqlock_irq(&xtime_lock);
++	raw_spin_lock_irq(&xtime_lock);
++	write_seqcount_begin(&xtime_seq);
+ 
+ 	if (txc->modes & ADJ_ADJTIME) {
+ 		long save_adjust = time_adjust;
+@@ -705,7 +708,8 @@ int do_adjtimex(struct timex *txc)
+ 	/* fill PPS status fields */
+ 	pps_fill_timex(txc);
+ 
+-	write_sequnlock_irq(&xtime_lock);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock_irq(&xtime_lock);
+ 
+ 	txc->time.tv_sec = ts.tv_sec;
+ 	txc->time.tv_usec = ts.tv_nsec;
+@@ -903,7 +907,8 @@ void hardpps(const struct timespec *phase_ts, const struct timespec *raw_ts)
+ 
+ 	pts_norm = pps_normalize_ts(*phase_ts);
+ 
+-	write_seqlock_irqsave(&xtime_lock, flags);
++	raw_spin_lock_irqsave(&xtime_lock, flags);
++	write_seqcount_begin(&xtime_seq);
+ 
+ 	/* clear the error bits, they will be set again if needed */
+ 	time_status &= ~(STA_PPSJITTER | STA_PPSWANDER | STA_PPSERROR);
+@@ -916,7 +921,8 @@ void hardpps(const struct timespec *phase_ts, const struct timespec *raw_ts)
+ 	 * just start the frequency interval */
+ 	if (unlikely(pps_fbase.tv_sec == 0)) {
+ 		pps_fbase = *raw_ts;
+-		write_sequnlock_irqrestore(&xtime_lock, flags);
++		write_seqcount_end(&xtime_seq);
++		raw_spin_unlock_irqrestore(&xtime_lock, flags);
+ 		return;
+ 	}
+ 
+@@ -931,7 +937,8 @@ void hardpps(const struct timespec *phase_ts, const struct timespec *raw_ts)
+ 		time_status |= STA_PPSJITTER;
+ 		/* restart the frequency calibration interval */
+ 		pps_fbase = *raw_ts;
+-		write_sequnlock_irqrestore(&xtime_lock, flags);
++		write_seqcount_end(&xtime_seq);
++		raw_spin_unlock_irqrestore(&xtime_lock, flags);
+ 		pr_err("hardpps: PPSJITTER: bad pulse\n");
+ 		return;
+ 	}
+@@ -948,7 +955,8 @@ void hardpps(const struct timespec *phase_ts, const struct timespec *raw_ts)
+ 
+ 	hardpps_update_phase(pts_norm.nsec);
+ 
+-	write_sequnlock_irqrestore(&xtime_lock, flags);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock_irqrestore(&xtime_lock, flags);
+ }
+ EXPORT_SYMBOL(hardpps);
+ 
+diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
+index da6c9ec..39de540 100644
+--- a/kernel/time/tick-common.c
++++ b/kernel/time/tick-common.c
+@@ -63,13 +63,15 @@ int tick_is_oneshot_available(void)
+ static void tick_periodic(int cpu)
+ {
+ 	if (tick_do_timer_cpu == cpu) {
+-		write_seqlock(&xtime_lock);
++		raw_spin_lock(&xtime_lock);
++		write_seqcount_begin(&xtime_seq);
+ 
+ 		/* Keep track of the next tick event */
+ 		tick_next_period = ktime_add(tick_next_period, tick_period);
+ 
+ 		do_timer(1);
+-		write_sequnlock(&xtime_lock);
++		write_seqcount_end(&xtime_seq);
++		raw_spin_unlock(&xtime_lock);
+ 	}
+ 
+ 	update_process_times(user_mode(get_irq_regs()));
+@@ -130,9 +132,9 @@ void tick_setup_periodic(struct clock_event_device *dev, int broadcast)
+ 		ktime_t next;
+ 
+ 		do {
+-			seq = read_seqbegin(&xtime_lock);
++			seq = read_seqcount_begin(&xtime_seq);
+ 			next = tick_next_period;
+-		} while (read_seqretry(&xtime_lock, seq));
++		} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 		clockevents_set_mode(dev, CLOCK_EVT_MODE_ONESHOT);
+ 
+diff --git a/kernel/time/tick-internal.h b/kernel/time/tick-internal.h
+index 4e265b9..c91100d 100644
+--- a/kernel/time/tick-internal.h
++++ b/kernel/time/tick-internal.h
+@@ -141,4 +141,5 @@ static inline int tick_device_is_functional(struct clock_event_device *dev)
+ #endif
+ 
+ extern void do_timer(unsigned long ticks);
+-extern seqlock_t xtime_lock;
++extern raw_spinlock_t xtime_lock;
++extern seqcount_t xtime_seq;
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index c923640..d7abd2f 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -56,7 +56,8 @@ static void tick_do_update_jiffies64(ktime_t now)
+ 		return;
+ 
+ 	/* Reevalute with xtime_lock held */
+-	write_seqlock(&xtime_lock);
++	raw_spin_lock(&xtime_lock);
++	write_seqcount_begin(&xtime_seq);
+ 
+ 	delta = ktime_sub(now, last_jiffies_update);
+ 	if (delta.tv64 >= tick_period.tv64) {
+@@ -79,7 +80,8 @@ static void tick_do_update_jiffies64(ktime_t now)
+ 		/* Keep the tick_next_period variable up to date */
+ 		tick_next_period = ktime_add(last_jiffies_update, tick_period);
+ 	}
+-	write_sequnlock(&xtime_lock);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock(&xtime_lock);
+ }
+ 
+ /*
+@@ -89,12 +91,14 @@ static ktime_t tick_init_jiffy_update(void)
+ {
+ 	ktime_t period;
+ 
+-	write_seqlock(&xtime_lock);
++	raw_spin_lock(&xtime_lock);
++	write_seqcount_begin(&xtime_seq);
+ 	/* Did we start the jiffies update yet ? */
+ 	if (last_jiffies_update.tv64 == 0)
+ 		last_jiffies_update = tick_next_period;
+ 	period = last_jiffies_update;
+-	write_sequnlock(&xtime_lock);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock(&xtime_lock);
+ 	return period;
+ }
+ 
+@@ -345,11 +349,11 @@ void tick_nohz_stop_sched_tick(int inidle)
+ 	ts->idle_calls++;
+ 	/* Read jiffies and the time when jiffies were updated last */
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 		last_update = last_jiffies_update;
+ 		last_jiffies = jiffies;
+ 		time_delta = timekeeping_max_deferment();
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 	if (rcu_needs_cpu(cpu) || printk_needs_cpu(cpu) ||
+ 	    arch_needs_cpu(cpu)) {
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 2378413..da9e1f9 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -139,8 +139,8 @@ static inline s64 timekeeping_get_ns_raw(void)
+  * This read-write spinlock protects us from races in SMP while
+  * playing with xtime.
+  */
+-__cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
+-
++__cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(xtime_lock);
++seqcount_t xtime_seq;
+ 
+ /*
+  * The current time
+@@ -222,7 +222,7 @@ void getnstimeofday(struct timespec *ts)
+ 	WARN_ON(timekeeping_suspended);
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 
+ 		*ts = xtime;
+ 		nsecs = timekeeping_get_ns();
+@@ -230,7 +230,7 @@ void getnstimeofday(struct timespec *ts)
+ 		/* If arch requires, add in gettimeoffset() */
+ 		nsecs += arch_gettimeoffset();
+ 
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 	timespec_add_ns(ts, nsecs);
+ }
+@@ -245,14 +245,14 @@ ktime_t ktime_get(void)
+ 	WARN_ON(timekeeping_suspended);
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 		secs = xtime.tv_sec + wall_to_monotonic.tv_sec;
+ 		nsecs = xtime.tv_nsec + wall_to_monotonic.tv_nsec;
+ 		nsecs += timekeeping_get_ns();
+ 		/* If arch requires, add in gettimeoffset() */
+ 		nsecs += arch_gettimeoffset();
+ 
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 	/*
+ 	 * Use ktime_set/ktime_add_ns to create a proper ktime on
+ 	 * 32-bit architectures without CONFIG_KTIME_SCALAR.
+@@ -278,14 +278,14 @@ void ktime_get_ts(struct timespec *ts)
+ 	WARN_ON(timekeeping_suspended);
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 		*ts = xtime;
+ 		tomono = wall_to_monotonic;
+ 		nsecs = timekeeping_get_ns();
+ 		/* If arch requires, add in gettimeoffset() */
+ 		nsecs += arch_gettimeoffset();
+ 
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 	set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec,
+ 				ts->tv_nsec + tomono.tv_nsec + nsecs);
+@@ -313,7 +313,7 @@ void getnstime_raw_and_real(struct timespec *ts_raw, struct timespec *ts_real)
+ 	do {
+ 		u32 arch_offset;
+ 
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 
+ 		*ts_raw = raw_time;
+ 		*ts_real = xtime;
+@@ -326,7 +326,7 @@ void getnstime_raw_and_real(struct timespec *ts_raw, struct timespec *ts_real)
+ 		nsecs_raw += arch_offset;
+ 		nsecs_real += arch_offset;
+ 
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 	timespec_add_ns(ts_raw, nsecs_raw);
+ 	timespec_add_ns(ts_real, nsecs_real);
+@@ -365,7 +365,8 @@ int do_settimeofday(const struct timespec *tv)
+ 	if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
+ 		return -EINVAL;
+ 
+-	write_seqlock_irqsave(&xtime_lock, flags);
++	raw_spin_lock_irqsave(&xtime_lock, flags);
++	write_seqcount_begin(&xtime_seq);
+ 
+ 	timekeeping_forward_now();
+ 
+@@ -381,7 +382,8 @@ int do_settimeofday(const struct timespec *tv)
+ 	update_vsyscall(&xtime, &wall_to_monotonic, timekeeper.clock,
+ 				timekeeper.mult);
+ 
+-	write_sequnlock_irqrestore(&xtime_lock, flags);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock_irqrestore(&xtime_lock, flags);
+ 
+ 	/* signal hrtimers about time change */
+ 	clock_was_set();
+@@ -405,7 +407,8 @@ int timekeeping_inject_offset(struct timespec *ts)
+ 	if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC)
+ 		return -EINVAL;
+ 
+-	write_seqlock_irqsave(&xtime_lock, flags);
++	raw_spin_lock_irqsave(&xtime_lock, flags);
++	write_seqcount_begin(&xtime_seq);
+ 
+ 	timekeeping_forward_now();
+ 
+@@ -418,7 +421,8 @@ int timekeeping_inject_offset(struct timespec *ts)
+ 	update_vsyscall(&xtime, &wall_to_monotonic, timekeeper.clock,
+ 				timekeeper.mult);
+ 
+-	write_sequnlock_irqrestore(&xtime_lock, flags);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock_irqrestore(&xtime_lock, flags);
+ 
+ 	/* signal hrtimers about time change */
+ 	clock_was_set();
+@@ -490,11 +494,11 @@ void getrawmonotonic(struct timespec *ts)
+ 	s64 nsecs;
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 		nsecs = timekeeping_get_ns_raw();
+ 		*ts = raw_time;
+ 
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 	timespec_add_ns(ts, nsecs);
+ }
+@@ -510,11 +514,11 @@ int timekeeping_valid_for_hres(void)
+ 	int ret;
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 
+ 		ret = timekeeper.clock->flags & CLOCK_SOURCE_VALID_FOR_HRES;
+ 
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 	return ret;
+ }
+@@ -572,7 +576,8 @@ void __init timekeeping_init(void)
+ 	read_persistent_clock(&now);
+ 	read_boot_clock(&boot);
+ 
+-	write_seqlock_irqsave(&xtime_lock, flags);
++	raw_spin_lock_irqsave(&xtime_lock, flags);
++	write_seqcount_begin(&xtime_seq);
+ 
+ 	ntp_init();
+ 
+@@ -593,7 +598,8 @@ void __init timekeeping_init(void)
+ 				-boot.tv_sec, -boot.tv_nsec);
+ 	total_sleep_time.tv_sec = 0;
+ 	total_sleep_time.tv_nsec = 0;
+-	write_sequnlock_irqrestore(&xtime_lock, flags);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock_irqrestore(&xtime_lock, flags);
+ }
+ 
+ /* time in seconds when suspend began */
+@@ -640,7 +646,8 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
+ 	if (!(ts.tv_sec == 0 && ts.tv_nsec == 0))
+ 		return;
+ 
+-	write_seqlock_irqsave(&xtime_lock, flags);
++	raw_spin_lock_irqsave(&xtime_lock, flags);
++	write_seqcount_begin(&xtime_seq);
+ 	timekeeping_forward_now();
+ 
+ 	__timekeeping_inject_sleeptime(delta);
+@@ -650,7 +657,8 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
+ 	update_vsyscall(&xtime, &wall_to_monotonic, timekeeper.clock,
+ 				timekeeper.mult);
+ 
+-	write_sequnlock_irqrestore(&xtime_lock, flags);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock_irqrestore(&xtime_lock, flags);
+ 
+ 	/* signal hrtimers about time change */
+ 	clock_was_set();
+@@ -673,7 +681,8 @@ static void timekeeping_resume(void)
+ 
+ 	clocksource_resume();
+ 
+-	write_seqlock_irqsave(&xtime_lock, flags);
++	raw_spin_lock_irqsave(&xtime_lock, flags);
++	write_seqcount_begin(&xtime_seq);
+ 
+ 	if (timespec_compare(&ts, &timekeeping_suspend_time) > 0) {
+ 		ts = timespec_sub(ts, timekeeping_suspend_time);
+@@ -683,7 +692,8 @@ static void timekeeping_resume(void)
+ 	timekeeper.clock->cycle_last = timekeeper.clock->read(timekeeper.clock);
+ 	timekeeper.ntp_error = 0;
+ 	timekeeping_suspended = 0;
+-	write_sequnlock_irqrestore(&xtime_lock, flags);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock_irqrestore(&xtime_lock, flags);
+ 
+ 	touch_softlockup_watchdog();
+ 
+@@ -701,7 +711,8 @@ static int timekeeping_suspend(void)
+ 
+ 	read_persistent_clock(&timekeeping_suspend_time);
+ 
+-	write_seqlock_irqsave(&xtime_lock, flags);
++	raw_spin_lock_irqsave(&xtime_lock, flags);
++	write_seqcount_begin(&xtime_seq);
+ 	timekeeping_forward_now();
+ 	timekeeping_suspended = 1;
+ 
+@@ -724,7 +735,8 @@ static int timekeeping_suspend(void)
+ 		timekeeping_suspend_time =
+ 			timespec_add(timekeeping_suspend_time, delta_delta);
+ 	}
+-	write_sequnlock_irqrestore(&xtime_lock, flags);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock_irqrestore(&xtime_lock, flags);
+ 
+ 	clockevents_notify(CLOCK_EVT_NOTIFY_SUSPEND, NULL);
+ 	clocksource_suspend();
+@@ -1101,13 +1113,13 @@ void get_monotonic_boottime(struct timespec *ts)
+ 	WARN_ON(timekeeping_suspended);
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 		*ts = xtime;
+ 		tomono = wall_to_monotonic;
+ 		sleep = total_sleep_time;
+ 		nsecs = timekeeping_get_ns();
+ 
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 	set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec + sleep.tv_sec,
+ 			ts->tv_nsec + tomono.tv_nsec + sleep.tv_nsec + nsecs);
+@@ -1158,10 +1170,10 @@ struct timespec current_kernel_time(void)
+ 	unsigned long seq;
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 
+ 		now = xtime;
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 	return now;
+ }
+@@ -1173,11 +1185,11 @@ struct timespec get_monotonic_coarse(void)
+ 	unsigned long seq;
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 
+ 		now = xtime;
+ 		mono = wall_to_monotonic;
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 
+ 	set_normalized_timespec(&now, now.tv_sec + mono.tv_sec,
+ 				now.tv_nsec + mono.tv_nsec);
+@@ -1209,11 +1221,11 @@ void get_xtime_and_monotonic_and_sleep_offset(struct timespec *xtim,
+ 	unsigned long seq;
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 		*xtim = xtime;
+ 		*wtom = wall_to_monotonic;
+ 		*sleep = total_sleep_time;
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ }
+ 
+ /**
+@@ -1225,9 +1237,9 @@ ktime_t ktime_get_monotonic_offset(void)
+ 	struct timespec wtom;
+ 
+ 	do {
+-		seq = read_seqbegin(&xtime_lock);
++		seq = read_seqcount_begin(&xtime_seq);
+ 		wtom = wall_to_monotonic;
+-	} while (read_seqretry(&xtime_lock, seq));
++	} while (read_seqcount_retry(&xtime_seq, seq));
+ 	return timespec_to_ktime(wtom);
+ }
+ 
+@@ -1239,7 +1251,9 @@ ktime_t ktime_get_monotonic_offset(void)
+  */
+ void xtime_update(unsigned long ticks)
+ {
+-	write_seqlock(&xtime_lock);
++	raw_spin_lock(&xtime_lock);
++	write_seqcount_begin(&xtime_seq);
+ 	do_timer(ticks);
+-	write_sequnlock(&xtime_lock);
++	write_seqcount_end(&xtime_seq);
++	raw_spin_unlock(&xtime_lock);
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0024-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0024-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0024-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0024-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch)
@@ -0,0 +1,72 @@
+From a876b5721192cd2f70f4422d06bad3c617a3dff4 Mon Sep 17 00:00:00 2001
+From: Mike Galbraith <efault at gmx.de>
+Date: Wed, 7 Dec 2011 12:48:42 +0100
+Subject: [PATCH 024/271] intel_idle: Convert i7300_idle_lock to raw spinlock
+
+24 core Intel box's first exposure to 3.0.12-rt30-rc3 didn't go well.
+
+[   27.104159] i7300_idle: loaded v1.55
+[   27.104192] BUG: scheduling while atomic: swapper/2/0/0x00000002
+[   27.104309] Pid: 0, comm: swapper/2 Tainted: G           N  3.0.12-rt30-rc3-rt #1
+[   27.104317] Call Trace:
+[   27.104338]  [<ffffffff810046a5>] dump_trace+0x85/0x2e0
+[   27.104372]  [<ffffffff8144eb00>] thread_return+0x12b/0x30b
+[   27.104381]  [<ffffffff8144f1b9>] schedule+0x29/0xb0
+[   27.104389]  [<ffffffff814506e5>] rt_spin_lock_slowlock+0xc5/0x240
+[   27.104401]  [<ffffffffa01f818f>] i7300_idle_notifier+0x3f/0x360 [i7300_idle]
+[   27.104415]  [<ffffffff814546c7>] notifier_call_chain+0x37/0x70
+[   27.104426]  [<ffffffff81454748>] __atomic_notifier_call_chain+0x48/0x70
+[   27.104439]  [<ffffffff81001a39>] cpu_idle+0x89/0xb0
+[   27.104449] bad: scheduling from the idle thread!
+
+Signed-off-by: Mike Galbraith <efault at gmx.de>
+Cc: Steven Rostedt <rostedt at goodmis.org>
+Link: http://lkml.kernel.org/r/1323258522.5057.73.camel@marge.simson.net
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/idle/i7300_idle.c |    8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/idle/i7300_idle.c b/drivers/idle/i7300_idle.c
+index c976285..5537d7c 100644
+--- a/drivers/idle/i7300_idle.c
++++ b/drivers/idle/i7300_idle.c
+@@ -75,7 +75,7 @@ static unsigned long past_skip;
+ 
+ static struct pci_dev *fbd_dev;
+ 
+-static spinlock_t i7300_idle_lock;
++static raw_spinlock_t i7300_idle_lock;
+ static int i7300_idle_active;
+ 
+ static u8 i7300_idle_thrtctl_saved;
+@@ -457,7 +457,7 @@ static int i7300_idle_notifier(struct notifier_block *nb, unsigned long val,
+ 		idle_begin_time = ktime_get();
+ 	}
+ 
+-	spin_lock_irqsave(&i7300_idle_lock, flags);
++	raw_spin_lock_irqsave(&i7300_idle_lock, flags);
+ 	if (val == IDLE_START) {
+ 
+ 		cpumask_set_cpu(smp_processor_id(), idle_cpumask);
+@@ -506,7 +506,7 @@ static int i7300_idle_notifier(struct notifier_block *nb, unsigned long val,
+ 		}
+ 	}
+ end:
+-	spin_unlock_irqrestore(&i7300_idle_lock, flags);
++	raw_spin_unlock_irqrestore(&i7300_idle_lock, flags);
+ 	return 0;
+ }
+ 
+@@ -554,7 +554,7 @@ struct debugfs_file_info {
+ 
+ static int __init i7300_idle_init(void)
+ {
+-	spin_lock_init(&i7300_idle_lock);
++	raw_spin_lock_init(&i7300_idle_lock);
+ 	total_us = 0;
+ 
+ 	if (i7300_idle_platform_probe(&fbd_dev, &ioat_dev, forceload))
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0025-mm-memcg-shorten-preempt-disabled-section-around-eve.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0025-mm-memcg-shorten-preempt-disabled-section-around-eve.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0025-mm-memcg-shorten-preempt-disabled-section-around-eve.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0025-mm-memcg-shorten-preempt-disabled-section-around-eve.patch)
@@ -0,0 +1,125 @@
+From 54f432df89e52aea8483514b2f07be4d93567a62 Mon Sep 17 00:00:00 2001
+From: Johannes Weiner <hannes at cmpxchg.org>
+Date: Thu, 17 Nov 2011 07:49:25 +0100
+Subject: [PATCH 025/271] mm: memcg: shorten preempt-disabled section around
+ event checks
+
+Only the ratelimit checks themselves have to run with preemption
+disabled; the resulting actions - checking for usage thresholds,
+updating the soft limit tree - can and should run with preemption
+enabled.
+
+Signed-off-by: Johannes Weiner <jweiner at redhat.com>
+Tested-by: Luis Henriques <henrix at camandro.org>
+Cc: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/memcontrol.c |   73 ++++++++++++++++++++++++++-----------------------------
+ 1 file changed, 35 insertions(+), 38 deletions(-)
+
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index c8425b1..9c92c4d 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -683,37 +683,32 @@ static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg,
+ 	return total;
+ }
+ 
+-static bool __memcg_event_check(struct mem_cgroup *memcg, int target)
++static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
++				       enum mem_cgroup_events_target target)
+ {
+ 	unsigned long val, next;
+ 
+ 	val = __this_cpu_read(memcg->stat->events[MEM_CGROUP_EVENTS_COUNT]);
+ 	next = __this_cpu_read(memcg->stat->targets[target]);
+ 	/* from time_after() in jiffies.h */
+-	return ((long)next - (long)val < 0);
+-}
+-
+-static void __mem_cgroup_target_update(struct mem_cgroup *memcg, int target)
+-{
+-	unsigned long val, next;
+-
+-	val = __this_cpu_read(memcg->stat->events[MEM_CGROUP_EVENTS_COUNT]);
+-
+-	switch (target) {
+-	case MEM_CGROUP_TARGET_THRESH:
+-		next = val + THRESHOLDS_EVENTS_TARGET;
+-		break;
+-	case MEM_CGROUP_TARGET_SOFTLIMIT:
+-		next = val + SOFTLIMIT_EVENTS_TARGET;
+-		break;
+-	case MEM_CGROUP_TARGET_NUMAINFO:
+-		next = val + NUMAINFO_EVENTS_TARGET;
+-		break;
+-	default:
+-		return;
++	if ((long)next - (long)val < 0) {
++		switch (target) {
++		case MEM_CGROUP_TARGET_THRESH:
++			next = val + THRESHOLDS_EVENTS_TARGET;
++			break;
++		case MEM_CGROUP_TARGET_SOFTLIMIT:
++			next = val + SOFTLIMIT_EVENTS_TARGET;
++			break;
++		case MEM_CGROUP_TARGET_NUMAINFO:
++			next = val + NUMAINFO_EVENTS_TARGET;
++			break;
++		default:
++			break;
++		}
++		__this_cpu_write(memcg->stat->targets[target], next);
++		return true;
+ 	}
+-
+-	__this_cpu_write(memcg->stat->targets[target], next);
++	return false;
+ }
+ 
+ /*
+@@ -724,25 +719,27 @@ static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
+ {
+ 	preempt_disable();
+ 	/* threshold event is triggered in finer grain than soft limit */
+-	if (unlikely(__memcg_event_check(memcg, MEM_CGROUP_TARGET_THRESH))) {
++	if (unlikely(mem_cgroup_event_ratelimit(memcg,
++						MEM_CGROUP_TARGET_THRESH))) {
++		bool do_softlimit, do_numainfo;
++
++		do_softlimit = mem_cgroup_event_ratelimit(memcg,
++						MEM_CGROUP_TARGET_SOFTLIMIT);
++#if MAX_NUMNODES > 1
++		do_numainfo = mem_cgroup_event_ratelimit(memcg,
++						MEM_CGROUP_TARGET_NUMAINFO);
++#endif
++		preempt_enable();
++
+ 		mem_cgroup_threshold(memcg);
+-		__mem_cgroup_target_update(memcg, MEM_CGROUP_TARGET_THRESH);
+-		if (unlikely(__memcg_event_check(memcg,
+-			     MEM_CGROUP_TARGET_SOFTLIMIT))) {
++		if (unlikely(do_softlimit))
+ 			mem_cgroup_update_tree(memcg, page);
+-			__mem_cgroup_target_update(memcg,
+-						   MEM_CGROUP_TARGET_SOFTLIMIT);
+-		}
+ #if MAX_NUMNODES > 1
+-		if (unlikely(__memcg_event_check(memcg,
+-			MEM_CGROUP_TARGET_NUMAINFO))) {
++		if (unlikely(do_numainfo))
+ 			atomic_inc(&memcg->numainfo_events);
+-			__mem_cgroup_target_update(memcg,
+-				MEM_CGROUP_TARGET_NUMAINFO);
+-		}
+ #endif
+-	}
+-	preempt_enable();
++	} else
++		preempt_enable();
+ }
+ 
+ static struct mem_cgroup *mem_cgroup_from_cont(struct cgroup *cont)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0026-tracing-Account-for-preempt-off-in-preempt_schedule.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0026-tracing-Account-for-preempt-off-in-preempt_schedule.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0026-tracing-Account-for-preempt-off-in-preempt_schedule.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0026-tracing-Account-for-preempt-off-in-preempt_schedule.patch)
@@ -0,0 +1,53 @@
+From aaecb62746d261560f56ae6d81be3b5e8bb35c0e Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Thu, 29 Sep 2011 12:24:30 -0500
+Subject: [PATCH 026/271] tracing: Account for preempt off in
+ preempt_schedule()
+
+preempt_schedule() uses the preempt_disable_notrace() version because
+the function tracer could otherwise recurse infinitely: the tracer
+uses preempt_enable_notrace(), which may call back into
+preempt_schedule() while NEED_RESCHED is still set and PREEMPT_ACTIVE
+has not yet been set.
+
+See commit: d1f74e20b5b064a130cd0743a256c2d3cfe84010 that made this
+change.
+
+The preemptoff and preemptirqsoff latency tracers require the first
+and last preempt count modifiers to enable tracing. But this skips
+the checks. Since we cannot convert them back to the non-notrace
+version, we can use the idle() hooks for the latency tracers here.
+That is, the start/stop_critical_timings() works well to manually
+start and stop the latency tracer for preempt off timings.
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Signed-off-by: Clark Williams <williams at redhat.com>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c |    9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 1ae1cab..3d84a43 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4519,7 +4519,16 @@ asmlinkage void __sched notrace preempt_schedule(void)
+ 
+ 	do {
+ 		add_preempt_count_notrace(PREEMPT_ACTIVE);
++		/*
++		 * The add/subtract must not be traced by the function
++		 * tracer. But we still want to account for the
++		 * preempt off latency tracer. Since the _notrace versions
++		 * of add/subtract skip the accounting for latency tracer
++		 * we must force it manually.
++		 */
++		start_critical_timings();
+ 		__schedule();
++		stop_critical_timings();
+ 		sub_preempt_count_notrace(PREEMPT_ACTIVE);
+ 
+ 		/*
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0027-signal-revert-ptrace-preempt-magic.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0027-signal-revert-ptrace-preempt-magic.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0027-signal-revert-ptrace-preempt-magic.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0027-signal-revert-ptrace-preempt-magic.patch.patch)
@@ -0,0 +1,33 @@
+From 502488c14bd0005a39f259de1bf04aa941d7eccd Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 21 Sep 2011 19:57:12 +0200
+Subject: [PATCH 027/271] signal-revert-ptrace-preempt-magic.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/signal.c |    8 --------
+ 1 file changed, 8 deletions(-)
+
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 08e0b97..9b6bd34 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1866,15 +1866,7 @@ static void ptrace_stop(int exit_code, int why, int clear_code, siginfo_t *info)
+ 		if (gstop_done && ptrace_reparented(current))
+ 			do_notify_parent_cldstop(current, false, why);
+ 
+-		/*
+-		 * Don't want to allow preemption here, because
+-		 * sys_ptrace() needs this task to be inactive.
+-		 *
+-		 * XXX: implement read_unlock_no_resched().
+-		 */
+-		preempt_disable();
+ 		read_unlock(&tasklist_lock);
+-		preempt_enable_no_resched();
+ 		schedule();
+ 	} else {
+ 		/*
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0028-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0028-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0028-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0028-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch)
@@ -0,0 +1,29 @@
+From b86d034533c72b2690058c945da9992fc9317cd6 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 16 Mar 2011 14:45:31 +0100
+Subject: [PATCH 028/271] arm: Mark pmu interupt IRQF_NO_THREAD
+
+PMU interrupt must not be threaded. Remove IRQF_DISABLED while at it
+as we run all handlers with interrupts disabled anyway.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/arm/kernel/perf_event.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
+index ecebb89..e1286fb 100644
+--- a/arch/arm/kernel/perf_event.c
++++ b/arch/arm/kernel/perf_event.c
+@@ -435,7 +435,7 @@ armpmu_reserve_hardware(struct arm_pmu *armpmu)
+ 		}
+ 
+ 		err = request_irq(irq, handle_irq,
+-				  IRQF_DISABLED | IRQF_NOBALANCING,
++				  IRQF_NOBALANCING | IRQF_NO_THREAD,
+ 				  "arm-pmu", armpmu);
+ 		if (err) {
+ 			pr_err("unable to request IRQ%d for ARM PMU counters\n",
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0029-arm-Allow-forced-irq-threading.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0029-arm-Allow-forced-irq-threading.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0029-arm-Allow-forced-irq-threading.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0029-arm-Allow-forced-irq-threading.patch)
@@ -0,0 +1,23 @@
+From 338a0c58f532ac2c08911c3b36a007f693b66031 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 16 Jul 2011 13:15:20 +0200
+Subject: [PATCH 029/271] arm: Allow forced irq threading
+
+All timer interrupts and the perf interrupt are marked NO_THREAD, so
+its safe to allow forced interrupt threading.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/arm/Kconfig |    1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -29,6 +29,7 @@
+ 	select HAVE_GENERIC_HARDIRQS
+ 	select HAVE_SPARSE_IRQ
+ 	select GENERIC_IRQ_SHOW
++	select IRQ_FORCED_THREADING
+ 	select CPU_PM if (SUSPEND || CPU_IDLE)
+ 	select HAVE_BPF_JIT
+ 	help

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0030-preempt-rt-Convert-arm-boot_lock-to-raw.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0030-preempt-rt-Convert-arm-boot_lock-to-raw.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0030-preempt-rt-Convert-arm-boot_lock-to-raw.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0030-preempt-rt-Convert-arm-boot_lock-to-raw.patch)
@@ -0,0 +1,294 @@
+From cb1d3c15bd97ee10110e601ac0decb2ce6e271e2 Mon Sep 17 00:00:00 2001
+From: Frank Rowand <frank.rowand at am.sony.com>
+Date: Mon, 19 Sep 2011 14:51:14 -0700
+Subject: [PATCH 030/271] preempt-rt: Convert arm boot_lock to raw
+
+The arm boot_lock is used by the secondary processor startup code.  The locking
+task is the idle thread, which has idle->sched_class == &idle_sched_class.
+idle_sched_class->enqueue_task == NULL, so if the idle task blocks on the
+lock, the attempt to wake it when the lock becomes available will fail:
+
+try_to_wake_up()
+   ...
+      activate_task()
+         enqueue_task()
+            p->sched_class->enqueue_task(rq, p, flags)
+
+Fix by converting boot_lock to a raw spin lock.
+
+Signed-off-by: Frank Rowand <frank.rowand at am.sony.com>
+Link: http://lkml.kernel.org/r/4E77B952.3010606@am.sony.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/arm/mach-exynos/platsmp.c    |   12 ++++++------
+ arch/arm/mach-msm/platsmp.c       |   10 +++++-----
+ arch/arm/mach-omap2/omap-smp.c    |   10 +++++-----
+ arch/arm/mach-tegra/platsmp.c     |   10 +++++-----
+ arch/arm/mach-ux500/platsmp.c     |   10 +++++-----
+ arch/arm/plat-versatile/platsmp.c |   10 +++++-----
+ 6 files changed, 31 insertions(+), 31 deletions(-)
+
+diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c
+index 69ffb2f..fe321b0 100644
+--- a/arch/arm/mach-exynos/platsmp.c
++++ b/arch/arm/mach-exynos/platsmp.c
+@@ -63,7 +63,7 @@ static void __iomem *scu_base_addr(void)
+ 	return (void __iomem *)(S5P_VA_SCU);
+ }
+ 
+-static DEFINE_SPINLOCK(boot_lock);
++static DEFINE_RAW_SPINLOCK(boot_lock);
+ 
+ static void __cpuinit exynos4_gic_secondary_init(void)
+ {
+@@ -108,8 +108,8 @@ void __cpuinit platform_secondary_init(unsigned int cpu)
+ 	/*
+ 	 * Synchronise with the boot thread.
+ 	 */
+-	spin_lock(&boot_lock);
+-	spin_unlock(&boot_lock);
++	raw_spin_lock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ }
+ 
+ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -120,7 +120,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * Set synchronisation state between this boot processor
+ 	 * and the secondary one
+ 	 */
+-	spin_lock(&boot_lock);
++	raw_spin_lock(&boot_lock);
+ 
+ 	/*
+ 	 * The secondary processor is waiting to be released from
+@@ -149,7 +149,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 
+ 		if (timeout == 0) {
+ 			printk(KERN_ERR "cpu1 power enable failed");
+-			spin_unlock(&boot_lock);
++			raw_spin_unlock(&boot_lock);
+ 			return -ETIMEDOUT;
+ 		}
+ 	}
+@@ -177,7 +177,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * now the secondary core is starting up let it run its
+ 	 * calibrations, then wait for it to finish
+ 	 */
+-	spin_unlock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ 
+ 	return pen_release != -1 ? -ENOSYS : 0;
+ }
+diff --git a/arch/arm/mach-msm/platsmp.c b/arch/arm/mach-msm/platsmp.c
+index fdec58a..cad6b81 100644
+--- a/arch/arm/mach-msm/platsmp.c
++++ b/arch/arm/mach-msm/platsmp.c
+@@ -39,7 +39,7 @@ extern void msm_secondary_startup(void);
+  */
+ volatile int pen_release = -1;
+ 
+-static DEFINE_SPINLOCK(boot_lock);
++static DEFINE_RAW_SPINLOCK(boot_lock);
+ 
+ static inline int get_core_count(void)
+ {
+@@ -69,8 +69,8 @@ void __cpuinit platform_secondary_init(unsigned int cpu)
+ 	/*
+ 	 * Synchronise with the boot thread.
+ 	 */
+-	spin_lock(&boot_lock);
+-	spin_unlock(&boot_lock);
++	raw_spin_lock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ }
+ 
+ static __cpuinit void prepare_cold_cpu(unsigned int cpu)
+@@ -107,7 +107,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * set synchronisation state between this boot processor
+ 	 * and the secondary one
+ 	 */
+-	spin_lock(&boot_lock);
++	raw_spin_lock(&boot_lock);
+ 
+ 	/*
+ 	 * The secondary processor is waiting to be released from
+@@ -141,7 +141,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * now the secondary core is starting up let it run its
+ 	 * calibrations, then wait for it to finish
+ 	 */
+-	spin_unlock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ 
+ 	return pen_release != -1 ? -ENOSYS : 0;
+ }
+diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c
+index 4412ddb..490de9c 100644
+--- a/arch/arm/mach-omap2/omap-smp.c
++++ b/arch/arm/mach-omap2/omap-smp.c
+@@ -29,7 +29,7 @@
+ /* SCU base address */
+ static void __iomem *scu_base;
+ 
+-static DEFINE_SPINLOCK(boot_lock);
++static DEFINE_RAW_SPINLOCK(boot_lock);
+ 
+ void __cpuinit platform_secondary_init(unsigned int cpu)
+ {
+@@ -43,8 +43,8 @@ void __cpuinit platform_secondary_init(unsigned int cpu)
+ 	/*
+ 	 * Synchronise with the boot thread.
+ 	 */
+-	spin_lock(&boot_lock);
+-	spin_unlock(&boot_lock);
++	raw_spin_lock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ }
+ 
+ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -53,7 +53,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * Set synchronisation state between this boot processor
+ 	 * and the secondary one
+ 	 */
+-	spin_lock(&boot_lock);
++	raw_spin_lock(&boot_lock);
+ 
+ 	/*
+ 	 * Update the AuxCoreBoot0 with boot state for secondary core.
+@@ -70,7 +70,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * Now the secondary core is starting up let it run its
+ 	 * calibrations, then wait for it to finish
+ 	 */
+-	spin_unlock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm/mach-tegra/platsmp.c b/arch/arm/mach-tegra/platsmp.c
+index 7d2b5d0..571f61a 100644
+--- a/arch/arm/mach-tegra/platsmp.c
++++ b/arch/arm/mach-tegra/platsmp.c
+@@ -28,7 +28,7 @@
+ 
+ extern void tegra_secondary_startup(void);
+ 
+-static DEFINE_SPINLOCK(boot_lock);
++static DEFINE_RAW_SPINLOCK(boot_lock);
+ static void __iomem *scu_base = IO_ADDRESS(TEGRA_ARM_PERIF_BASE);
+ 
+ #define EVP_CPU_RESET_VECTOR \
+@@ -50,8 +50,8 @@ void __cpuinit platform_secondary_init(unsigned int cpu)
+ 	/*
+ 	 * Synchronise with the boot thread.
+ 	 */
+-	spin_lock(&boot_lock);
+-	spin_unlock(&boot_lock);
++	raw_spin_lock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ }
+ 
+ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -65,7 +65,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * set synchronisation state between this boot processor
+ 	 * and the secondary one
+ 	 */
+-	spin_lock(&boot_lock);
++	raw_spin_lock(&boot_lock);
+ 
+ 
+ 	/* set the reset vector to point to the secondary_startup routine */
+@@ -101,7 +101,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * now the secondary core is starting up let it run its
+ 	 * calibrations, then wait for it to finish
+ 	 */
+-	spin_unlock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm/mach-ux500/platsmp.c b/arch/arm/mach-ux500/platsmp.c
+index a19e398..9e92c6c 100644
+--- a/arch/arm/mach-ux500/platsmp.c
++++ b/arch/arm/mach-ux500/platsmp.c
+@@ -57,7 +57,7 @@ static void __iomem *scu_base_addr(void)
+ 	return NULL;
+ }
+ 
+-static DEFINE_SPINLOCK(boot_lock);
++static DEFINE_RAW_SPINLOCK(boot_lock);
+ 
+ void __cpuinit platform_secondary_init(unsigned int cpu)
+ {
+@@ -77,8 +77,8 @@ void __cpuinit platform_secondary_init(unsigned int cpu)
+ 	/*
+ 	 * Synchronise with the boot thread.
+ 	 */
+-	spin_lock(&boot_lock);
+-	spin_unlock(&boot_lock);
++	raw_spin_lock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ }
+ 
+ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -89,7 +89,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * set synchronisation state between this boot processor
+ 	 * and the secondary one
+ 	 */
+-	spin_lock(&boot_lock);
++	raw_spin_lock(&boot_lock);
+ 
+ 	/*
+ 	 * The secondary processor is waiting to be released from
+@@ -110,7 +110,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * now the secondary core is starting up let it run its
+ 	 * calibrations, then wait for it to finish
+ 	 */
+-	spin_unlock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ 
+ 	return pen_release != -1 ? -ENOSYS : 0;
+ }
+diff --git a/arch/arm/plat-versatile/platsmp.c b/arch/arm/plat-versatile/platsmp.c
+index 92f18d3..287bbb5 100644
+--- a/arch/arm/plat-versatile/platsmp.c
++++ b/arch/arm/plat-versatile/platsmp.c
+@@ -37,7 +37,7 @@ static void __cpuinit write_pen_release(int val)
+ 	outer_clean_range(__pa(&pen_release), __pa(&pen_release + 1));
+ }
+ 
+-static DEFINE_SPINLOCK(boot_lock);
++static DEFINE_RAW_SPINLOCK(boot_lock);
+ 
+ void __cpuinit platform_secondary_init(unsigned int cpu)
+ {
+@@ -57,8 +57,8 @@ void __cpuinit platform_secondary_init(unsigned int cpu)
+ 	/*
+ 	 * Synchronise with the boot thread.
+ 	 */
+-	spin_lock(&boot_lock);
+-	spin_unlock(&boot_lock);
++	raw_spin_lock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ }
+ 
+ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -69,7 +69,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * Set synchronisation state between this boot processor
+ 	 * and the secondary one
+ 	 */
+-	spin_lock(&boot_lock);
++	raw_spin_lock(&boot_lock);
+ 
+ 	/*
+ 	 * This is really belt and braces; we hold unintended secondary
+@@ -99,7 +99,7 @@ int __cpuinit boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	 * now the secondary core is starting up let it run its
+ 	 * calibrations, then wait for it to finish
+ 	 */
+-	spin_unlock(&boot_lock);
++	raw_spin_unlock(&boot_lock);
+ 
+ 	return pen_release != -1 ? -ENOSYS : 0;
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0031-sched-Create-schedule_preempt_disabled.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0031-sched-Create-schedule_preempt_disabled.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0031-sched-Create-schedule_preempt_disabled.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0031-sched-Create-schedule_preempt_disabled.patch)
@@ -0,0 +1,55 @@
+From 22acbdee13dbc479e9326159d9498348c59c2dea Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 21 Mar 2011 12:09:35 +0100
+Subject: [PATCH 031/271] sched: Create schedule_preempt_disabled()
+
+Get rid of the ever repeating:
+
+    preempt_enable_no_resched();
+    schedule();
+    preempt_disable();
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sched.h |    1 +
+ kernel/sched.c        |   12 ++++++++++++
+ 2 files changed, 13 insertions(+)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 1c4f3e9..9897fe6 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -359,6 +359,7 @@ extern signed long schedule_timeout_interruptible(signed long timeout);
+ extern signed long schedule_timeout_killable(signed long timeout);
+ extern signed long schedule_timeout_uninterruptible(signed long timeout);
+ asmlinkage void schedule(void);
++extern void schedule_preempt_disabled(void);
+ extern int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner);
+ 
+ struct nsproxy;
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 3d84a43..342114c 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4455,6 +4455,18 @@ asmlinkage void __sched schedule(void)
+ }
+ EXPORT_SYMBOL(schedule);
+ 
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
+ #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
+ 
+ static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0032-sched-Use-schedule_preempt_disabled.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0032-sched-Use-schedule_preempt_disabled.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0032-sched-Use-schedule_preempt_disabled.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0032-sched-Use-schedule_preempt_disabled.patch)
@@ -0,0 +1,500 @@
+From 698b029ee2447b5ce48a584dc662c91ad690c385 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 21 Mar 2011 12:33:18 +0100
+Subject: [PATCH 032/271] sched: Use schedule_preempt_disabled()
+
+Coccinelle based conversion.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/arm/kernel/process.c              |    4 +---
+ arch/avr32/kernel/process.c            |    4 +---
+ arch/blackfin/kernel/process.c         |    4 +---
+ arch/cris/kernel/process.c             |    4 +---
+ arch/frv/kernel/process.c              |    4 +---
+ arch/h8300/kernel/process.c            |    4 +---
+ arch/ia64/kernel/process.c             |    4 +---
+ arch/m32r/kernel/process.c             |    4 +---
+ arch/m68k/kernel/process_mm.c          |    4 +---
+ arch/m68k/kernel/process_no.c          |    4 +---
+ arch/microblaze/kernel/process.c       |    4 +---
+ arch/mips/kernel/process.c             |    4 +---
+ arch/mn10300/kernel/process.c          |    4 +---
+ arch/parisc/kernel/process.c           |    4 +---
+ arch/powerpc/kernel/idle.c             |    8 ++++----
+ arch/powerpc/platforms/iseries/setup.c |    8 ++------
+ arch/s390/kernel/process.c             |    4 +---
+ arch/score/kernel/process.c            |    4 +---
+ arch/sh/kernel/idle.c                  |    4 +---
+ arch/sparc/kernel/process_32.c         |    8 ++------
+ arch/sparc/kernel/process_64.c         |   10 ++++------
+ arch/tile/kernel/process.c             |    4 +---
+ arch/x86/kernel/process_32.c           |    4 +---
+ arch/x86/kernel/process_64.c           |    4 +---
+ arch/xtensa/kernel/process.c           |    4 +---
+ init/main.c                            |    5 +----
+ kernel/mutex.c                         |    4 +---
+ kernel/softirq.c                       |    4 +---
+ 28 files changed, 36 insertions(+), 95 deletions(-)
+
+diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
+index 3d0c6fb..54833ff 100644
+--- a/arch/arm/kernel/process.c
++++ b/arch/arm/kernel/process.c
+@@ -214,9 +214,7 @@ void cpu_idle(void)
+ 		}
+ 		leds_event(led_idle_end);
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/avr32/kernel/process.c b/arch/avr32/kernel/process.c
+index ef5a2a0..c8724c9 100644
+--- a/arch/avr32/kernel/process.c
++++ b/arch/avr32/kernel/process.c
+@@ -38,9 +38,7 @@ void cpu_idle(void)
+ 		while (!need_resched())
+ 			cpu_idle_sleep();
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/blackfin/kernel/process.c b/arch/blackfin/kernel/process.c
+index 6a80a9e..11acc10 100644
+--- a/arch/blackfin/kernel/process.c
++++ b/arch/blackfin/kernel/process.c
+@@ -92,9 +92,7 @@ void cpu_idle(void)
+ 		while (!need_resched())
+ 			idle();
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/cris/kernel/process.c b/arch/cris/kernel/process.c
+index aa585e4..d8f50ff 100644
+--- a/arch/cris/kernel/process.c
++++ b/arch/cris/kernel/process.c
+@@ -115,9 +115,7 @@ void cpu_idle (void)
+ 				idle = default_idle;
+ 			idle();
+ 		}
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/frv/kernel/process.c b/arch/frv/kernel/process.c
+index 3901df1..29cc497 100644
+--- a/arch/frv/kernel/process.c
++++ b/arch/frv/kernel/process.c
+@@ -92,9 +92,7 @@ void cpu_idle(void)
+ 				idle();
+ 		}
+ 
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/h8300/kernel/process.c b/arch/h8300/kernel/process.c
+index 933bd38..1a173b3 100644
+--- a/arch/h8300/kernel/process.c
++++ b/arch/h8300/kernel/process.c
+@@ -81,9 +81,7 @@ void cpu_idle(void)
+ 	while (1) {
+ 		while (!need_resched())
+ 			idle();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
+index 6d33c5c..9dc52b6 100644
+--- a/arch/ia64/kernel/process.c
++++ b/arch/ia64/kernel/process.c
+@@ -330,9 +330,7 @@ cpu_idle (void)
+ 			normal_xtp();
+ #endif
+ 		}
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 		check_pgt_cache();
+ 		if (cpu_is_offline(cpu))
+ 			play_dead();
+diff --git a/arch/m32r/kernel/process.c b/arch/m32r/kernel/process.c
+index 422bea9..3a4a32b 100644
+--- a/arch/m32r/kernel/process.c
++++ b/arch/m32r/kernel/process.c
+@@ -90,9 +90,7 @@ void cpu_idle (void)
+ 
+ 			idle();
+ 		}
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/m68k/kernel/process_mm.c b/arch/m68k/kernel/process_mm.c
+index aa4ffb8..c413aa0 100644
+--- a/arch/m68k/kernel/process_mm.c
++++ b/arch/m68k/kernel/process_mm.c
+@@ -94,9 +94,7 @@ void cpu_idle(void)
+ 	while (1) {
+ 		while (!need_resched())
+ 			idle();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/m68k/kernel/process_no.c b/arch/m68k/kernel/process_no.c
+index 5e1078c..f7fe6c3 100644
+--- a/arch/m68k/kernel/process_no.c
++++ b/arch/m68k/kernel/process_no.c
+@@ -73,9 +73,7 @@ void cpu_idle(void)
+ 	/* endless idle loop with no priority at all */
+ 	while (1) {
+ 		idle();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/microblaze/kernel/process.c b/arch/microblaze/kernel/process.c
+index 95cc295..d3b2b42 100644
+--- a/arch/microblaze/kernel/process.c
++++ b/arch/microblaze/kernel/process.c
+@@ -108,9 +108,7 @@ void cpu_idle(void)
+ 			idle();
+ 		tick_nohz_restart_sched_tick();
+ 
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 		check_pgt_cache();
+ 	}
+ }
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index c47f96e..4dbf66d 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -78,9 +78,7 @@ void __noreturn cpu_idle(void)
+ 			play_dead();
+ #endif
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/mn10300/kernel/process.c b/arch/mn10300/kernel/process.c
+index 28eec31..cac401d 100644
+--- a/arch/mn10300/kernel/process.c
++++ b/arch/mn10300/kernel/process.c
+@@ -123,9 +123,7 @@ void cpu_idle(void)
+ 			idle();
+ 		}
+ 
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index 4b4b918..f6eb367 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -71,9 +71,7 @@ void cpu_idle(void)
+ 	while (1) {
+ 		while (!need_resched())
+ 			barrier();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 		check_pgt_cache();
+ 	}
+ }
+diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
+index 39a2baa..f46dae5 100644
+--- a/arch/powerpc/kernel/idle.c
++++ b/arch/powerpc/kernel/idle.c
+@@ -94,11 +94,11 @@ void cpu_idle(void)
+ 		HMT_medium();
+ 		ppc64_runlatch_on();
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		if (cpu_should_die())
++		if (cpu_should_die()) {
++			preempt_enable_no_resched();
+ 			cpu_die();
+-		schedule();
+-		preempt_disable();
++		}
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/platforms/iseries/setup.c b/arch/powerpc/platforms/iseries/setup.c
+index ea0acbd..e0c5b49 100644
+--- a/arch/powerpc/platforms/iseries/setup.c
++++ b/arch/powerpc/platforms/iseries/setup.c
+@@ -582,9 +582,7 @@ static void iseries_shared_idle(void)
+ 		if (hvlpevent_is_pending())
+ 			process_iSeries_events();
+ 
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+@@ -611,9 +609,7 @@ static void iseries_dedicated_idle(void)
+ 
+ 		ppc64_runlatch_on();
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
+index 53088e2..fa093f7 100644
+--- a/arch/s390/kernel/process.c
++++ b/arch/s390/kernel/process.c
+@@ -94,9 +94,7 @@ void cpu_idle(void)
+ 		while (!need_resched())
+ 			default_idle();
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/score/kernel/process.c b/arch/score/kernel/process.c
+index 25d0803..2707023 100644
+--- a/arch/score/kernel/process.c
++++ b/arch/score/kernel/process.c
+@@ -53,9 +53,7 @@ void __noreturn cpu_idle(void)
+ 		while (!need_resched())
+ 			barrier();
+ 
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/sh/kernel/idle.c b/arch/sh/kernel/idle.c
+index db4ecd7..b7c18f0 100644
+--- a/arch/sh/kernel/idle.c
++++ b/arch/sh/kernel/idle.c
+@@ -112,9 +112,7 @@ void cpu_idle(void)
+ 		}
+ 
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/sparc/kernel/process_32.c b/arch/sparc/kernel/process_32.c
+index f793742..935fdbc 100644
+--- a/arch/sparc/kernel/process_32.c
++++ b/arch/sparc/kernel/process_32.c
+@@ -113,9 +113,7 @@ void cpu_idle(void)
+ 			while (!need_resched())
+ 				cpu_relax();
+ 		}
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 		check_pgt_cache();
+ 	}
+ }
+@@ -138,9 +136,7 @@ void cpu_idle(void)
+ 			while (!need_resched())
+ 				cpu_relax();
+ 		}
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 		check_pgt_cache();
+ 	}
+ }
+diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
+index 3739a06..8ba0dbe 100644
+--- a/arch/sparc/kernel/process_64.c
++++ b/arch/sparc/kernel/process_64.c
+@@ -102,15 +102,13 @@ void cpu_idle(void)
+ 
+ 		tick_nohz_restart_sched_tick();
+ 
+-		preempt_enable_no_resched();
+-
+ #ifdef CONFIG_HOTPLUG_CPU
+-		if (cpu_is_offline(cpu))
++		if (cpu_is_offline(cpu)) {
++			preempt_enable_no_resched();
+ 			cpu_play_dead();
++		}
+ #endif
+-
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/tile/kernel/process.c b/arch/tile/kernel/process.c
+index 9c45d8b..5a8b631 100644
+--- a/arch/tile/kernel/process.c
++++ b/arch/tile/kernel/process.c
+@@ -106,9 +106,7 @@ void cpu_idle(void)
+ 			current_thread_info()->status |= TS_POLLING;
+ 		}
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index 8598296..ada175e3 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -117,9 +117,7 @@ void cpu_idle(void)
+ 			start_critical_timings();
+ 		}
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 6a364a6..08840ab 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -150,9 +150,7 @@ void cpu_idle(void)
+ 		}
+ 
+ 		tick_nohz_restart_sched_tick();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
+index 47041e7..2c90047 100644
+--- a/arch/xtensa/kernel/process.c
++++ b/arch/xtensa/kernel/process.c
+@@ -113,9 +113,7 @@ void cpu_idle(void)
+ 	while (1) {
+ 		while (!need_resched())
+ 			platform_idle();
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 	}
+ }
+ 
+diff --git a/init/main.c b/init/main.c
+index cb08fea2..d30d42a 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -378,11 +378,8 @@ static noinline void __init_refok rest_init(void)
+ 	 * at least once to get things moving:
+ 	 */
+ 	init_idle_bootup_task(current);
+-	preempt_enable_no_resched();
+-	schedule();
+-
++	schedule_preempt_disabled();
+ 	/* Call into cpu_idle with preempt disabled */
+-	preempt_disable();
+ 	cpu_idle();
+ }
+ 
+diff --git a/kernel/mutex.c b/kernel/mutex.c
+index 89096dd..a307cc9 100644
+--- a/kernel/mutex.c
++++ b/kernel/mutex.c
+@@ -240,9 +240,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 
+ 		/* didn't get the lock, go to sleep: */
+ 		spin_unlock_mutex(&lock->wait_lock, flags);
+-		preempt_enable_no_resched();
+-		schedule();
+-		preempt_disable();
++		schedule_preempt_disabled();
+ 		spin_lock_mutex(&lock->wait_lock, flags);
+ 	}
+ 
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 2c71d91..c9da2c8 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -744,9 +744,7 @@ static int run_ksoftirqd(void * __bind_cpu)
+ 	while (!kthread_should_stop()) {
+ 		preempt_disable();
+ 		if (!local_softirq_pending()) {
+-			preempt_enable_no_resched();
+-			schedule();
+-			preempt_disable();
++			schedule_preempt_disabled();
+ 		}
+ 
+ 		__set_current_state(TASK_RUNNING);
+-- 
+1.7.10
+
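
The repeated three-call sequence that the patch above collapses — enable preemption without rescheduling, call the scheduler, then disable preemption again — can be modeled in plain userspace C. Everything below is an illustrative stand-in for the kernel's bookkeeping, not the real scheduler API:

```c
#include <assert.h>

/* Toy model of preemption bookkeeping: idle loops and mutex slowpaths
 * run with preemption disabled (count == 1) around the sleep. */
static int preempt_count = 1;
static int schedule_calls = 0;

static void preempt_enable_no_resched(void) { preempt_count--; }
static void preempt_disable(void)           { preempt_count++; }
static void schedule(void)                  { schedule_calls++; }

/* The helper folds the old three-call sequence into one place, so no
 * caller can forget to re-disable preemption after the switch. */
static void schedule_preempt_disabled(void)
{
    preempt_enable_no_resched();
    schedule();
    preempt_disable();
}
```

Each call site in the patch becomes a single `schedule_preempt_disabled();`, which is both shorter and harder to get wrong.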

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0033-signals-Do-not-wakeup-self.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0033-signals-Do-not-wakeup-self.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0033-signals-Do-not-wakeup-self.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0033-signals-Do-not-wakeup-self.patch)
@@ -0,0 +1,30 @@
+From de6b9fdc021f743053c644619e35a2975762f3a8 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 3 Jul 2009 08:44:44 -0500
+Subject: [PATCH 033/271] signals: Do not wakeup self
+
+Signals which are delivered by current to current can do without
+waking up current :)
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/signal.c |    3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 9b6bd34..fffb683 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -682,6 +682,9 @@ void signal_wake_up(struct task_struct *t, int resume)
+ 
+ 	set_tsk_thread_flag(t, TIF_SIGPENDING);
+ 
++	if (unlikely(t == current))
++		return;
++
+ 	/*
+ 	 * For SIGKILL, we want to wake it up in the stopped/traced/killable
+ 	 * case. We don't check t->state here because there is a race with it
+-- 
+1.7.10
+
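
A minimal sketch of the early return this patch adds (the task type and wakeup counter are hypothetical stand-ins, not kernel structures): a signal queued by the current task for itself needs no wakeup, because the target is already running.

```c
#include <assert.h>

struct task { int pending; };
static int wakeups = 0;

static void signal_wake_up(struct task *t, struct task *cur)
{
    t->pending = 1;     /* corresponds to set_tsk_thread_flag() */

    if (t == cur)
        return;         /* sender == receiver: nothing to wake */

    wakeups++;          /* stand-in for the real wakeup path */
}
```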

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0034-posix-timers-Prevent-broadcast-signals.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0034-posix-timers-Prevent-broadcast-signals.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0034-posix-timers-Prevent-broadcast-signals.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0034-posix-timers-Prevent-broadcast-signals.patch)
@@ -0,0 +1,38 @@
+From de1ed77114140246003727012a0fcbe2ce687b82 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 3 Jul 2009 08:29:20 -0500
+Subject: [PATCH 034/271] posix-timers: Prevent broadcast signals
+
+Posix timers should not send broadcast signals or kernel-only
+signals. Prevent it.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/posix-timers.c |    4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/posix-timers.c b/kernel/posix-timers.c
+index 69185ae..7b73c34 100644
+--- a/kernel/posix-timers.c
++++ b/kernel/posix-timers.c
+@@ -439,6 +439,7 @@ static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer)
+ static struct pid *good_sigevent(sigevent_t * event)
+ {
+ 	struct task_struct *rtn = current->group_leader;
++	int sig = event->sigev_signo;
+ 
+ 	if ((event->sigev_notify & SIGEV_THREAD_ID ) &&
+ 		(!(rtn = find_task_by_vpid(event->sigev_notify_thread_id)) ||
+@@ -447,7 +448,8 @@ static struct pid *good_sigevent(sigevent_t * event)
+ 		return NULL;
+ 
+ 	if (((event->sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE) &&
+-	    ((event->sigev_signo <= 0) || (event->sigev_signo > SIGRTMAX)))
++	    (sig <= 0 || sig > SIGRTMAX || sig_kernel_only(sig) ||
++	     sig_kernel_coredump(sig)))
+ 		return NULL;
+ 
+ 	return task_pid(rtn);
+-- 
+1.7.10
+
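
The condition the patch above tightens can be approximated in userspace C. The kernel's `sig_kernel_only()` and `sig_kernel_coredump()` classify signals by their default action; the simplified signal sets below are illustrative stand-ins, not the kernel's definitions:

```c
#include <assert.h>
#include <signal.h>
#include <stdbool.h>

/* Simplified stand-in: signals the kernel handles unconditionally. */
static bool sig_kernel_only(int sig)
{
    return sig == SIGKILL || sig == SIGSTOP;
}

/* Simplified stand-in: signals whose default action dumps core. */
static bool sig_kernel_coredump(int sig)
{
    return sig == SIGQUIT || sig == SIGABRT || sig == SIGSEGV;
}

/* Mirrors the patched check in good_sigevent(): in range, and
 * neither kernel-only nor a coredump signal. */
static bool sigev_signo_valid(int sig)
{
    return sig > 0 && sig <= SIGRTMAX &&
           !sig_kernel_only(sig) && !sig_kernel_coredump(sig);
}
```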

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0035-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0035-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0035-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0035-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch)
@@ -0,0 +1,218 @@
+From bff1f10262b172489f000c8913d4944143fa9e07 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 3 Jul 2009 08:44:56 -0500
+Subject: [PATCH 035/271] signals: Allow rt tasks to cache one sigqueue struct
+
+To avoid allocation, allow rt tasks to cache one sigqueue struct in
+the task struct.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sched.h  |    1 +
+ include/linux/signal.h |    1 +
+ kernel/exit.c          |    2 +-
+ kernel/fork.c          |    1 +
+ kernel/signal.c        |   83 +++++++++++++++++++++++++++++++++++++++++++++---
+ 5 files changed, 83 insertions(+), 5 deletions(-)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 9897fe6..7268acf 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1387,6 +1387,7 @@ struct task_struct {
+ /* signal handlers */
+ 	struct signal_struct *signal;
+ 	struct sighand_struct *sighand;
++	struct sigqueue *sigqueue_cache;
+ 
+ 	sigset_t blocked, real_blocked;
+ 	sigset_t saved_sigmask;	/* restored if set_restore_sigmask() was used */
+diff --git a/include/linux/signal.h b/include/linux/signal.h
+index a822300..a448900 100644
+--- a/include/linux/signal.h
++++ b/include/linux/signal.h
+@@ -229,6 +229,7 @@ static inline void init_sigpending(struct sigpending *sig)
+ }
+ 
+ extern void flush_sigqueue(struct sigpending *queue);
++extern void flush_task_sigqueue(struct task_struct *tsk);
+ 
+ /* Test if 'sig' is valid signal. Use this instead of testing _NSIG directly */
+ static inline int valid_signal(unsigned long sig)
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 5a8a66e..9ed0883 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -141,7 +141,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 	 * Do this under ->siglock, we can race with another thread
+ 	 * doing sigqueue_free() if we have SIGQUEUE_PREALLOC signals.
+ 	 */
+-	flush_sigqueue(&tsk->pending);
++	flush_task_sigqueue(tsk);
+ 	tsk->sighand = NULL;
+ 	spin_unlock(&sighand->siglock);
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 79ee71f..7335449 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1136,6 +1136,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
+ 	spin_lock_init(&p->alloc_lock);
+ 
+ 	init_sigpending(&p->pending);
++	p->sigqueue_cache = NULL;
+ 
+ 	p->utime = cputime_zero;
+ 	p->stime = cputime_zero;
+diff --git a/kernel/signal.c b/kernel/signal.c
+index fffb683..92c5605 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -344,13 +344,45 @@ static bool task_participate_group_stop(struct task_struct *task)
+ 	return false;
+ }
+ 
++#ifdef __HAVE_ARCH_CMPXCHG
++static inline struct sigqueue *get_task_cache(struct task_struct *t)
++{
++	struct sigqueue *q = t->sigqueue_cache;
++
++	if (cmpxchg(&t->sigqueue_cache, q, NULL) != q)
++		return NULL;
++	return q;
++}
++
++static inline int put_task_cache(struct task_struct *t, struct sigqueue *q)
++{
++	if (cmpxchg(&t->sigqueue_cache, NULL, q) == NULL)
++		return 0;
++	return 1;
++}
++
++#else
++
++static inline struct sigqueue *get_task_cache(struct task_struct *t)
++{
++	return NULL;
++}
++
++static inline int put_task_cache(struct task_struct *t, struct sigqueue *q)
++{
++	return 1;
++}
++
++#endif
++
+ /*
+  * allocate a new signal queue record
+  * - this may be called without locks if and only if t == current, otherwise an
+  *   appropriate lock must be held to stop the target task from exiting
+  */
+ static struct sigqueue *
+-__sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimit)
++__sigqueue_do_alloc(int sig, struct task_struct *t, gfp_t flags,
++		    int override_rlimit, int fromslab)
+ {
+ 	struct sigqueue *q = NULL;
+ 	struct user_struct *user;
+@@ -367,7 +399,10 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimi
+ 	if (override_rlimit ||
+ 	    atomic_read(&user->sigpending) <=
+ 			task_rlimit(t, RLIMIT_SIGPENDING)) {
+-		q = kmem_cache_alloc(sigqueue_cachep, flags);
++		if (!fromslab)
++			q = get_task_cache(t);
++		if (!q)
++			q = kmem_cache_alloc(sigqueue_cachep, flags);
+ 	} else {
+ 		print_dropped_signal(sig);
+ 	}
+@@ -384,6 +419,13 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimi
+ 	return q;
+ }
+ 
++static struct sigqueue *
++__sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags,
++		 int override_rlimit)
++{
++	return __sigqueue_do_alloc(sig, t, flags, override_rlimit, 0);
++}
++
+ static void __sigqueue_free(struct sigqueue *q)
+ {
+ 	if (q->flags & SIGQUEUE_PREALLOC)
+@@ -393,6 +435,21 @@ static void __sigqueue_free(struct sigqueue *q)
+ 	kmem_cache_free(sigqueue_cachep, q);
+ }
+ 
++static void sigqueue_free_current(struct sigqueue *q)
++{
++	struct user_struct *up;
++
++	if (q->flags & SIGQUEUE_PREALLOC)
++		return;
++
++	up = q->user;
++	if (rt_prio(current->normal_prio) && !put_task_cache(current, q)) {
++		atomic_dec(&up->sigpending);
++		free_uid(up);
++	} else
++		  __sigqueue_free(q);
++}
++
+ void flush_sigqueue(struct sigpending *queue)
+ {
+ 	struct sigqueue *q;
+@@ -406,6 +463,21 @@ void flush_sigqueue(struct sigpending *queue)
+ }
+ 
+ /*
++ * Called from __exit_signal. Flush tsk->pending and
++ * tsk->sigqueue_cache
++ */
++void flush_task_sigqueue(struct task_struct *tsk)
++{
++	struct sigqueue *q;
++
++	flush_sigqueue(&tsk->pending);
++
++	q = get_task_cache(tsk);
++	if (q)
++		kmem_cache_free(sigqueue_cachep, q);
++}
++
++/*
+  * Flush all pending signals for a task.
+  */
+ void __flush_signals(struct task_struct *t)
+@@ -554,7 +626,7 @@ static void collect_signal(int sig, struct sigpending *list, siginfo_t *info)
+ still_pending:
+ 		list_del_init(&first->list);
+ 		copy_siginfo(info, &first->info);
+-		__sigqueue_free(first);
++		sigqueue_free_current(first);
+ 	} else {
+ 		/*
+ 		 * Ok, it wasn't in the queue.  This must be
+@@ -600,6 +672,8 @@ int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info)
+ {
+ 	int signr;
+ 
++	WARN_ON_ONCE(tsk != current);
++
+ 	/* We only dequeue private signals from ourselves, we don't let
+ 	 * signalfd steal them
+ 	 */
+@@ -1518,7 +1592,8 @@ EXPORT_SYMBOL(kill_pid);
+  */
+ struct sigqueue *sigqueue_alloc(void)
+ {
+-	struct sigqueue *q = __sigqueue_alloc(-1, current, GFP_KERNEL, 0);
++	/* Preallocated sigqueue objects always from the slabcache ! */
++	struct sigqueue *q = __sigqueue_do_alloc(-1, current, GFP_KERNEL, 0, 1);
+ 
+ 	if (q)
+ 		q->flags |= SIGQUEUE_PREALLOC;
+-- 
+1.7.10
+
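
The one-entry cache added by this patch relies on `cmpxchg()` to claim or refill the slot without taking a lock. A self-contained C11 sketch of that pattern follows; the struct and function names are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct sigqueue { int unused; };   /* placeholder payload */

/* One-slot lock-free cache, mirroring the shape of the patch's
 * get_task_cache()/put_task_cache(). */
struct sq_cache { _Atomic(struct sigqueue *) slot; };

static struct sigqueue *cache_get(struct sq_cache *c)
{
    struct sigqueue *q = atomic_load(&c->slot);

    /* Claim the entry only if nobody raced us for it. */
    if (q != NULL && atomic_compare_exchange_strong(&c->slot, &q, NULL))
        return q;
    return NULL;
}

/* Returns 0 if the slot was empty and q was cached, 1 otherwise --
 * the same convention as the patch's put_task_cache(). */
static int cache_put(struct sq_cache *c, struct sigqueue *q)
{
    struct sigqueue *expected = NULL;

    return atomic_compare_exchange_strong(&c->slot, &expected, q) ? 0 : 1;
}
```

The compare-and-swap makes both operations safe against concurrent access to the slot without any spinlock, which matters on PREEMPT_RT where spinlocks can sleep.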

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0036-signal-x86-Delay-calling-signals-in-atomic.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0036-signal-x86-Delay-calling-signals-in-atomic.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0036-signal-x86-Delay-calling-signals-in-atomic.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0036-signal-x86-Delay-calling-signals-in-atomic.patch)
@@ -0,0 +1,155 @@
+From dc137232bdae1d60e8b5ad7f2715ab1c331955be Mon Sep 17 00:00:00 2001
+From: Oleg Nesterov <oleg at redhat.com>
+Date: Tue, 10 Apr 2012 14:33:53 -0400
+Subject: [PATCH 036/271] signal/x86: Delay calling signals in atomic
+
+On x86_64 we must disable preemption before we enable interrupts
+for stack faults, int3 and debugging, because the current task is using
+a per CPU debug stack defined by the IST. If we schedule out, another task
+can come in and use the same stack and cause the stack to be corrupted
+and crash the kernel on return.
+
+When CONFIG_PREEMPT_RT_FULL is enabled, spin_locks become mutexes, and
+one of these is the spin lock used in signal handling.
+
+Some of the debug code (int3) causes do_trap() to send a signal.
+This function calls a spin lock that has been converted to a mutex
+and has the possibility to sleep. If this happens, the above issue with
+the corrupted stack is possible.
+
+Instead of calling the signal right away, for PREEMPT_RT and x86_64,
+the signal information is stored on the stacks task_struct and
+TIF_NOTIFY_RESUME is set. Then on exit of the trap, the signal resume
+code will send the signal when preemption is enabled.
+
+[ rostedt: Switched from #ifdef CONFIG_PREEMPT_RT_FULL to
+  ARCH_RT_DELAYS_SIGNAL_SEND and added comments to the code. ]
+
+Cc: stable-rt at vger.kernel.org
+Signed-off-by: Oleg Nesterov <oleg at redhat.com>
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/include/asm/signal.h |   13 +++++++++++++
+ arch/x86/kernel/signal.c      |    9 +++++++++
+ include/linux/sched.h         |    4 ++++
+ kernel/signal.c               |   37 +++++++++++++++++++++++++++++++++++--
+ 4 files changed, 61 insertions(+), 2 deletions(-)
+
+diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h
+index 598457c..1213ebd 100644
+--- a/arch/x86/include/asm/signal.h
++++ b/arch/x86/include/asm/signal.h
+@@ -31,6 +31,19 @@ typedef struct {
+ 	unsigned long sig[_NSIG_WORDS];
+ } sigset_t;
+ 
++/*
++ * Because some traps use the IST stack, we must keep
++ * preemption disabled while calling do_trap(), but do_trap()
++ * may call force_sig_info() which will grab the signal spin_locks
++ * for the task, which in PREEMPT_RT_FULL are mutexes.
++ * By defining ARCH_RT_DELAYS_SIGNAL_SEND the force_sig_info() will
++ * set TIF_NOTIFY_RESUME and set up the signal to be sent on exit
++ * of the trap.
++ */
++#if defined(CONFIG_PREEMPT_RT_FULL) && defined(CONFIG_X86_64)
++#define ARCH_RT_DELAYS_SIGNAL_SEND
++#endif
++
+ #else
+ /* Here we must cater to libcs that poke about in kernel headers.  */
+ 
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index 54ddaeb2..12c4d53 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -820,6 +820,15 @@ do_notify_resume(struct pt_regs *regs, void *unused, __u32 thread_info_flags)
+ 		mce_notify_process();
+ #endif /* CONFIG_X86_64 && CONFIG_X86_MCE */
+ 
++#ifdef ARCH_RT_DELAYS_SIGNAL_SEND
++	if (unlikely(current->forced_info.si_signo)) {
++		struct task_struct *t = current;
++		force_sig_info(t->forced_info.si_signo,
++					&t->forced_info, t);
++		t->forced_info.si_signo = 0;
++	}
++#endif
++
+ 	/* deal with pending signal delivery */
+ 	if (thread_info_flags & _TIF_SIGPENDING)
+ 		do_signal(regs);
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 7268acf..ed2b9f9 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1392,6 +1392,10 @@ struct task_struct {
+ 	sigset_t blocked, real_blocked;
+ 	sigset_t saved_sigmask;	/* restored if set_restore_sigmask() was used */
+ 	struct sigpending pending;
++#ifdef CONFIG_PREEMPT_RT_FULL
++	/* TODO: move me into ->restart_block ? */
++	struct siginfo forced_info;
++#endif
+ 
+ 	unsigned long sas_ss_sp;
+ 	size_t sas_ss_size;
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 92c5605..385d137 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1273,8 +1273,8 @@ int do_send_sig_info(int sig, struct siginfo *info, struct task_struct *p,
+  * We don't want to have recursive SIGSEGV's etc, for example,
+  * that is why we also clear SIGNAL_UNKILLABLE.
+  */
+-int
+-force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
++static int
++do_force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
+ {
+ 	unsigned long int flags;
+ 	int ret, blocked, ignored;
+@@ -1299,6 +1299,39 @@ force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
+ 	return ret;
+ }
+ 
++int force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
++{
++/*
++ * On some archs, PREEMPT_RT has to delay sending a signal from a trap
++ * since it can not enable preemption, and the signal code's spin_locks
++ * turn into mutexes. Instead, it must set TIF_NOTIFY_RESUME which will
++ * send the signal on exit of the trap.
++ */
++#ifdef ARCH_RT_DELAYS_SIGNAL_SEND
++	if (in_atomic()) {
++		if (WARN_ON_ONCE(t != current))
++			return 0;
++		if (WARN_ON_ONCE(t->forced_info.si_signo))
++			return 0;
++
++		if (is_si_special(info)) {
++			WARN_ON_ONCE(info != SEND_SIG_PRIV);
++			t->forced_info.si_signo = sig;
++			t->forced_info.si_errno = 0;
++			t->forced_info.si_code = SI_KERNEL;
++			t->forced_info.si_pid = 0;
++			t->forced_info.si_uid = 0;
++		} else {
++			t->forced_info = *info;
++		}
++
++		set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
++		return 0;
++	}
++#endif
++	return do_force_sig_info(sig, info, t);
++}
++
+ /*
+  * Nuke all other threads in the group.
+  */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0037-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0037-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0037-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0037-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch)
@@ -0,0 +1,53 @@
+From 1ee386247ad5d93f71f7ddfe723a117db72b3017 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:30 -0500
+Subject: [PATCH 037/271] generic: Use raw local irq variant for generic
+ cmpxchg
+
+No point in tracing those.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/asm-generic/cmpxchg-local.h |    8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/include/asm-generic/cmpxchg-local.h b/include/asm-generic/cmpxchg-local.h
+index 2533fdd..d8d4c89 100644
+--- a/include/asm-generic/cmpxchg-local.h
++++ b/include/asm-generic/cmpxchg-local.h
+@@ -21,7 +21,7 @@ static inline unsigned long __cmpxchg_local_generic(volatile void *ptr,
+ 	if (size == 8 && sizeof(unsigned long) != 8)
+ 		wrong_size_cmpxchg(ptr);
+ 
+-	local_irq_save(flags);
++	raw_local_irq_save(flags);
+ 	switch (size) {
+ 	case 1: prev = *(u8 *)ptr;
+ 		if (prev == old)
+@@ -42,7 +42,7 @@ static inline unsigned long __cmpxchg_local_generic(volatile void *ptr,
+ 	default:
+ 		wrong_size_cmpxchg(ptr);
+ 	}
+-	local_irq_restore(flags);
++	raw_local_irq_restore(flags);
+ 	return prev;
+ }
+ 
+@@ -55,11 +55,11 @@ static inline u64 __cmpxchg64_local_generic(volatile void *ptr,
+ 	u64 prev;
+ 	unsigned long flags;
+ 
+-	local_irq_save(flags);
++	raw_local_irq_save(flags);
+ 	prev = *(u64 *)ptr;
+ 	if (prev == old)
+ 		*(u64 *)ptr = new;
+-	local_irq_restore(flags);
++	raw_local_irq_restore(flags);
+ 	return prev;
+ }
+ 
+-- 
+1.7.10
+
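
The emulated compare-and-exchange converted above is short enough to sketch in userspace C. The irq save/restore pair is modeled here as a no-op flag; the patch's point is that the kernel version should use the `raw_` variants so the irq-tracing machinery is not invoked around these tiny, hot sections:

```c
#include <assert.h>

/* No-op stand-ins for raw_local_irq_save()/raw_local_irq_restore(). */
static void raw_irq_save(unsigned long *flags)   { *flags = 1; }
static void raw_irq_restore(unsigned long flags) { (void)flags; }

/* Generic cmpxchg emulation: with "interrupts" off, read the old
 * value and write the new one only if it matches what the caller
 * expected. Returns the previous value either way. */
static unsigned long cmpxchg_local_generic(volatile unsigned long *ptr,
                                           unsigned long old_val,
                                           unsigned long new_val)
{
    unsigned long flags, prev;

    raw_irq_save(&flags);
    prev = *ptr;
    if (prev == old_val)
        *ptr = new_val;
    raw_irq_restore(flags);

    return prev;    /* success iff prev == old_val */
}
```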

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0038-drivers-random-Reduce-preempt-disabled-region.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0038-drivers-random-Reduce-preempt-disabled-region.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0038-drivers-random-Reduce-preempt-disabled-region.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0038-drivers-random-Reduce-preempt-disabled-region.patch)
@@ -0,0 +1,43 @@
+From 03704a06f34554f4b02554848ddc4f536c428c61 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:30 -0500
+Subject: [PATCH 038/271] drivers: random: Reduce preempt disabled region
+
+No need to keep preemption disabled across the whole function.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/char/random.c |    9 +++++----
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 6035ab8..786a856 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -633,8 +633,11 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
+ 	preempt_disable();
+ 	/* if over the trickle threshold, use only 1 in 4096 samples */
+ 	if (input_pool.entropy_count > trickle_thresh &&
+-	    ((__this_cpu_inc_return(trickle_count) - 1) & 0xfff))
+-		goto out;
++	    ((__this_cpu_inc_return(trickle_count) - 1) & 0xfff)) {
++		preempt_enable();
++		return;
++	}
++	preempt_enable();
+ 
+ 	sample.jiffies = jiffies;
+ 	sample.cycles = get_cycles();
+@@ -676,8 +679,6 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
+ 		credit_entropy_bits(&input_pool,
+ 				    min_t(int, fls(delta>>1), 11));
+ 	}
+-out:
+-	preempt_enable();
+ }
+ 
+ void add_input_randomness(unsigned int type, unsigned int code,
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0039-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0039-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0039-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0039-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch)
@@ -0,0 +1,71 @@
+From b6eb47cbeb06aa58faec4bc43b9a8b3e99252562 Mon Sep 17 00:00:00 2001
+From: Benedikt Spranger <b.spranger at linutronix.de>
+Date: Sat, 6 Mar 2010 17:47:10 +0100
+Subject: [PATCH 039/271] ARM: AT91: PIT: Remove irq handler when clock event
+ is unused
+
+Set up and remove the interrupt handler in clock event mode selection.
+This avoids calling the (shared) interrupt handler when the device is
+not used.
+
+Signed-off-by: Benedikt Spranger <b.spranger at linutronix.de>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/arm/mach-at91/at91rm9200_time.c  |    2 ++
+ arch/arm/mach-at91/at91sam926x_time.c |    6 +++++-
+ 2 files changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/arch/arm/mach-at91/at91rm9200_time.c b/arch/arm/mach-at91/at91rm9200_time.c
+index 1dd69c8..0666570 100644
+--- a/arch/arm/mach-at91/at91rm9200_time.c
++++ b/arch/arm/mach-at91/at91rm9200_time.c
+@@ -114,6 +114,7 @@ clkevt32k_mode(enum clock_event_mode mode, struct clock_event_device *dev)
+ 	last_crtr = read_CRTR();
+ 	switch (mode) {
+ 	case CLOCK_EVT_MODE_PERIODIC:
++		setup_irq(AT91_ID_SYS, &at91rm9200_timer_irq);
+ 		/* PIT for periodic irqs; fixed rate of 1/HZ */
+ 		irqmask = AT91_ST_PITS;
+ 		at91_sys_write(AT91_ST_PIMR, LATCH);
+@@ -127,6 +128,7 @@ clkevt32k_mode(enum clock_event_mode mode, struct clock_event_device *dev)
+ 		break;
+ 	case CLOCK_EVT_MODE_SHUTDOWN:
+ 	case CLOCK_EVT_MODE_UNUSED:
++		remove_irq(AT91_ID_SYS, &at91rm9200_timer_irq);
+ 	case CLOCK_EVT_MODE_RESUME:
+ 		irqmask = 0;
+ 		break;
+diff --git a/arch/arm/mach-at91/at91sam926x_time.c b/arch/arm/mach-at91/at91sam926x_time.c
+index 4ba8549..97d1e14 100644
+--- a/arch/arm/mach-at91/at91sam926x_time.c
++++ b/arch/arm/mach-at91/at91sam926x_time.c
+@@ -54,7 +54,7 @@ static struct clocksource pit_clk = {
+ 	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
+ };
+ 
+-
++static struct irqaction at91sam926x_pit_irq;
+ /*
+  * Clockevent device:  interrupts every 1/HZ (== pit_cycles * MCK/16)
+  */
+@@ -63,6 +63,9 @@ pit_clkevt_mode(enum clock_event_mode mode, struct clock_event_device *dev)
+ {
+ 	switch (mode) {
+ 	case CLOCK_EVT_MODE_PERIODIC:
++		/* Set up irq handler */
++		setup_irq(AT91_ID_SYS, &at91sam926x_pit_irq);
++
+ 		/* update clocksource counter */
+ 		pit_cnt += pit_cycle * PIT_PICNT(at91_sys_read(AT91_PIT_PIVR));
+ 		at91_sys_write(AT91_PIT_MR, (pit_cycle - 1) | AT91_PIT_PITEN
+@@ -75,6 +78,7 @@ pit_clkevt_mode(enum clock_event_mode mode, struct clock_event_device *dev)
+ 	case CLOCK_EVT_MODE_UNUSED:
+ 		/* disable irq, leaving the clocksource active */
+ 		at91_sys_write(AT91_PIT_MR, (pit_cycle - 1) | AT91_PIT_PITEN);
++		remove_irq(AT91_ID_SYS, &at91sam926x_pit_irq);
+ 		break;
+ 	case CLOCK_EVT_MODE_RESUME:
+ 		break;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0040-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0040-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0040-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0040-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch)
@@ -0,0 +1,167 @@
+From 310f36877b76b0b152d964886977d6773125ac69 Mon Sep 17 00:00:00 2001
+From: Benedikt Spranger <b.spranger at linutronix.de>
+Date: Mon, 8 Mar 2010 18:57:04 +0100
+Subject: [PATCH 040/271] clocksource: TCLIB: Allow higher clock rates for
+ clock events
+
+By default the TCLIB uses the 32KiHz base clock rate for clock events.
+Add a compile-time selection to allow higher clock resolution.
+
+Signed-off-by: Benedikt Spranger <b.spranger at linutronix.de>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/clocksource/tcb_clksrc.c |   44 ++++++++++++++++++++++----------------
+ drivers/misc/Kconfig             |   11 ++++++++--
+ 2 files changed, 35 insertions(+), 20 deletions(-)
+
+diff --git a/drivers/clocksource/tcb_clksrc.c b/drivers/clocksource/tcb_clksrc.c
+index 79c47e8..8976b3d 100644
+--- a/drivers/clocksource/tcb_clksrc.c
++++ b/drivers/clocksource/tcb_clksrc.c
+@@ -21,8 +21,7 @@
+  *     resolution better than 200 nsec).
+  *
+  *   - The third channel may be used to provide a 16-bit clockevent
+- *     source, used in either periodic or oneshot mode.  This runs
+- *     at 32 KiHZ, and can handle delays of up to two seconds.
++ *     source, used in either periodic or oneshot mode.
+  *
+  * A boot clocksource and clockevent source are also currently needed,
+  * unless the relevant platforms (ARM/AT91, AVR32/AT32) are changed so
+@@ -68,6 +67,7 @@ static struct clocksource clksrc = {
+ struct tc_clkevt_device {
+ 	struct clock_event_device	clkevt;
+ 	struct clk			*clk;
++	u32				freq;
+ 	void __iomem			*regs;
+ };
+ 
+@@ -76,13 +76,6 @@ static struct tc_clkevt_device *to_tc_clkevt(struct clock_event_device *clkevt)
+ 	return container_of(clkevt, struct tc_clkevt_device, clkevt);
+ }
+ 
+-/* For now, we always use the 32K clock ... this optimizes for NO_HZ,
+- * because using one of the divided clocks would usually mean the
+- * tick rate can never be less than several dozen Hz (vs 0.5 Hz).
+- *
+- * A divided clock could be good for high resolution timers, since
+- * 30.5 usec resolution can seem "low".
+- */
+ static u32 timer_clock;
+ 
+ static void tc_mode(enum clock_event_mode m, struct clock_event_device *d)
+@@ -105,11 +98,12 @@ static void tc_mode(enum clock_event_mode m, struct clock_event_device *d)
+ 	case CLOCK_EVT_MODE_PERIODIC:
+ 		clk_enable(tcd->clk);
+ 
+-		/* slow clock, count up to RC, then irq and restart */
++		/* count up to RC, then irq and restart */
+ 		__raw_writel(timer_clock
+ 				| ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP_AUTO,
+ 				regs + ATMEL_TC_REG(2, CMR));
+-		__raw_writel((32768 + HZ/2) / HZ, tcaddr + ATMEL_TC_REG(2, RC));
++		__raw_writel((tcd->freq + HZ/2)/HZ,
++			     tcaddr + ATMEL_TC_REG(2, RC));
+ 
+ 		/* Enable clock and interrupts on RC compare */
+ 		__raw_writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER));
+@@ -122,7 +116,7 @@ static void tc_mode(enum clock_event_mode m, struct clock_event_device *d)
+ 	case CLOCK_EVT_MODE_ONESHOT:
+ 		clk_enable(tcd->clk);
+ 
+-		/* slow clock, count up to RC, then irq and stop */
++		/* count up to RC, then irq and stop */
+ 		__raw_writel(timer_clock | ATMEL_TC_CPCSTOP
+ 				| ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP_AUTO,
+ 				regs + ATMEL_TC_REG(2, CMR));
+@@ -152,8 +146,12 @@ static struct tc_clkevt_device clkevt = {
+ 		.features	= CLOCK_EVT_FEAT_PERIODIC
+ 					| CLOCK_EVT_FEAT_ONESHOT,
+ 		.shift		= 32,
++#ifdef CONFIG_ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK
+ 		/* Should be lower than at91rm9200's system timer */
+ 		.rating		= 125,
++#else
++		.rating		= 200,
++#endif
+ 		.set_next_event	= tc_next_event,
+ 		.set_mode	= tc_mode,
+ 	},
+@@ -179,8 +177,9 @@ static struct irqaction tc_irqaction = {
+ 	.handler	= ch2_irq,
+ };
+ 
+-static void __init setup_clkevents(struct atmel_tc *tc, int clk32k_divisor_idx)
++static void __init setup_clkevents(struct atmel_tc *tc, int divisor_idx)
+ {
++	unsigned divisor = atmel_tc_divisors[divisor_idx];
+ 	struct clk *t2_clk = tc->clk[2];
+ 	int irq = tc->irq[2];
+ 
+@@ -188,11 +187,17 @@ static void __init setup_clkevents(struct atmel_tc *tc, int clk32k_divisor_idx)
+ 	clkevt.clk = t2_clk;
+ 	tc_irqaction.dev_id = &clkevt;
+ 
+-	timer_clock = clk32k_divisor_idx;
++	timer_clock = divisor_idx;
+ 
+-	clkevt.clkevt.mult = div_sc(32768, NSEC_PER_SEC, clkevt.clkevt.shift);
+-	clkevt.clkevt.max_delta_ns
+-		= clockevent_delta2ns(0xffff, &clkevt.clkevt);
++	if (!divisor)
++		clkevt.freq = 32768;
++	else
++		clkevt.freq = clk_get_rate(t2_clk)/divisor;
++
++	clkevt.clkevt.mult = div_sc(clkevt.freq, NSEC_PER_SEC,
++				    clkevt.clkevt.shift);
++	clkevt.clkevt.max_delta_ns =
++		clockevent_delta2ns(0xffff, &clkevt.clkevt);
+ 	clkevt.clkevt.min_delta_ns = clockevent_delta2ns(1, &clkevt.clkevt) + 1;
+ 	clkevt.clkevt.cpumask = cpumask_of(0);
+ 
+@@ -295,8 +300,11 @@ static int __init tcb_clksrc_init(void)
+ 	clocksource_register(&clksrc);
+ 
+ 	/* channel 2:  periodic and oneshot timer support */
++#ifdef CONFIG_ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK
+ 	setup_clkevents(tc, clk32k_divisor_idx);
+-
++#else
++	setup_clkevents(tc, best_divisor_idx);
++#endif
+ 	return 0;
+ }
+ arch_initcall(tcb_clksrc_init);
+diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
+index 5664696..f3031a4 100644
+--- a/drivers/misc/Kconfig
++++ b/drivers/misc/Kconfig
+@@ -97,8 +97,7 @@ config ATMEL_TCB_CLKSRC
+ 	  are combined to make a single 32-bit timer.
+ 
+ 	  When GENERIC_CLOCKEVENTS is defined, the third timer channel
+-	  may be used as a clock event device supporting oneshot mode
+-	  (delays of up to two seconds) based on the 32 KiHz clock.
++	  may be used as a clock event device supporting oneshot mode.
+ 
+ config ATMEL_TCB_CLKSRC_BLOCK
+ 	int
+@@ -112,6 +111,14 @@ config ATMEL_TCB_CLKSRC_BLOCK
+ 	  TC can be used for other purposes, such as PWM generation and
+ 	  interval timing.
+ 
++config ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK
++	bool "TC Block use 32 KiHz clock"
++	depends on ATMEL_TCB_CLKSRC
++	default y
++	help
++	  Select this to use 32 KiHz base clock rate as TC block clock
++	  source for clock events.
++
+ config IBM_ASM
+ 	tristate "Device driver for IBM RSA service processor"
+ 	depends on X86 && PCI && INPUT && EXPERIMENTAL
+-- 
+1.7.10
+

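The setup_clkevents() change above derives the clockevent scaling from the actual timer clock instead of hard-coding 32768 Hz. As a rough userspace sketch of the arithmetic (toy models of the kernel's div_sc() and clockevent_delta2ns() helpers, not the kernel code itself), a 32 kHz clock with a 16-bit counter gives a maximum oneshot delay of roughly two seconds, which is the limit the removed Kconfig help text mentioned:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the kernel's div_sc(): mult = (from << shift) / to */
static uint64_t div_sc_model(uint32_t from, uint32_t to, int shift)
{
    return ((uint64_t)from << shift) / to;
}

/* Toy model of clockevent_delta2ns(): ns = (delta << shift) / mult */
static uint64_t delta2ns_model(uint32_t delta, uint64_t mult, int shift)
{
    return ((uint64_t)delta << shift) / mult;
}

/* Maximum oneshot delay in ns for a 16-bit counter at the given rate. */
static uint64_t max_oneshot_ns(uint32_t freq_hz, int shift)
{
    uint64_t mult = div_sc_model(freq_hz, 1000000000u, shift);
    return delta2ns_model(0xffff, mult, shift);
}
```

With a faster divided clock the oneshot range shrinks proportionally; that trade-off is what the new ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK option exposes.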
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0041-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0041-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0041-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0041-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch)
@@ -0,0 +1,29 @@
+From 794562a034d7f42a65c216ed8e9b1de35c93121d Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:18 -0500
+Subject: [PATCH 041/271] drivers/net: tulip_remove_one needs to call
+ pci_disable_device()
+
+Otherwise the device is not completely shut down.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/net/ethernet/dec/tulip/tulip_core.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/net/ethernet/dec/tulip/tulip_core.c b/drivers/net/ethernet/dec/tulip/tulip_core.c
+index 9656dd0..ef7df09 100644
+--- a/drivers/net/ethernet/dec/tulip/tulip_core.c
++++ b/drivers/net/ethernet/dec/tulip/tulip_core.c
+@@ -1949,6 +1949,7 @@ static void __devexit tulip_remove_one (struct pci_dev *pdev)
+ 	pci_iounmap(pdev, tp->base_addr);
+ 	free_netdev (dev);
+ 	pci_release_regions (pdev);
++	pci_disable_device (pdev);
+ 	pci_set_drvdata (pdev, NULL);
+ 
+ 	/* pci_power_off (pdev, -1); */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0042-drivers-net-Use-disable_irq_nosync-in-8139too.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0042-drivers-net-Use-disable_irq_nosync-in-8139too.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0042-drivers-net-Use-disable_irq_nosync-in-8139too.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0042-drivers-net-Use-disable_irq_nosync-in-8139too.patch)
@@ -0,0 +1,30 @@
+From 06e38eda0a63fb06223cd7e01cac0aecde279633 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:24 -0500
+Subject: [PATCH 042/271] drivers/net: Use disable_irq_nosync() in 8139too
+
+Use disable_irq_nosync() instead of disable_irq() as this might be
+called in atomic context with netpoll.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/net/ethernet/realtek/8139too.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/realtek/8139too.c b/drivers/net/ethernet/realtek/8139too.c
+index 4d6b254..004c054 100644
+--- a/drivers/net/ethernet/realtek/8139too.c
++++ b/drivers/net/ethernet/realtek/8139too.c
+@@ -2174,7 +2174,7 @@ static irqreturn_t rtl8139_interrupt (int irq, void *dev_instance)
+  */
+ static void rtl8139_poll_controller(struct net_device *dev)
+ {
+-	disable_irq(dev->irq);
++	disable_irq_nosync(dev->irq);
+ 	rtl8139_interrupt(dev->irq, dev);
+ 	enable_irq(dev->irq);
+ }
+-- 
+1.7.10
+

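For context on the 8139too change: disable_irq() masks the line and then waits for any in-flight handler to finish, so calling it from a context that may itself be the interrupt context (as netpoll can) risks the caller waiting on itself; disable_irq_nosync() only masks. A toy userspace model of that difference (hypothetical names, not the kernel API):

```c
#include <assert.h>

/* Toy IRQ line state: a masked flag plus "handler currently running". */
struct toy_irq {
    int masked;
    int handler_running;
};

/* Like disable_irq_nosync(): just mask, never wait. Safe anywhere. */
static int toy_disable_irq_nosync(struct toy_irq *irq)
{
    irq->masked = 1;
    return 0;
}

/* Like disable_irq(): mask, then wait for the handler to complete.
 * If the caller *is* the running handler, that wait can never finish;
 * report the would-be deadlock instead of spinning forever. */
static int toy_disable_irq(struct toy_irq *irq, int caller_is_handler)
{
    irq->masked = 1;
    if (caller_is_handler && irq->handler_running)
        return -1; /* deadlock: waiting on ourselves */
    irq->handler_running = 0; /* wait completed */
    return 0;
}

/* Simulate the poll_controller path: called while the handler runs. */
static int poll_would_deadlock(int use_nosync)
{
    struct toy_irq irq = { 0, 1 }; /* handler is running */
    return use_nosync ? toy_disable_irq_nosync(&irq)
                      : toy_disable_irq(&irq, 1);
}
```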
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0043-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0043-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0043-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0043-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch)
@@ -0,0 +1,57 @@
+From fd24967f0994bb38ad62f23df19a5f77f02fd62b Mon Sep 17 00:00:00 2001
+From: Darren Hart <dvhltc at us.ibm.com>
+Date: Tue, 18 May 2010 14:33:07 -0700
+Subject: [PATCH 043/271] drivers: net: ehea: Make rx irq handler non-threaded
+ (IRQF_NO_THREAD)
+
+The underlying hardware is edge triggered but presented by XICS as level
+triggered. The edge triggered interrupts are not reissued after masking. This
+is not a problem in mainline which does not mask the interrupt (relying on the
+EOI mechanism instead). The threaded interrupts in PREEMPT_RT do mask the
+interrupt, and can lose interrupts that occurred while masked, resulting in a
+hung ethernet interface.
+
+The receive handler simply calls napi_schedule(); as such, there is no
+significant additional overhead in making this non-threaded, since we either
+wake up the threaded irq handler to call napi_schedule(), or just call
+napi_schedule() directly to wake up the softirqs.  As the receive handler is
+lockless, there is no need to convert any of the ehea spinlock_t's to
+raw_spinlock_t's.
+
+Without this patch, a simple scp file copy loop would fail quickly (usually
+seconds). We have over two hours of sustained scp activity with the patch
+applied.
+
+Credit goes to Will Schmidt for lots of instrumentation and tracing which
+clarified the scenario and to Thomas Gleixner for the incredibly simple
+solution.
+
+Signed-off-by: Darren Hart <dvhltc at us.ibm.com>
+Acked-by: Will Schmidt <will_schmidt at vnet.ibm.com>
+Cc: Jan-Bernd Themann <themann at de.ibm.com>
+Cc: Nivedita Singhvi <niv at us.ibm.com>
+Cc: Brian King <bjking1 at us.ibm.com>
+Cc: Michael Ellerman <ellerman at au1.ibm.com>
+Cc: Doug Maxey <doug.maxey at us.ibm.com>
+LKML-Reference: <4BF30793.5070300 at us.ibm.com>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/net/ethernet/ibm/ehea/ehea_main.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+index bfeccbf..fddfaf1 100644
+--- a/drivers/net/ethernet/ibm/ehea/ehea_main.c
++++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+@@ -1304,7 +1304,7 @@ static int ehea_reg_interrupts(struct net_device *dev)
+ 			 "%s-queue%d", dev->name, i);
+ 		ret = ibmebus_request_irq(pr->eq->attr.ist1,
+ 					  ehea_recv_irq_handler,
+-					  IRQF_DISABLED, pr->int_send_name,
++					  IRQF_NO_THREAD, pr->int_send_name,
+ 					  pr);
+ 		if (ret) {
+ 			netdev_err(dev, "failed registering irq for ehea_queue port_res_nr:%d, ist=%X\n",
+-- 
+1.7.10
+

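The key claim in the ehea commit message, that an edge-triggered interrupt firing while masked is simply lost whereas a level-triggered line re-asserts on unmask, can be sketched as a toy state machine (purely illustrative, not the XICS code):

```c
#include <assert.h>

enum trig { EDGE, LEVEL };

struct toy_line {
    enum trig type;
    int masked;
    int asserted;  /* device is asserting the line */
    int delivered; /* interrupts actually seen by the handler */
};

/* Device raises an interrupt condition. */
static void toy_raise(struct toy_line *l)
{
    l->asserted = 1;
    if (!l->masked) {
        l->delivered++;
        if (l->type == EDGE)
            l->asserted = 0; /* the edge is consumed */
    } else if (l->type == EDGE) {
        l->asserted = 0; /* edge while masked: nothing is latched */
    }
}

/* Unmask: a still-asserted level line re-fires; a lost edge does not. */
static void toy_unmask(struct toy_line *l)
{
    l->masked = 0;
    if (l->asserted) {
        l->delivered++;
        if (l->type == EDGE)
            l->asserted = 0;
    }
}

/* Interrupts delivered after raise-while-masked followed by unmask. */
static int toy_delivered_after_masked_raise(enum trig t)
{
    struct toy_line l = { t, 1, 0, 0 };
    toy_raise(&l);
    toy_unmask(&l);
    return l.delivered;
}
```

This is why masking in the threaded-irq path can hang the interface in the commit's scenario, and why making the (lockless, napi_schedule-only) handler non-threaded is safe.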
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0044-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0044-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0044-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0044-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch)
@@ -0,0 +1,58 @@
+From acbfdf1bf249cd99effa66f77ca4640aa31b1700 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 17 Nov 2009 12:02:43 +0100
+Subject: [PATCH 044/271] drivers: net: at91_ether: Make mdio protection -rt
+ safe
+
+Neither the phy interrupt nor the timer callback which updates the
+link status in absence of a phy interrupt is taking lp->lock which
+serializes the MDIO access. This works on mainline as at91 is a UP
+machine. On preempt-rt the timer callback can run even in the
+spin_lock_irq(&lp->lock) protected code paths because spin_lock_irq
+is neither disabling interrupts nor disabling preemption.
+
+Fix this by adding proper locking to at91ether_phy_interrupt() and
+at91_check_ether() which serializes the access on -rt.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/net/ethernet/cadence/at91_ether.c |    5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/drivers/net/ethernet/cadence/at91_ether.c b/drivers/net/ethernet/cadence/at91_ether.c
+index 56624d3..ad4dbea 100644
+--- a/drivers/net/ethernet/cadence/at91_ether.c
++++ b/drivers/net/ethernet/cadence/at91_ether.c
+@@ -200,7 +200,9 @@ static irqreturn_t at91ether_phy_interrupt(int irq, void *dev_id)
+ 	struct net_device *dev = (struct net_device *) dev_id;
+ 	struct at91_private *lp = netdev_priv(dev);
+ 	unsigned int phy;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&lp->lock, flags);
+ 	/*
+ 	 * This hander is triggered on both edges, but the PHY chips expect
+ 	 * level-triggering.  We therefore have to check if the PHY actually has
+@@ -242,6 +244,7 @@ static irqreturn_t at91ether_phy_interrupt(int irq, void *dev_id)
+ 
+ done:
+ 	disable_mdi();
++	spin_unlock_irqrestore(&lp->lock, flags);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -398,9 +401,11 @@ static void at91ether_check_link(unsigned long dev_id)
+ 	struct net_device *dev = (struct net_device *) dev_id;
+ 	struct at91_private *lp = netdev_priv(dev);
+ 
++	spin_lock_irq(&lp->lock);
+ 	enable_mdi();
+ 	update_linkspeed(dev, 1);
+ 	disable_mdi();
++	spin_unlock_irq(&lp->lock);
+ 
+ 	mod_timer(&lp->check_timer, jiffies + LINK_POLL_INTERVAL);
+ }
+-- 
+1.7.10
+

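The at91_ether fix serializes MDIO access by making every path that touches it (PHY interrupt and polling timer alike) take lp->lock. The underlying pattern, all contenders taking one lock around a non-atomic read-modify-write, can be illustrated in userspace with POSIX threads (a generic sketch, not the driver code):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t mdio_lock = PTHREAD_MUTEX_INITIALIZER;
static long mdio_ops; /* stand-in for shared MDIO controller state */

/* Both the "interrupt" path and the "timer" path take the same lock. */
static void *mdio_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&mdio_lock);
        mdio_ops++; /* non-atomic read-modify-write, now serialized */
        pthread_mutex_unlock(&mdio_lock);
    }
    return NULL;
}

/* Run two concurrent contenders; with the lock no update is lost. */
static long run_two_workers(void)
{
    pthread_t a, b;
    mdio_ops = 0;
    pthread_create(&a, NULL, mdio_worker, NULL);
    pthread_create(&b, NULL, mdio_worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return mdio_ops;
}
```

Without the lock, lost updates would make the final count nondeterministic; that is the -rt race the patch closes, since spin_lock_irq() there no longer implies "nothing else can run".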
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0045-preempt-mark-legitimated-no-resched-sites.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0045-preempt-mark-legitimated-no-resched-sites.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0045-preempt-mark-legitimated-no-resched-sites.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0045-preempt-mark-legitimated-no-resched-sites.patch.patch)
@@ -0,0 +1,124 @@
+From 699b55f8b10c2c1ecf374558e8e6f8092580a972 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 21 Mar 2011 13:32:17 +0100
+Subject: [PATCH 045/271] preempt-mark-legitimated-no-resched-sites.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/powerpc/kernel/idle.c     |    2 +-
+ arch/sparc/kernel/process_64.c |    2 +-
+ include/linux/preempt.h        |    5 ++++-
+ kernel/sched.c                 |    6 +++---
+ kernel/softirq.c               |    4 ++--
+ 5 files changed, 11 insertions(+), 8 deletions(-)
+
+diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
+index f46dae5..5d70d10 100644
+--- a/arch/powerpc/kernel/idle.c
++++ b/arch/powerpc/kernel/idle.c
+@@ -95,7 +95,7 @@ void cpu_idle(void)
+ 		ppc64_runlatch_on();
+ 		tick_nohz_restart_sched_tick();
+ 		if (cpu_should_die()) {
+-			preempt_enable_no_resched();
++			__preempt_enable_no_resched();
+ 			cpu_die();
+ 		}
+ 		schedule_preempt_disabled();
+diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
+index 8ba0dbe..86fe09a 100644
+--- a/arch/sparc/kernel/process_64.c
++++ b/arch/sparc/kernel/process_64.c
+@@ -104,7 +104,7 @@ void cpu_idle(void)
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+ 		if (cpu_is_offline(cpu)) {
+-			preempt_enable_no_resched();
++			__preempt_enable_no_resched();
+ 			cpu_play_dead();
+ 		}
+ #endif
+diff --git a/include/linux/preempt.h b/include/linux/preempt.h
+index 58969b2..227b0f5 100644
+--- a/include/linux/preempt.h
++++ b/include/linux/preempt.h
+@@ -48,12 +48,14 @@ do { \
+ 	barrier(); \
+ } while (0)
+ 
+-#define preempt_enable_no_resched() \
++#define __preempt_enable_no_resched() \
+ do { \
+ 	barrier(); \
+ 	dec_preempt_count(); \
+ } while (0)
+ 
++#define preempt_enable_no_resched()	__preempt_enable_no_resched()
++
+ #define preempt_enable() \
+ do { \
+ 	preempt_enable_no_resched(); \
+@@ -92,6 +94,7 @@ do { \
+ #else /* !CONFIG_PREEMPT_COUNT */
+ 
+ #define preempt_disable()		do { } while (0)
++#define __preempt_enable_no_resched()	do { } while (0)
+ #define preempt_enable_no_resched()	do { } while (0)
+ #define preempt_enable()		do { } while (0)
+ 
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 342114c..b432fe0 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4414,7 +4414,7 @@ need_resched:
+ 
+ 	post_schedule(rq);
+ 
+-	preempt_enable_no_resched();
++	__preempt_enable_no_resched();
+ 	if (need_resched())
+ 		goto need_resched;
+ }
+@@ -4462,7 +4462,7 @@ EXPORT_SYMBOL(schedule);
+  */
+ void __sched schedule_preempt_disabled(void)
+ {
+-	preempt_enable_no_resched();
++	__preempt_enable_no_resched();
+ 	schedule();
+ 	preempt_disable();
+ }
+@@ -5704,7 +5704,7 @@ SYSCALL_DEFINE0(sched_yield)
+ 	__release(rq->lock);
+ 	spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
+ 	do_raw_spin_unlock(&rq->lock);
+-	preempt_enable_no_resched();
++	__preempt_enable_no_resched();
+ 
+ 	schedule();
+ 
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index c9da2c8..a8becbf 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -353,7 +353,7 @@ void irq_exit(void)
+ 	if (idle_cpu(smp_processor_id()) && !in_interrupt() && !need_resched())
+ 		tick_nohz_stop_sched_tick(0);
+ #endif
+-	preempt_enable_no_resched();
++	__preempt_enable_no_resched();
+ }
+ 
+ /*
+@@ -759,7 +759,7 @@ static int run_ksoftirqd(void * __bind_cpu)
+ 			if (local_softirq_pending())
+ 				__do_softirq();
+ 			local_irq_enable();
+-			preempt_enable_no_resched();
++			__preempt_enable_no_resched();
+ 			cond_resched();
+ 			preempt_disable();
+ 			rcu_note_context_switch((long)__bind_cpu);
+-- 
+1.7.10
+

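The patch above splits out a raw __preempt_enable_no_resched() for the legitimate "drop the count but do not reschedule here" sites, so -rt can later redefine the public preempt_enable_no_resched(). The bookkeeping behind those macros is just a nesting counter plus an optional resched check, sketched here as a plain-C model (names are illustrative, not the kernel macros):

```c
#include <assert.h>

static int preempt_count_model; /* model of the per-CPU preempt count */
static int need_resched_flag;   /* a reschedule has been requested */
static int resched_calls;       /* how many times we rescheduled */

static void model_preempt_disable(void)   { preempt_count_model++; }

/* Raw enable: drop the count, never check for a pending resched. */
static void model_enable_no_resched(void) { preempt_count_model--; }

/* Full enable: drop the count and honor a pending resched request. */
static void model_preempt_enable(void)
{
    model_enable_no_resched();
    if (preempt_count_model == 0 && need_resched_flag) {
        resched_calls++;
        need_resched_flag = 0;
    }
}

/* A legitimate no-resched site (e.g. right before schedule itself). */
static int scenario_no_resched(void)
{
    preempt_count_model = 0; need_resched_flag = 1; resched_calls = 0;
    model_preempt_disable();
    model_enable_no_resched(); /* pending request deliberately ignored */
    return resched_calls;
}

/* The ordinary path: the pending request is acted on. */
static int scenario_full_enable(void)
{
    preempt_count_model = 0; need_resched_flag = 1; resched_calls = 0;
    model_preempt_disable();
    model_preempt_enable();
    return resched_calls;
}
```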
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0046-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0046-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0046-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0046-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch)
@@ -0,0 +1,130 @@
+From 58b1e10ff1e4fdcfe8e39e580f9fd994409bbc68 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:37 -0500
+Subject: [PATCH 046/271] mm: Prepare decoupling the page fault disabling
+ logic
+
+Add a pagefault_disabled variable to task_struct to allow decoupling
+the pagefault-disabled logic from the preempt count.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sched.h   |    1 +
+ include/linux/uaccess.h |   33 +++------------------------------
+ kernel/fork.c           |    1 +
+ mm/memory.c             |   29 +++++++++++++++++++++++++++++
+ 4 files changed, 34 insertions(+), 30 deletions(-)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index ed2b9f9..bce86f9 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1435,6 +1435,7 @@ struct task_struct {
+ 	/* mutex deadlock detection */
+ 	struct mutex_waiter *blocked_on;
+ #endif
++	int pagefault_disabled;
+ #ifdef CONFIG_TRACE_IRQFLAGS
+ 	unsigned int irq_events;
+ 	unsigned long hardirq_enable_ip;
+diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
+index 5ca0951..9414a1b 100644
+--- a/include/linux/uaccess.h
++++ b/include/linux/uaccess.h
+@@ -6,37 +6,10 @@
+ 
+ /*
+  * These routines enable/disable the pagefault handler in that
+- * it will not take any locks and go straight to the fixup table.
+- *
+- * They have great resemblance to the preempt_disable/enable calls
+- * and in fact they are identical; this is because currently there is
+- * no other way to make the pagefault handlers do this. So we do
+- * disable preemption but we don't necessarily care about that.
++ * it will not take any MM locks and go straight to the fixup table.
+  */
+-static inline void pagefault_disable(void)
+-{
+-	inc_preempt_count();
+-	/*
+-	 * make sure to have issued the store before a pagefault
+-	 * can hit.
+-	 */
+-	barrier();
+-}
+-
+-static inline void pagefault_enable(void)
+-{
+-	/*
+-	 * make sure to issue those last loads/stores before enabling
+-	 * the pagefault handler again.
+-	 */
+-	barrier();
+-	dec_preempt_count();
+-	/*
+-	 * make sure we do..
+-	 */
+-	barrier();
+-	preempt_check_resched();
+-}
++extern void pagefault_disable(void);
++extern void pagefault_enable(void);
+ 
+ #ifndef ARCH_HAS_NOCACHE_UACCESS
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 7335449..e2b9104 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1198,6 +1198,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
+ 	p->hardirq_context = 0;
+ 	p->softirq_context = 0;
+ #endif
++	p->pagefault_disabled = 0;
+ #ifdef CONFIG_LOCKDEP
+ 	p->lockdep_depth = 0; /* no locks held yet */
+ 	p->curr_chain_key = 0;
+diff --git a/mm/memory.c b/mm/memory.c
+index 1b1ca17..454ad3f 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3444,6 +3444,35 @@ unlock:
+ 	return 0;
+ }
+ 
++void pagefault_disable(void)
++{
++	inc_preempt_count();
++	current->pagefault_disabled++;
++	/*
++	 * make sure to have issued the store before a pagefault
++	 * can hit.
++	 */
++	barrier();
++}
++EXPORT_SYMBOL_GPL(pagefault_disable);
++
++void pagefault_enable(void)
++{
++	/*
++	 * make sure to issue those last loads/stores before enabling
++	 * the pagefault handler again.
++	 */
++	barrier();
++	current->pagefault_disabled--;
++	dec_preempt_count();
++	/*
++	 * make sure we do..
++	 */
++	barrier();
++	preempt_check_resched();
++}
++EXPORT_SYMBOL_GPL(pagefault_enable);
++
+ /*
+  * By the time we get here, we already hold the mm semaphore
+  */
+-- 
+1.7.10
+

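The patch adds a per-task pagefault_disabled counter alongside the existing preempt-count bump; the point of the preparation is that the two counts can later be decoupled while nesting still works. A minimal userspace model of the counting (illustrative only, not the kernel implementation):

```c
#include <assert.h>

static int preempt_cnt;        /* model of the preempt counter */
static int pagefault_disabled; /* model of task->pagefault_disabled */

static void model_pagefault_disable(void)
{
    preempt_cnt++;        /* still bumps the preempt count here ... */
    pagefault_disabled++; /* ... but now tracks its own nesting too */
}

static void model_pagefault_enable(void)
{
    pagefault_disabled--;
    preempt_cnt--;
}

/* The fault handler's new question: are pagefaults disabled? */
static int model_faults_disabled(void) { return pagefault_disabled > 0; }

/* Nested disable/enable: faults stay disabled until the outer enable. */
static int scenario_nested(void)
{
    preempt_cnt = pagefault_disabled = 0;
    model_pagefault_disable();
    model_pagefault_disable();
    model_pagefault_enable();
    int still = model_faults_disabled(); /* one level remains: 1 */
    model_pagefault_enable();
    return still * 10 + model_faults_disabled();
}
```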
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0047-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0047-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0047-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0047-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch)
@@ -0,0 +1,337 @@
+From 3395c3e882ade6026f2cc551f218cc5ce652c1aa Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 17 Mar 2011 11:32:28 +0100
+Subject: [PATCH 047/271] mm: Fixup all fault handlers to check
+ current->pagefault_disable
+
+Necessary for decoupling pagefault disable from preempt count.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/alpha/mm/fault.c      |    2 +-
+ arch/arm/mm/fault.c        |    2 +-
+ arch/avr32/mm/fault.c      |    3 ++-
+ arch/cris/mm/fault.c       |    2 +-
+ arch/frv/mm/fault.c        |    2 +-
+ arch/ia64/mm/fault.c       |    2 +-
+ arch/m32r/mm/fault.c       |    2 +-
+ arch/m68k/mm/fault.c       |    2 +-
+ arch/microblaze/mm/fault.c |    2 +-
+ arch/mips/mm/fault.c       |    2 +-
+ arch/mn10300/mm/fault.c    |    2 +-
+ arch/parisc/mm/fault.c     |    2 +-
+ arch/powerpc/mm/fault.c    |    2 +-
+ arch/s390/mm/fault.c       |    6 ++++--
+ arch/score/mm/fault.c      |    2 +-
+ arch/sh/mm/fault_32.c      |    2 +-
+ arch/sparc/mm/fault_32.c   |    4 ++--
+ arch/sparc/mm/fault_64.c   |    2 +-
+ arch/tile/mm/fault.c       |    2 +-
+ arch/um/kernel/trap.c      |    2 +-
+ arch/x86/mm/fault.c        |    2 +-
+ arch/xtensa/mm/fault.c     |    2 +-
+ 22 files changed, 27 insertions(+), 24 deletions(-)
+
+diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
+index fadd5f8..6d73e1b 100644
+--- a/arch/alpha/mm/fault.c
++++ b/arch/alpha/mm/fault.c
+@@ -107,7 +107,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
+ 
+ 	/* If we're in an interrupt context, or have no user context,
+ 	   we must not take the fault.  */
+-	if (!mm || in_atomic())
++	if (!mm || in_atomic() || current->pagefault_disabled)
+ 		goto no_context;
+ 
+ #ifdef CONFIG_ALPHA_LARGE_VMALLOC
+diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
+index 4b0bc37..0fe9b9b 100644
+--- a/arch/arm/mm/fault.c
++++ b/arch/arm/mm/fault.c
+@@ -296,7 +296,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto no_context;
+ 
+ 	/*
+diff --git a/arch/avr32/mm/fault.c b/arch/avr32/mm/fault.c
+index f7040a1..623a027 100644
+--- a/arch/avr32/mm/fault.c
++++ b/arch/avr32/mm/fault.c
+@@ -81,7 +81,8 @@ asmlinkage void do_page_fault(unsigned long ecr, struct pt_regs *regs)
+ 	 * If we're in an interrupt or have no user context, we must
+ 	 * not take the fault...
+ 	 */
+-	if (in_atomic() || !mm || regs->sr & SYSREG_BIT(GM))
++	if (in_atomic() || !mm || regs->sr & SYSREG_BIT(GM) ||
++	    current->pagefault_disabled)
+ 		goto no_context;
+ 
+ 	local_irq_enable();
+diff --git a/arch/cris/mm/fault.c b/arch/cris/mm/fault.c
+index 9dcac8e..2b2c292 100644
+--- a/arch/cris/mm/fault.c
++++ b/arch/cris/mm/fault.c
+@@ -111,7 +111,7 @@ do_page_fault(unsigned long address, struct pt_regs *regs,
+ 	 * user context, we must not take the fault.
+ 	 */
+ 
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/frv/mm/fault.c b/arch/frv/mm/fault.c
+index a325d57..3da8ec7 100644
+--- a/arch/frv/mm/fault.c
++++ b/arch/frv/mm/fault.c
+@@ -79,7 +79,7 @@ asmlinkage void do_page_fault(int datammu, unsigned long esr0, unsigned long ear
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
+index 20b3593..2a4e44f 100644
+--- a/arch/ia64/mm/fault.c
++++ b/arch/ia64/mm/fault.c
+@@ -89,7 +89,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
+ 	/*
+ 	 * If we're in an interrupt or have no user context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto no_context;
+ 
+ #ifdef CONFIG_VIRTUAL_MEM_MAP
+diff --git a/arch/m32r/mm/fault.c b/arch/m32r/mm/fault.c
+index 2c9aeb4..16fa2c7 100644
+--- a/arch/m32r/mm/fault.c
++++ b/arch/m32r/mm/fault.c
+@@ -115,7 +115,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code,
+ 	 * If we're in an interrupt or have no user context or are running in an
+ 	 * atomic region then we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto bad_area_nosemaphore;
+ 
+ 	/* When running in the kernel we expect faults to occur only to
+diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
+index 2db6099..238ffc0 100644
+--- a/arch/m68k/mm/fault.c
++++ b/arch/m68k/mm/fault.c
+@@ -85,7 +85,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
+index ae97d2c..c3f219c 100644
+--- a/arch/microblaze/mm/fault.c
++++ b/arch/microblaze/mm/fault.c
+@@ -107,7 +107,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
+ 	if ((error_code & 0x13) == 0x13 || (error_code & 0x11) == 0x11)
+ 		is_write = 0;
+ 
+-	if (unlikely(in_atomic() || !mm)) {
++	if (unlikely(in_atomic() || !mm || current->pagefault_disabled)) {
+ 		if (kernel_mode(regs))
+ 			goto bad_area_nosemaphore;
+ 
+diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
+index 937cf33..ce7e75e 100644
+--- a/arch/mips/mm/fault.c
++++ b/arch/mips/mm/fault.c
+@@ -88,7 +88,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs, unsigned long writ
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto bad_area_nosemaphore;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/mn10300/mm/fault.c b/arch/mn10300/mm/fault.c
+index 0945409..53c8d16 100644
+--- a/arch/mn10300/mm/fault.c
++++ b/arch/mn10300/mm/fault.c
+@@ -168,7 +168,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long fault_code,
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
+index 18162ce..09ecc8a 100644
+--- a/arch/parisc/mm/fault.c
++++ b/arch/parisc/mm/fault.c
+@@ -176,7 +176,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
+ 	unsigned long acc_type;
+ 	int fault;
+ 
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 5efe8c9..17f8bbe 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -162,7 +162,7 @@ int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
+ 	}
+ #endif
+ 
+-	if (in_atomic() || mm == NULL) {
++	if (in_atomic() || mm == NULL || current->pagefault_disabled) {
+ 		if (!user_mode(regs))
+ 			return SIGSEGV;
+ 		/* in_atomic() in user mode is really bad,
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index b28aaa4..4aaffe7 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -294,7 +294,8 @@ static inline int do_exception(struct pt_regs *regs, int access,
+ 	 * user context.
+ 	 */
+ 	fault = VM_FAULT_BADCONTEXT;
+-	if (unlikely(!user_space_fault(trans_exc_code) || in_atomic() || !mm))
++	if (unlikely(!user_space_fault(trans_exc_code) || in_atomic() || !mm ||
++		    tsk->pagefault_disabled))
+ 		goto out;
+ 
+ 	address = trans_exc_code & __FAIL_ADDR_MASK;
+@@ -425,7 +426,8 @@ void __kprobes do_asce_exception(struct pt_regs *regs, long pgm_int_code,
+ 	struct mm_struct *mm = current->mm;
+ 	struct vm_area_struct *vma;
+ 
+-	if (unlikely(!user_space_fault(trans_exc_code) || in_atomic() || !mm))
++	if (unlikely(!user_space_fault(trans_exc_code) || in_atomic() || !mm ||
++		     current->pagefault_disabled))
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/score/mm/fault.c b/arch/score/mm/fault.c
+index 47b600e..4c12824 100644
+--- a/arch/score/mm/fault.c
++++ b/arch/score/mm/fault.c
+@@ -72,7 +72,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
+ 	* If we're in an interrupt or have no user
+ 	* context, we must not take the fault..
+ 	*/
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto bad_area_nosemaphore;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/sh/mm/fault_32.c b/arch/sh/mm/fault_32.c
+index 7bebd04..a67ac56 100644
+--- a/arch/sh/mm/fault_32.c
++++ b/arch/sh/mm/fault_32.c
+@@ -166,7 +166,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
+ 	 * If we're in an interrupt, have no user context or are running
+ 	 * in an atomic region then we must not take the fault:
+ 	 */
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
+index 8023fd7..e0742c1 100644
+--- a/arch/sparc/mm/fault_32.c
++++ b/arch/sparc/mm/fault_32.c
+@@ -247,8 +247,8 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-        if (in_atomic() || !mm)
+-                goto no_context;
++	if (in_atomic() || !mm || current->pagefault_disabled)
++		goto no_context;
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+ 
+diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
+index 504c062..9d691a5 100644
+--- a/arch/sparc/mm/fault_64.c
++++ b/arch/sparc/mm/fault_64.c
+@@ -322,7 +322,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm)
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto intr_or_no_mm;
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+diff --git a/arch/tile/mm/fault.c b/arch/tile/mm/fault.c
+index 25b7b90..ecdb016 100644
+--- a/arch/tile/mm/fault.c
++++ b/arch/tile/mm/fault.c
+@@ -346,7 +346,7 @@ static int handle_page_fault(struct pt_regs *regs,
+ 	 * If we're in an interrupt, have no user context or are running in an
+ 	 * atomic region then we must not take the fault.
+ 	 */
+-	if (in_atomic() || !mm) {
++	if (in_atomic() || !mm || current->pagefault_disabled) {
+ 		vma = NULL;  /* happy compiler */
+ 		goto bad_area_nosemaphore;
+ 	}
+diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
+index dafc947..a283400 100644
+--- a/arch/um/kernel/trap.c
++++ b/arch/um/kernel/trap.c
+@@ -37,7 +37,7 @@ int handle_page_fault(unsigned long address, unsigned long ip,
+ 	 * If the fault was during atomic operation, don't take the fault, just
+ 	 * fail.
+ 	 */
+-	if (in_atomic())
++	if (in_atomic() || !mm || current->pagefault_disabled)
+ 		goto out_nosemaphore;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 5db0490..191015f 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -1084,7 +1084,7 @@ do_page_fault(struct pt_regs *regs, unsigned long error_code)
+ 	 * If we're in an interrupt, have no user context or are running
+ 	 * in an atomic region then we must not take the fault:
+ 	 */
+-	if (unlikely(in_atomic() || !mm)) {
++	if (unlikely(in_atomic() || !mm || current->pagefault_disabled)) {
+ 		bad_area_nosemaphore(regs, error_code, address);
+ 		return;
+ 	}
+diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
+index e367e30..705f483 100644
+--- a/arch/xtensa/mm/fault.c
++++ b/arch/xtensa/mm/fault.c
+@@ -57,7 +57,7 @@ void do_page_fault(struct pt_regs *regs)
+ 	/* If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm) {
++	if (in_atomic() || !mm || current->pagefault_disabled) {
+ 		bad_page_fault(regs, address, SIGSEGV);
+ 		return;
+ 	}
+-- 
+1.7.10
+

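[Editorial note for readers following the series: the hunks above (sparc64, tile, um, x86, xtensa) all add the same guard to the architecture fault handlers; the `pagefault_enabled` spelling in the sparc64 hunk is a typo in the upstream series, and patch 0048 below replaces that line anyway. A minimal user-space sketch of the guard, where `preempt_count_sim`, `in_atomic_sim()` and the flag are hypothetical stand-ins for the kernel's `in_atomic()` and `current->pagefault_disabled`:]

```c
#include <stdbool.h>

/* Hypothetical stand-ins for kernel state: the preempt count behind
 * in_atomic() and the per-task pagefault_disabled flag this series adds. */
static int preempt_count_sim;
static int pagefault_disabled_sim;

static bool in_atomic_sim(void)
{
	return preempt_count_sim != 0;
}

/* The guard each architecture's fault handler gains: refuse to handle
 * the fault when atomic, when there is no mm, or when pagefaults are
 * explicitly disabled for the current task. */
static bool must_refuse_fault(bool have_mm)
{
	return in_atomic_sim() || !have_mm || pagefault_disabled_sim != 0;
}
```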
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0048-mm-pagefault_disabled.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0048-mm-pagefault_disabled.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0048-mm-pagefault_disabled.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0048-mm-pagefault_disabled.patch)
@@ -0,0 +1,394 @@
+From 6c7041e9178d633a408a67c36ec164ed4fc07345 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Thu, 11 Aug 2011 15:31:31 +0200
+Subject: [PATCH 048/271] mm: pagefault_disabled()
+
+Wrap the test for pagefault_disabled() into a helper, this allows us
+to remove the need for current->pagefault_disabled on !-rt kernels.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/n/tip-3yy517m8zsi9fpsf14xfaqkw@git.kernel.org
+---
+ arch/alpha/mm/fault.c      |    2 +-
+ arch/arm/mm/fault.c        |    2 +-
+ arch/avr32/mm/fault.c      |    3 +--
+ arch/cris/mm/fault.c       |    2 +-
+ arch/frv/mm/fault.c        |    2 +-
+ arch/ia64/mm/fault.c       |    2 +-
+ arch/m32r/mm/fault.c       |    2 +-
+ arch/m68k/mm/fault.c       |    2 +-
+ arch/microblaze/mm/fault.c |    2 +-
+ arch/mips/mm/fault.c       |    2 +-
+ arch/mn10300/mm/fault.c    |    2 +-
+ arch/parisc/mm/fault.c     |    2 +-
+ arch/powerpc/mm/fault.c    |    2 +-
+ arch/s390/mm/fault.c       |    8 ++++----
+ arch/score/mm/fault.c      |    2 +-
+ arch/sh/mm/fault_32.c      |    2 +-
+ arch/sparc/mm/fault_32.c   |    2 +-
+ arch/sparc/mm/fault_64.c   |    2 +-
+ arch/tile/mm/fault.c       |    2 +-
+ arch/um/kernel/trap.c      |    2 +-
+ arch/x86/mm/fault.c        |    2 +-
+ arch/xtensa/mm/fault.c     |    2 +-
+ include/linux/sched.h      |   14 ++++++++++++++
+ kernel/fork.c              |    2 ++
+ 24 files changed, 41 insertions(+), 26 deletions(-)
+
+diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
+index 6d73e1b..4a0a0af 100644
+--- a/arch/alpha/mm/fault.c
++++ b/arch/alpha/mm/fault.c
+@@ -107,7 +107,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
+ 
+ 	/* If we're in an interrupt context, or have no user context,
+ 	   we must not take the fault.  */
+-	if (!mm || in_atomic() || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ #ifdef CONFIG_ALPHA_LARGE_VMALLOC
+diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
+index 0fe9b9b..4c306f2 100644
+--- a/arch/arm/mm/fault.c
++++ b/arch/arm/mm/fault.c
+@@ -296,7 +296,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ 	/*
+diff --git a/arch/avr32/mm/fault.c b/arch/avr32/mm/fault.c
+index 623a027..155ad8d 100644
+--- a/arch/avr32/mm/fault.c
++++ b/arch/avr32/mm/fault.c
+@@ -81,8 +81,7 @@ asmlinkage void do_page_fault(unsigned long ecr, struct pt_regs *regs)
+ 	 * If we're in an interrupt or have no user context, we must
+ 	 * not take the fault...
+ 	 */
+-	if (in_atomic() || !mm || regs->sr & SYSREG_BIT(GM) ||
+-	    current->pagefault_disabled)
++	if (!mm || regs->sr & SYSREG_BIT(GM) || pagefault_disabled())
+ 		goto no_context;
+ 
+ 	local_irq_enable();
+diff --git a/arch/cris/mm/fault.c b/arch/cris/mm/fault.c
+index 2b2c292..ba9cfbe 100644
+--- a/arch/cris/mm/fault.c
++++ b/arch/cris/mm/fault.c
+@@ -111,7 +111,7 @@ do_page_fault(unsigned long address, struct pt_regs *regs,
+ 	 * user context, we must not take the fault.
+ 	 */
+ 
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/frv/mm/fault.c b/arch/frv/mm/fault.c
+index 3da8ec7..a9ce0f0 100644
+--- a/arch/frv/mm/fault.c
++++ b/arch/frv/mm/fault.c
+@@ -79,7 +79,7 @@ asmlinkage void do_page_fault(int datammu, unsigned long esr0, unsigned long ear
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
+index 2a4e44f..05946c2 100644
+--- a/arch/ia64/mm/fault.c
++++ b/arch/ia64/mm/fault.c
+@@ -89,7 +89,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
+ 	/*
+ 	 * If we're in an interrupt or have no user context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ #ifdef CONFIG_VIRTUAL_MEM_MAP
+diff --git a/arch/m32r/mm/fault.c b/arch/m32r/mm/fault.c
+index 16fa2c7..6d763f6 100644
+--- a/arch/m32r/mm/fault.c
++++ b/arch/m32r/mm/fault.c
+@@ -115,7 +115,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code,
+ 	 * If we're in an interrupt or have no user context or are running in an
+ 	 * atomic region then we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto bad_area_nosemaphore;
+ 
+ 	/* When running in the kernel we expect faults to occur only to
+diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
+index 238ffc0..74fe559 100644
+--- a/arch/m68k/mm/fault.c
++++ b/arch/m68k/mm/fault.c
+@@ -85,7 +85,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
+index c3f219c..4cdd84d 100644
+--- a/arch/microblaze/mm/fault.c
++++ b/arch/microblaze/mm/fault.c
+@@ -107,7 +107,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
+ 	if ((error_code & 0x13) == 0x13 || (error_code & 0x11) == 0x11)
+ 		is_write = 0;
+ 
+-	if (unlikely(in_atomic() || !mm || current->pagefault_disabled)) {
++	if (unlikely(!mm || pagefault_disabled())) {
+ 		if (kernel_mode(regs))
+ 			goto bad_area_nosemaphore;
+ 
+diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
+index ce7e75e..7ade72b 100644
+--- a/arch/mips/mm/fault.c
++++ b/arch/mips/mm/fault.c
+@@ -88,7 +88,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs, unsigned long writ
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto bad_area_nosemaphore;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/mn10300/mm/fault.c b/arch/mn10300/mm/fault.c
+index 53c8d16..2fea01c 100644
+--- a/arch/mn10300/mm/fault.c
++++ b/arch/mn10300/mm/fault.c
+@@ -168,7 +168,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long fault_code,
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
+index 09ecc8a..df22f39 100644
+--- a/arch/parisc/mm/fault.c
++++ b/arch/parisc/mm/fault.c
+@@ -176,7 +176,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
+ 	unsigned long acc_type;
+ 	int fault;
+ 
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 17f8bbe..94bedd4 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -162,7 +162,7 @@ int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
+ 	}
+ #endif
+ 
+-	if (in_atomic() || mm == NULL || current->pagefault_disabled) {
++	if (!mm || pagefault_disabled()) {
+ 		if (!user_mode(regs))
+ 			return SIGSEGV;
+ 		/* in_atomic() in user mode is really bad,
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index 4aaffe7..78339f0 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -294,8 +294,8 @@ static inline int do_exception(struct pt_regs *regs, int access,
+ 	 * user context.
+ 	 */
+ 	fault = VM_FAULT_BADCONTEXT;
+-	if (unlikely(!user_space_fault(trans_exc_code) || in_atomic() || !mm ||
+-		    tsk->pagefault_disabled))
++	if (unlikely(!user_space_fault(trans_exc_code) ||
++		     !mm || pagefault_disabled()))
+ 		goto out;
+ 
+ 	address = trans_exc_code & __FAIL_ADDR_MASK;
+@@ -426,8 +426,8 @@ void __kprobes do_asce_exception(struct pt_regs *regs, long pgm_int_code,
+ 	struct mm_struct *mm = current->mm;
+ 	struct vm_area_struct *vma;
+ 
+-	if (unlikely(!user_space_fault(trans_exc_code) || in_atomic() || !mm ||
+-		     current->pagefault_disabled))
++	if (unlikely(!user_space_fault(trans_exc_code) ||
++		     !mm || pagefault_disabled()))
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/score/mm/fault.c b/arch/score/mm/fault.c
+index 4c12824..59fccbe 100644
+--- a/arch/score/mm/fault.c
++++ b/arch/score/mm/fault.c
+@@ -72,7 +72,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
+ 	* If we're in an interrupt or have no user
+ 	* context, we must not take the fault..
+ 	*/
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto bad_area_nosemaphore;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/sh/mm/fault_32.c b/arch/sh/mm/fault_32.c
+index a67ac56..643670d 100644
+--- a/arch/sh/mm/fault_32.c
++++ b/arch/sh/mm/fault_32.c
+@@ -166,7 +166,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
+ 	 * If we're in an interrupt, have no user context or are running
+ 	 * in an atomic region then we must not take the fault:
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
+index e0742c1..054cf56 100644
+--- a/arch/sparc/mm/fault_32.c
++++ b/arch/sparc/mm/fault_32.c
+@@ -247,7 +247,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto no_context;
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
+index 9d691a5..f6572f8 100644
+--- a/arch/sparc/mm/fault_64.c
++++ b/arch/sparc/mm/fault_64.c
+@@ -322,7 +322,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
+ 	 * If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_enabled)
++	if (!mm || pagefault_disabled())
+ 		goto intr_or_no_mm;
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+diff --git a/arch/tile/mm/fault.c b/arch/tile/mm/fault.c
+index ecdb016..1b6fa51 100644
+--- a/arch/tile/mm/fault.c
++++ b/arch/tile/mm/fault.c
+@@ -346,7 +346,7 @@ static int handle_page_fault(struct pt_regs *regs,
+ 	 * If we're in an interrupt, have no user context or are running in an
+ 	 * atomic region then we must not take the fault.
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled) {
++	if (!mm || pagefault_disabled()) {
+ 		vma = NULL;  /* happy compiler */
+ 		goto bad_area_nosemaphore;
+ 	}
+diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
+index a283400..7878069 100644
+--- a/arch/um/kernel/trap.c
++++ b/arch/um/kernel/trap.c
+@@ -37,7 +37,7 @@ int handle_page_fault(unsigned long address, unsigned long ip,
+ 	 * If the fault was during atomic operation, don't take the fault, just
+ 	 * fail.
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled)
++	if (!mm || pagefault_disabled())
+ 		goto out_nosemaphore;
+ 
+ 	down_read(&mm->mmap_sem);
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 191015f..b567837 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -1084,7 +1084,7 @@ do_page_fault(struct pt_regs *regs, unsigned long error_code)
+ 	 * If we're in an interrupt, have no user context or are running
+ 	 * in an atomic region then we must not take the fault:
+ 	 */
+-	if (unlikely(in_atomic() || !mm || current->pagefault_disabled)) {
++	if (unlikely(!mm || pagefault_disabled())) {
+ 		bad_area_nosemaphore(regs, error_code, address);
+ 		return;
+ 	}
+diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
+index 705f483..8f3f52a 100644
+--- a/arch/xtensa/mm/fault.c
++++ b/arch/xtensa/mm/fault.c
+@@ -57,7 +57,7 @@ void do_page_fault(struct pt_regs *regs)
+ 	/* If we're in an interrupt or have no user
+ 	 * context, we must not take the fault..
+ 	 */
+-	if (in_atomic() || !mm || current->pagefault_disabled) {
++	if (!mm || pagefault_disabled()) {
+ 		bad_page_fault(regs, address, SIGSEGV);
+ 		return;
+ 	}
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index bce86f9..8cb4365 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -91,6 +91,7 @@ struct sched_param {
+ #include <linux/latencytop.h>
+ #include <linux/cred.h>
+ #include <linux/llist.h>
++#include <linux/hardirq.h>
+ 
+ #include <asm/processor.h>
+ 
+@@ -1435,7 +1436,9 @@ struct task_struct {
+ 	/* mutex deadlock detection */
+ 	struct mutex_waiter *blocked_on;
+ #endif
++#ifdef CONFIG_PREEMPT_RT_FULL
+ 	int pagefault_disabled;
++#endif
+ #ifdef CONFIG_TRACE_IRQFLAGS
+ 	unsigned int irq_events;
+ 	unsigned long hardirq_enable_ip;
+@@ -1584,6 +1587,17 @@ struct task_struct {
+ /* Future-safe accessor for struct task_struct's cpus_allowed. */
+ #define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed)
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++static inline bool cur_pf_disabled(void) { return current->pagefault_disabled; }
++#else
++static inline bool cur_pf_disabled(void) { return false; }
++#endif
++
++static inline bool pagefault_disabled(void)
++{
++	return in_atomic() || cur_pf_disabled();
++}
++
+ /*
+  * Priority of a process goes from 0..MAX_PRIO-1, valid RT
+  * priority is 0..MAX_RT_PRIO-1, and SCHED_NORMAL/SCHED_BATCH
+diff --git a/kernel/fork.c b/kernel/fork.c
+index e2b9104..88712a6 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1198,7 +1198,9 @@ static struct task_struct *copy_process(unsigned long clone_flags,
+ 	p->hardirq_context = 0;
+ 	p->softirq_context = 0;
+ #endif
++#ifdef CONFIG_PREEMPT_RT_FULL
+ 	p->pagefault_disabled = 0;
++#endif
+ #ifdef CONFIG_LOCKDEP
+ 	p->lockdep_depth = 0; /* no locks held yet */
+ 	p->curr_chain_key = 0;
+-- 
+1.7.10
+

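[Editorial note: patch 0048 above replaces the open-coded per-arch tests with one predicate, `pagefault_disabled() = in_atomic() || cur_pf_disabled()`, and compiles the per-task field out of !-rt builds. A hedged user-space sketch of that shape; `SIM_PREEMPT_RT_FULL` and the `*_sim` names are illustrative stand-ins for `CONFIG_PREEMPT_RT_FULL` and the kernel symbols:]

```c
#include <stdbool.h>

/* Assumption: compile-time toggle standing in for CONFIG_PREEMPT_RT_FULL. */
#define SIM_PREEMPT_RT_FULL 1

static int preempt_count_sim;

#if SIM_PREEMPT_RT_FULL
/* On RT the task carries a pagefault_disabled counter... */
static int task_pf_disabled_sim;
static bool cur_pf_disabled(void)
{
	return task_pf_disabled_sim != 0;
}
#else
/* ...on !-rt the field does not exist and the test folds to false. */
static bool cur_pf_disabled(void)
{
	return false;
}
#endif

static bool in_atomic_sim(void)
{
	return preempt_count_sim != 0;
}

/* The helper introduced by the patch: one predicate replacing every
 * open-coded "in_atomic() || current->pagefault_disabled" test. */
static bool pagefault_disabled_sim(void)
{
	return in_atomic_sim() || cur_pf_disabled();
}
```

The point of the helper is that callers no longer need to know whether the per-task counter exists on this configuration.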
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0049-mm-raw_pagefault_disable.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0049-mm-raw_pagefault_disable.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0049-mm-raw_pagefault_disable.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0049-mm-raw_pagefault_disable.patch)
@@ -0,0 +1,154 @@
+From 8621289a98ce2a995a58d82094bd45564dfd6313 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Fri, 5 Aug 2011 17:16:58 +0200
+Subject: [PATCH 049/271] mm: raw_pagefault_disable
+
+Adding migrate_disable() to pagefault_disable() to preserve the
+per-cpu thing for kmap_atomic might not have been the best of choices.
+But short of adding preempt_disable/migrate_disable foo all over the
+kmap code it still seems the best way.
+
+It does however yield the below borkage as well as wreck !-rt builds
+since !-rt does rely on pagefault_disable() not preempting. So fix all
+that up by adding raw_pagefault_disable().
+
+ <NMI>  [<ffffffff81076d5c>] warn_slowpath_common+0x85/0x9d
+ [<ffffffff81076e17>] warn_slowpath_fmt+0x46/0x48
+ [<ffffffff814f7fca>] ? _raw_spin_lock+0x6c/0x73
+ [<ffffffff810cac87>] ? watchdog_overflow_callback+0x9b/0xd0
+ [<ffffffff810caca3>] watchdog_overflow_callback+0xb7/0xd0
+ [<ffffffff810f51bb>] __perf_event_overflow+0x11c/0x1fe
+ [<ffffffff810f298f>] ? perf_event_update_userpage+0x149/0x151
+ [<ffffffff810f2846>] ? perf_event_task_disable+0x7c/0x7c
+ [<ffffffff810f5b7c>] perf_event_overflow+0x14/0x16
+ [<ffffffff81046e02>] x86_pmu_handle_irq+0xcb/0x108
+ [<ffffffff814f9a6b>] perf_event_nmi_handler+0x46/0x91
+ [<ffffffff814fb2ba>] notifier_call_chain+0x79/0xa6
+ [<ffffffff814fb34d>] __atomic_notifier_call_chain+0x66/0x98
+ [<ffffffff814fb2e7>] ? notifier_call_chain+0xa6/0xa6
+ [<ffffffff814fb393>] atomic_notifier_call_chain+0x14/0x16
+ [<ffffffff814fb3c3>] notify_die+0x2e/0x30
+ [<ffffffff814f8f75>] do_nmi+0x7e/0x22b
+ [<ffffffff814f8bca>] nmi+0x1a/0x2c
+ [<ffffffff814fb130>] ? sub_preempt_count+0x4b/0xaa
+ <<EOE>>  <IRQ>  [<ffffffff812d44cc>] delay_tsc+0xac/0xd1
+ [<ffffffff812d4399>] __delay+0xf/0x11
+ [<ffffffff812d95d9>] do_raw_spin_lock+0xd2/0x13c
+ [<ffffffff814f813e>] _raw_spin_lock_irqsave+0x6b/0x85
+ [<ffffffff8106772a>] ? task_rq_lock+0x35/0x8d
+ [<ffffffff8106772a>] task_rq_lock+0x35/0x8d
+ [<ffffffff8106fe2f>] migrate_disable+0x65/0x12c
+ [<ffffffff81114e69>] pagefault_disable+0xe/0x1f
+ [<ffffffff81039c73>] dump_trace+0x21f/0x2e2
+ [<ffffffff8103ad79>] show_trace_log_lvl+0x54/0x5d
+ [<ffffffff8103ad97>] show_trace+0x15/0x17
+ [<ffffffff814f4f5f>] dump_stack+0x77/0x80
+ [<ffffffff812d94b0>] spin_bug+0x9c/0xa3
+ [<ffffffff81067745>] ? task_rq_lock+0x50/0x8d
+ [<ffffffff812d954e>] do_raw_spin_lock+0x47/0x13c
+ [<ffffffff814f7fbe>] _raw_spin_lock+0x60/0x73
+ [<ffffffff81067745>] ? task_rq_lock+0x50/0x8d
+ [<ffffffff81067745>] task_rq_lock+0x50/0x8d
+ [<ffffffff8106fe2f>] migrate_disable+0x65/0x12c
+ [<ffffffff81114e69>] pagefault_disable+0xe/0x1f
+ [<ffffffff81039c73>] dump_trace+0x21f/0x2e2
+ [<ffffffff8104369b>] save_stack_trace+0x2f/0x4c
+ [<ffffffff810a7848>] save_trace+0x3f/0xaf
+ [<ffffffff810aa2bd>] mark_lock+0x228/0x530
+ [<ffffffff810aac27>] __lock_acquire+0x662/0x1812
+ [<ffffffff8103dad4>] ? native_sched_clock+0x37/0x6d
+ [<ffffffff810a790e>] ? trace_hardirqs_off_caller+0x1f/0x99
+ [<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
+ [<ffffffff810ac403>] lock_acquire+0x145/0x18a
+ [<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
+ [<ffffffff814f7f9e>] _raw_spin_lock+0x40/0x73
+ [<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
+ [<ffffffff810693f6>] sched_rt_period_timer+0xbd/0x218
+ [<ffffffff8109aa39>] __run_hrtimer+0x1e4/0x347
+ [<ffffffff81069339>] ? can_migrate_task.clone.82+0x14a/0x14a
+ [<ffffffff8109b97c>] hrtimer_interrupt+0xee/0x1d6
+ [<ffffffff814fb23d>] ? add_preempt_count+0xae/0xb2
+ [<ffffffff814ffb38>] smp_apic_timer_interrupt+0x85/0x98
+ [<ffffffff814fef13>] apic_timer_interrupt+0x13/0x20
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/n/tip-31keae8mkjiv8esq4rl76cib@git.kernel.org
+---
+ include/linux/uaccess.h |   30 ++++++++++++++++++++++++++++--
+ mm/memory.c             |    2 ++
+ 2 files changed, 30 insertions(+), 2 deletions(-)
+
+diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
+index 9414a1b..44b3751 100644
+--- a/include/linux/uaccess.h
++++ b/include/linux/uaccess.h
+@@ -8,8 +8,34 @@
+  * These routines enable/disable the pagefault handler in that
+  * it will not take any MM locks and go straight to the fixup table.
+  */
++static inline void raw_pagefault_disable(void)
++{
++	inc_preempt_count();
++	barrier();
++}
++
++static inline void raw_pagefault_enable(void)
++{
++	barrier();
++	dec_preempt_count();
++	barrier();
++	preempt_check_resched();
++}
++
++#ifndef CONFIG_PREEMPT_RT_FULL
++static inline void pagefault_disable(void)
++{
++	raw_pagefault_disable();
++}
++
++static inline void pagefault_enable(void)
++{
++	raw_pagefault_enable();
++}
++#else
+ extern void pagefault_disable(void);
+ extern void pagefault_enable(void);
++#endif
+ 
+ #ifndef ARCH_HAS_NOCACHE_UACCESS
+ 
+@@ -50,9 +76,9 @@ static inline unsigned long __copy_from_user_nocache(void *to,
+ 		mm_segment_t old_fs = get_fs();		\
+ 							\
+ 		set_fs(KERNEL_DS);			\
+-		pagefault_disable();			\
++		raw_pagefault_disable();		\
+ 		ret = __copy_from_user_inatomic(&(retval), (__force typeof(retval) __user *)(addr), sizeof(retval));		\
+-		pagefault_enable();			\
++		raw_pagefault_enable();			\
+ 		set_fs(old_fs);				\
+ 		ret;					\
+ 	})
+diff --git a/mm/memory.c b/mm/memory.c
+index 454ad3f..a3f7ed8 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3444,6 +3444,7 @@ unlock:
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
+ void pagefault_disable(void)
+ {
+ 	inc_preempt_count();
+@@ -3472,6 +3473,7 @@ void pagefault_enable(void)
+ 	preempt_check_resched();
+ }
+ EXPORT_SYMBOL_GPL(pagefault_enable);
++#endif
+ 
+ /*
+  * By the time we get here, we already hold the mm semaphore
+-- 
+1.7.10
+

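[Editorial note: patch 0049 above splits the interface. `raw_pagefault_disable()` always bumps the preempt count, which is what !-rt semantics require; the non-raw `pagefault_disable()` can then become a preemptible, per-task operation on RT. A sketch of that split, all names hypothetical stand-ins:]

```c
/* Assumption: compile-time toggle standing in for CONFIG_PREEMPT_RT_FULL. */
#define SIM_PREEMPT_RT_FULL 1

static int preempt_count_sim;
static int task_pf_disabled_sim;

/* raw_* variants always use the preempt count, as !-rt relies on
 * pagefault_disable() not preempting. */
static void raw_pagefault_disable_sim(void)
{
	preempt_count_sim++;
}

static void raw_pagefault_enable_sim(void)
{
	preempt_count_sim--;
}

#if SIM_PREEMPT_RT_FULL
/* On RT the non-raw variants track a per-task counter instead, so a
 * pagefault-disabled section stays preemptible. */
static void pagefault_disable_sim(void)
{
	task_pf_disabled_sim++;
}

static void pagefault_enable_sim(void)
{
	task_pf_disabled_sim--;
}
#else
static void pagefault_disable_sim(void)
{
	raw_pagefault_disable_sim();
}

static void pagefault_enable_sim(void)
{
	raw_pagefault_enable_sim();
}
#endif
```

This mirrors why `__get_user_unaligned` style helpers in the patch switch to the raw variants: they genuinely need the non-preemptible behaviour.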
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0050-filemap-fix-up.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0050-filemap-fix-up.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0050-filemap-fix-up.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0050-filemap-fix-up.patch.patch)
@@ -0,0 +1,28 @@
+From 1cf2385c24c1a810952c82774d7603beb5788c3a Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 17 Jun 2011 18:56:24 +0200
+Subject: [PATCH 050/271] filemap-fix-up.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Wrecked-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/n/tip-m6yuzd6ul717hlnl2gj6p3ou@git.kernel.org
+---
+ mm/filemap.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 03c5b0e..4a30d36 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2044,7 +2044,7 @@ size_t iov_iter_copy_from_user_atomic(struct page *page,
+ 	char *kaddr;
+ 	size_t copied;
+ 
+-	BUG_ON(!in_atomic());
++	BUG_ON(!pagefault_disabled());
+ 	kaddr = kmap_atomic(page, KM_USER0);
+ 	if (likely(i->nr_segs == 1)) {
+ 		int left;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0051-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0051-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0051-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0051-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch)
@@ -0,0 +1,41 @@
+From 39924d44ecc1c17b703cbb7e0437dfc1e3cd739b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 25 Jul 2009 22:06:27 +0200
+Subject: [PATCH 051/271] mm: Remove preempt count from pagefault
+ disable/enable
+
+Now that all users are cleaned up, we can remove the preemption count.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/memory.c |    7 -------
+ 1 file changed, 7 deletions(-)
+
+diff --git a/mm/memory.c b/mm/memory.c
+index a3f7ed8..7fa62d9 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3447,7 +3447,6 @@ unlock:
+ #ifdef CONFIG_PREEMPT_RT_FULL
+ void pagefault_disable(void)
+ {
+-	inc_preempt_count();
+ 	current->pagefault_disabled++;
+ 	/*
+ 	 * make sure to have issued the store before a pagefault
+@@ -3465,12 +3464,6 @@ void pagefault_enable(void)
+ 	 */
+ 	barrier();
+ 	current->pagefault_disabled--;
+-	dec_preempt_count();
+-	/*
+-	 * make sure we do..
+-	 */
+-	barrier();
+-	preempt_check_resched();
+ }
+ EXPORT_SYMBOL_GPL(pagefault_enable);
+ #endif
+-- 
+1.7.10
+

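[Editorial note: with patch 0051 applied, the RT `pagefault_disable()`/`pagefault_enable()` pair reduces to a nestable per-task counter with compiler barriers around the stores; the preempt count is no longer touched. A user-space approximation, where `barrier_sim()` and the counter are stand-ins for the kernel's `barrier()` and `current->pagefault_disabled`:]

```c
/* Compiler barrier, standing in for the kernel's barrier(). */
#define barrier_sim() __asm__ __volatile__("" ::: "memory")

static int pagefault_disabled_ctr;

static void pf_disable_sim(void)
{
	pagefault_disabled_ctr++;
	/* make sure the store is issued before any fault raised
	 * inside the protected section observes the counter */
	barrier_sim();
}

static void pf_enable_sim(void)
{
	/* keep accesses in the section ordered before re-enabling */
	barrier_sim();
	pagefault_disabled_ctr--;
}
```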
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0052-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0052-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0052-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0052-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch)
@@ -0,0 +1,30 @@
+From 076ac4ad95af7d86632b44189bd93e01ade134a7 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:25 -0500
+Subject: [PATCH 052/271] x86: highmem: Replace BUG_ON by WARN_ON
+
+The machine might survive that problem and be at least in a state
+which allows us to get more information about the problem.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/mm/highmem_32.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
+index f4f29b1..71bd7d6 100644
+--- a/arch/x86/mm/highmem_32.c
++++ b/arch/x86/mm/highmem_32.c
+@@ -43,7 +43,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+ 	type = kmap_atomic_idx_push();
+ 	idx = type + KM_TYPE_NR*smp_processor_id();
+ 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+-	BUG_ON(!pte_none(*(kmap_pte-idx)));
++	WARN_ON(!pte_none(*(kmap_pte-idx)));
+ 	set_pte(kmap_pte-idx, mk_pte(page, prot));
+ 	arch_flush_lazy_mmu_mode();
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0053-suspend-Prevent-might-sleep-splats.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0053-suspend-Prevent-might-sleep-splats.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0053-suspend-Prevent-might-sleep-splats.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0053-suspend-Prevent-might-sleep-splats.patch)
@@ -0,0 +1,116 @@
+From f9829f3dee7e6397782981ce38ea6eea429583f5 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 15 Jul 2010 10:29:00 +0200
+Subject: [PATCH 053/271] suspend: Prevent might sleep splats
+
+timekeeping suspend/resume calls read_persistant_clock() which takes
+rtc_lock. That results in might sleep warnings because at that point
+we run with interrupts disabled.
+
+We cannot convert rtc_lock to a raw spinlock as that would trigger
+other might sleep warnings.
+
+As a temporary workaround we disable the might sleep warnings by
+setting system_state to SYSTEM_SUSPEND before calling sysdev_suspend()
+and restoring it to SYSTEM_RUNNING afer sysdev_resume().
+
+Needs to be revisited.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/kernel.h   |    2 +-
+ kernel/power/hibernate.c |    7 +++++++
+ kernel/power/suspend.c   |    4 ++++
+ 3 files changed, 12 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/kernel.h b/include/linux/kernel.h
+index a70783d..22bdd4b 100644
+--- a/include/linux/kernel.h
++++ b/include/linux/kernel.h
+@@ -369,7 +369,7 @@ extern enum system_states {
+ 	SYSTEM_HALT,
+ 	SYSTEM_POWER_OFF,
+ 	SYSTEM_RESTART,
+-	SYSTEM_SUSPEND_DISK,
++	SYSTEM_SUSPEND,
+ } system_state;
+ 
+ #define TAINT_PROPRIETARY_MODULE	0
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 7c0d578..32741e2 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -284,6 +284,8 @@ static int create_image(int platform_mode)
+ 
+ 	local_irq_disable();
+ 
++	system_state = SYSTEM_SUSPEND;
++
+ 	error = syscore_suspend();
+ 	if (error) {
+ 		printk(KERN_ERR "PM: Some system devices failed to power down, "
+@@ -311,6 +313,7 @@ static int create_image(int platform_mode)
+ 	syscore_resume();
+ 
+  Enable_irqs:
++	system_state = SYSTEM_RUNNING;
+ 	local_irq_enable();
+ 
+  Enable_cpus:
+@@ -437,6 +440,7 @@ static int resume_target_kernel(bool platform_mode)
+ 		goto Enable_cpus;
+ 
+ 	local_irq_disable();
++	system_state = SYSTEM_SUSPEND;
+ 
+ 	error = syscore_suspend();
+ 	if (error)
+@@ -470,6 +474,7 @@ static int resume_target_kernel(bool platform_mode)
+ 	syscore_resume();
+ 
+  Enable_irqs:
++	system_state = SYSTEM_RUNNING;
+ 	local_irq_enable();
+ 
+  Enable_cpus:
+@@ -549,6 +554,7 @@ int hibernation_platform_enter(void)
+ 		goto Platform_finish;
+ 
+ 	local_irq_disable();
++	system_state = SYSTEM_SUSPEND;
+ 	syscore_suspend();
+ 	if (pm_wakeup_pending()) {
+ 		error = -EAGAIN;
+@@ -561,6 +567,7 @@ int hibernation_platform_enter(void)
+ 
+  Power_up:
+ 	syscore_resume();
++	system_state = SYSTEM_RUNNING;
+ 	local_irq_enable();
+ 	enable_nonboot_cpus();
+ 
+diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
+index 4953dc0..691f46e 100644
+--- a/kernel/power/suspend.c
++++ b/kernel/power/suspend.c
+@@ -171,6 +171,8 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
+ 	arch_suspend_disable_irqs();
+ 	BUG_ON(!irqs_disabled());
+ 
++	system_state = SYSTEM_SUSPEND;
++
+ 	error = syscore_suspend();
+ 	if (!error) {
+ 		*wakeup = pm_wakeup_pending();
+@@ -181,6 +183,8 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
+ 		syscore_resume();
+ 	}
+ 
++	system_state = SYSTEM_RUNNING;
++
+ 	arch_suspend_enable_irqs();
+ 	BUG_ON(irqs_disabled());
+ 
+-- 
+1.7.10
+

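[Editorial note: patch 0053 above brackets the irqs-off `syscore_suspend()`/`syscore_resume()` window with `system_state` transitions so might-sleep checks can be suppressed while suspending. The control flow, reduced to a user-space stand-in; the enum and `syscore_suspend_stub()` are illustrative, not kernel API:]

```c
#include <stdbool.h>

enum system_state_sim_t { SYS_RUNNING_SIM, SYS_SUSPEND_SIM };
static enum system_state_sim_t system_state_sim = SYS_RUNNING_SIM;

/* Hypothetical might-sleep check: only warn while fully running,
 * which is how the patch silences splats taken under rtc_lock. */
static bool might_sleep_warns(void)
{
	return system_state_sim == SYS_RUNNING_SIM;
}

/* Stub standing in for syscore_suspend(); it reports failure unless
 * the caller flipped system_state first, as the patch requires. */
static int syscore_suspend_stub(void)
{
	return system_state_sim == SYS_SUSPEND_SIM ? 0 : -1;
}

/* The bracket the patch adds around the irqs-off window: enter the
 * SUSPEND state, do the work, restore RUNNING before irqs return. */
static int do_suspend_window(void)
{
	int error;

	system_state_sim = SYS_SUSPEND_SIM;
	error = syscore_suspend_stub();
	system_state_sim = SYS_RUNNING_SIM;
	return error;
}
```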
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0054-OF-Fixup-resursive-locking-code-paths.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0054-OF-Fixup-resursive-locking-code-paths.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0054-OF-Fixup-resursive-locking-code-paths.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0054-OF-Fixup-resursive-locking-code-paths.patch)
@@ -0,0 +1,198 @@
+From 74894ff87857094267ea265d8e70a93fd2891d9c Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 13 Aug 2009 09:04:10 +0200
+Subject: [PATCH 054/271] OF: Fixup resursive locking code paths
+
+There is no real reason to use a rwlock for devtree_lock. It even
+could be a mutex, but unfortunately it's locked from cpu hotplug
+pathes which can't schedule :(
+
+So it needs to become a raw lock on rt as well. devtree_lock would be
+the only user of a raw_rw_lock, so we are better of cleaning the
+recursive locking pathes which allows us to convert devtree_lock to a
+read_lock.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/of/base.c |   93 ++++++++++++++++++++++++++++++++++++++++-------------
+ 1 file changed, 71 insertions(+), 22 deletions(-)
+
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 9b6588e..200f2dd 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -163,16 +163,14 @@ void of_node_put(struct device_node *node)
+ EXPORT_SYMBOL(of_node_put);
+ #endif /* !CONFIG_SPARC */
+ 
+-struct property *of_find_property(const struct device_node *np,
+-				  const char *name,
+-				  int *lenp)
++static struct property *__of_find_property(const struct device_node *np,
++					   const char *name, int *lenp)
+ {
+ 	struct property *pp;
+ 
+ 	if (!np)
+ 		return NULL;
+ 
+-	read_lock(&devtree_lock);
+ 	for (pp = np->properties; pp != 0; pp = pp->next) {
+ 		if (of_prop_cmp(pp->name, name) == 0) {
+ 			if (lenp != 0)
+@@ -180,6 +178,18 @@ struct property *of_find_property(const struct device_node *np,
+ 			break;
+ 		}
+ 	}
++
++	return pp;
++}
++
++struct property *of_find_property(const struct device_node *np,
++				  const char *name,
++				  int *lenp)
++{
++	struct property *pp;
++
++	read_lock(&devtree_lock);
++	pp = __of_find_property(np, name, lenp);
+ 	read_unlock(&devtree_lock);
+ 
+ 	return pp;
+@@ -213,8 +223,20 @@ EXPORT_SYMBOL(of_find_all_nodes);
+  * Find a property with a given name for a given node
+  * and return the value.
+  */
++static const void *__of_get_property(const struct device_node *np,
++				     const char *name, int *lenp)
++{
++	struct property *pp = __of_find_property(np, name, lenp);
++
++	return pp ? pp->value : NULL;
++}
++
++/*
++ * Find a property with a given name for a given node
++ * and return the value.
++ */
+ const void *of_get_property(const struct device_node *np, const char *name,
+-			 int *lenp)
++			    int *lenp)
+ {
+ 	struct property *pp = of_find_property(np, name, lenp);
+ 
+@@ -225,13 +247,13 @@ EXPORT_SYMBOL(of_get_property);
+ /** Checks if the given "compat" string matches one of the strings in
+  * the device's "compatible" property
+  */
+-int of_device_is_compatible(const struct device_node *device,
+-		const char *compat)
++static int __of_device_is_compatible(const struct device_node *device,
++				     const char *compat)
+ {
+ 	const char* cp;
+-	int cplen, l;
++	int uninitialized_var(cplen), l;
+ 
+-	cp = of_get_property(device, "compatible", &cplen);
++	cp = __of_get_property(device, "compatible", &cplen);
+ 	if (cp == NULL)
+ 		return 0;
+ 	while (cplen > 0) {
+@@ -244,6 +266,20 @@ int of_device_is_compatible(const struct device_node *device,
+ 
+ 	return 0;
+ }
++
++/** Checks if the given "compat" string matches one of the strings in
++ * the device's "compatible" property
++ */
++int of_device_is_compatible(const struct device_node *device,
++		const char *compat)
++{
++	int res;
++
++	read_lock(&devtree_lock);
++	res = __of_device_is_compatible(device, compat);
++	read_unlock(&devtree_lock);
++	return res;
++}
+ EXPORT_SYMBOL(of_device_is_compatible);
+ 
+ /**
+@@ -467,7 +503,8 @@ struct device_node *of_find_compatible_node(struct device_node *from,
+ 		if (type
+ 		    && !(np->type && (of_node_cmp(np->type, type) == 0)))
+ 			continue;
+-		if (of_device_is_compatible(np, compatible) && of_node_get(np))
++		if (__of_device_is_compatible(np, compatible) &&
++		    of_node_get(np))
+ 			break;
+ 	}
+ 	of_node_put(from);
+@@ -511,15 +548,9 @@ out:
+ }
+ EXPORT_SYMBOL(of_find_node_with_property);
+ 
+-/**
+- * of_match_node - Tell if an device_node has a matching of_match structure
+- *	@matches:	array of of device match structures to search in
+- *	@node:		the of device structure to match against
+- *
+- *	Low level utility function used by device matching.
+- */
+-const struct of_device_id *of_match_node(const struct of_device_id *matches,
+-					 const struct device_node *node)
++static
++const struct of_device_id *__of_match_node(const struct of_device_id *matches,
++					   const struct device_node *node)
+ {
+ 	if (!matches)
+ 		return NULL;
+@@ -533,14 +564,32 @@ const struct of_device_id *of_match_node(const struct of_device_id *matches,
+ 			match &= node->type
+ 				&& !strcmp(matches->type, node->type);
+ 		if (matches->compatible[0])
+-			match &= of_device_is_compatible(node,
+-						matches->compatible);
++			match &= __of_device_is_compatible(node,
++							   matches->compatible);
+ 		if (match)
+ 			return matches;
+ 		matches++;
+ 	}
+ 	return NULL;
+ }
++
++/**
++ * of_match_node - Tell if an device_node has a matching of_match structure
++ *	@matches:	array of of device match structures to search in
++ *	@node:		the of device structure to match against
++ *
++ *	Low level utility function used by device matching.
++ */
++const struct of_device_id *of_match_node(const struct of_device_id *matches,
++					 const struct device_node *node)
++{
++	const struct of_device_id *match;
++
++	read_lock(&devtree_lock);
++	match = __of_match_node(matches, node);
++	read_unlock(&devtree_lock);
++	return match;
++}
+ EXPORT_SYMBOL(of_match_node);
+ 
+ /**
+@@ -563,7 +612,7 @@ struct device_node *of_find_matching_node(struct device_node *from,
+ 	read_lock(&devtree_lock);
+ 	np = from ? from->allnext : allnodes;
+ 	for (; np; np = np->allnext) {
+-		if (of_match_node(matches, np) && of_node_get(np))
++		if (__of_match_node(matches, np) && of_node_get(np))
+ 			break;
+ 	}
+ 	of_node_put(from);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0055-of-convert-devtree-lock.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0055-of-convert-devtree-lock.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0055-of-convert-devtree-lock.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0055-of-convert-devtree-lock.patch.patch)
@@ -0,0 +1,396 @@
+From 8252ed8e9ef0e6395bc88bd3726f9b4412864a8b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 21 Mar 2011 14:35:34 +0100
+Subject: [PATCH 055/271] of-convert-devtree-lock.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/sparc/kernel/prom_common.c |    4 +-
+ drivers/of/base.c               |   92 ++++++++++++++++++++++-----------------
+ include/linux/of.h              |    2 +-
+ 3 files changed, 55 insertions(+), 43 deletions(-)
+
+diff --git a/arch/sparc/kernel/prom_common.c b/arch/sparc/kernel/prom_common.c
+index 741df91..ca73a28 100644
+--- a/arch/sparc/kernel/prom_common.c
++++ b/arch/sparc/kernel/prom_common.c
+@@ -65,7 +65,7 @@ int of_set_property(struct device_node *dp, const char *name, void *val, int len
+ 	err = -ENODEV;
+ 
+ 	mutex_lock(&of_set_property_mutex);
+-	write_lock(&devtree_lock);
++	raw_spin_lock(&devtree_lock);
+ 	prevp = &dp->properties;
+ 	while (*prevp) {
+ 		struct property *prop = *prevp;
+@@ -92,7 +92,7 @@ int of_set_property(struct device_node *dp, const char *name, void *val, int len
+ 		}
+ 		prevp = &(*prevp)->next;
+ 	}
+-	write_unlock(&devtree_lock);
++	raw_spin_unlock(&devtree_lock);
+ 	mutex_unlock(&of_set_property_mutex);
+ 
+ 	/* XXX Upate procfs if necessary... */
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 200f2dd..becc6ca 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -54,7 +54,7 @@ static DEFINE_MUTEX(of_aliases_mutex);
+ /* use when traversing tree through the allnext, child, sibling,
+  * or parent members of struct device_node.
+  */
+-DEFINE_RWLOCK(devtree_lock);
++DEFINE_RAW_SPINLOCK(devtree_lock);
+ 
+ int of_n_addr_cells(struct device_node *np)
+ {
+@@ -187,10 +187,11 @@ struct property *of_find_property(const struct device_node *np,
+ 				  int *lenp)
+ {
+ 	struct property *pp;
++	unsigned long flags;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	pp = __of_find_property(np, name, lenp);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 
+ 	return pp;
+ }
+@@ -208,13 +209,13 @@ struct device_node *of_find_all_nodes(struct device_node *prev)
+ {
+ 	struct device_node *np;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock(&devtree_lock);
+ 	np = prev ? prev->allnext : allnodes;
+ 	for (; np != NULL; np = np->allnext)
+ 		if (of_node_get(np))
+ 			break;
+ 	of_node_put(prev);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock(&devtree_lock);
+ 	return np;
+ }
+ EXPORT_SYMBOL(of_find_all_nodes);
+@@ -273,11 +274,12 @@ static int __of_device_is_compatible(const struct device_node *device,
+ int of_device_is_compatible(const struct device_node *device,
+ 		const char *compat)
+ {
++	unsigned long flags;
+ 	int res;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	res = __of_device_is_compatible(device, compat);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return res;
+ }
+ EXPORT_SYMBOL(of_device_is_compatible);
+@@ -339,13 +341,14 @@ EXPORT_SYMBOL(of_device_is_available);
+ struct device_node *of_get_parent(const struct device_node *node)
+ {
+ 	struct device_node *np;
++	unsigned long flags;
+ 
+ 	if (!node)
+ 		return NULL;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	np = of_node_get(node->parent);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return np;
+ }
+ EXPORT_SYMBOL(of_get_parent);
+@@ -364,14 +367,15 @@ EXPORT_SYMBOL(of_get_parent);
+ struct device_node *of_get_next_parent(struct device_node *node)
+ {
+ 	struct device_node *parent;
++	unsigned long flags;
+ 
+ 	if (!node)
+ 		return NULL;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	parent = of_node_get(node->parent);
+ 	of_node_put(node);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return parent;
+ }
+ 
+@@ -387,14 +391,15 @@ struct device_node *of_get_next_child(const struct device_node *node,
+ 	struct device_node *prev)
+ {
+ 	struct device_node *next;
++	unsigned long flags;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	next = prev ? prev->sibling : node->child;
+ 	for (; next; next = next->sibling)
+ 		if (of_node_get(next))
+ 			break;
+ 	of_node_put(prev);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return next;
+ }
+ EXPORT_SYMBOL(of_get_next_child);
+@@ -409,14 +414,15 @@ EXPORT_SYMBOL(of_get_next_child);
+ struct device_node *of_find_node_by_path(const char *path)
+ {
+ 	struct device_node *np = allnodes;
++	unsigned long flags;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	for (; np; np = np->allnext) {
+ 		if (np->full_name && (of_node_cmp(np->full_name, path) == 0)
+ 		    && of_node_get(np))
+ 			break;
+ 	}
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return np;
+ }
+ EXPORT_SYMBOL(of_find_node_by_path);
+@@ -436,15 +442,16 @@ struct device_node *of_find_node_by_name(struct device_node *from,
+ 	const char *name)
+ {
+ 	struct device_node *np;
++	unsigned long flags;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	np = from ? from->allnext : allnodes;
+ 	for (; np; np = np->allnext)
+ 		if (np->name && (of_node_cmp(np->name, name) == 0)
+ 		    && of_node_get(np))
+ 			break;
+ 	of_node_put(from);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return np;
+ }
+ EXPORT_SYMBOL(of_find_node_by_name);
+@@ -465,15 +472,16 @@ struct device_node *of_find_node_by_type(struct device_node *from,
+ 	const char *type)
+ {
+ 	struct device_node *np;
++	unsigned long flags;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	np = from ? from->allnext : allnodes;
+ 	for (; np; np = np->allnext)
+ 		if (np->type && (of_node_cmp(np->type, type) == 0)
+ 		    && of_node_get(np))
+ 			break;
+ 	of_node_put(from);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return np;
+ }
+ EXPORT_SYMBOL(of_find_node_by_type);
+@@ -496,8 +504,9 @@ struct device_node *of_find_compatible_node(struct device_node *from,
+ 	const char *type, const char *compatible)
+ {
+ 	struct device_node *np;
++	unsigned long flags;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	np = from ? from->allnext : allnodes;
+ 	for (; np; np = np->allnext) {
+ 		if (type
+@@ -508,7 +517,7 @@ struct device_node *of_find_compatible_node(struct device_node *from,
+ 			break;
+ 	}
+ 	of_node_put(from);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return np;
+ }
+ EXPORT_SYMBOL(of_find_compatible_node);
+@@ -530,8 +539,9 @@ struct device_node *of_find_node_with_property(struct device_node *from,
+ {
+ 	struct device_node *np;
+ 	struct property *pp;
++	unsigned long flags;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	np = from ? from->allnext : allnodes;
+ 	for (; np; np = np->allnext) {
+ 		for (pp = np->properties; pp != 0; pp = pp->next) {
+@@ -543,7 +553,7 @@ struct device_node *of_find_node_with_property(struct device_node *from,
+ 	}
+ out:
+ 	of_node_put(from);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return np;
+ }
+ EXPORT_SYMBOL(of_find_node_with_property);
+@@ -584,10 +594,11 @@ const struct of_device_id *of_match_node(const struct of_device_id *matches,
+ 					 const struct device_node *node)
+ {
+ 	const struct of_device_id *match;
++	unsigned long flags;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	match = __of_match_node(matches, node);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return match;
+ }
+ EXPORT_SYMBOL(of_match_node);
+@@ -608,15 +619,16 @@ struct device_node *of_find_matching_node(struct device_node *from,
+ 					  const struct of_device_id *matches)
+ {
+ 	struct device_node *np;
++	unsigned long flags;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	np = from ? from->allnext : allnodes;
+ 	for (; np; np = np->allnext) {
+ 		if (__of_match_node(matches, np) && of_node_get(np))
+ 			break;
+ 	}
+ 	of_node_put(from);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 	return np;
+ }
+ EXPORT_SYMBOL(of_find_matching_node);
+@@ -659,12 +671,12 @@ struct device_node *of_find_node_by_phandle(phandle handle)
+ {
+ 	struct device_node *np;
+ 
+-	read_lock(&devtree_lock);
++	raw_spin_lock(&devtree_lock);
+ 	for (np = allnodes; np; np = np->allnext)
+ 		if (np->phandle == handle)
+ 			break;
+ 	of_node_get(np);
+-	read_unlock(&devtree_lock);
++	raw_spin_unlock(&devtree_lock);
+ 	return np;
+ }
+ EXPORT_SYMBOL(of_find_node_by_phandle);
+@@ -998,18 +1010,18 @@ int prom_add_property(struct device_node *np, struct property *prop)
+ 	unsigned long flags;
+ 
+ 	prop->next = NULL;
+-	write_lock_irqsave(&devtree_lock, flags);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	next = &np->properties;
+ 	while (*next) {
+ 		if (strcmp(prop->name, (*next)->name) == 0) {
+ 			/* duplicate ! don't insert it */
+-			write_unlock_irqrestore(&devtree_lock, flags);
++			raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 			return -1;
+ 		}
+ 		next = &(*next)->next;
+ 	}
+ 	*next = prop;
+-	write_unlock_irqrestore(&devtree_lock, flags);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 
+ #ifdef CONFIG_PROC_DEVICETREE
+ 	/* try to add to proc as well if it was initialized */
+@@ -1034,7 +1046,7 @@ int prom_remove_property(struct device_node *np, struct property *prop)
+ 	unsigned long flags;
+ 	int found = 0;
+ 
+-	write_lock_irqsave(&devtree_lock, flags);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	next = &np->properties;
+ 	while (*next) {
+ 		if (*next == prop) {
+@@ -1047,7 +1059,7 @@ int prom_remove_property(struct device_node *np, struct property *prop)
+ 		}
+ 		next = &(*next)->next;
+ 	}
+-	write_unlock_irqrestore(&devtree_lock, flags);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 
+ 	if (!found)
+ 		return -ENODEV;
+@@ -1077,7 +1089,7 @@ int prom_update_property(struct device_node *np,
+ 	unsigned long flags;
+ 	int found = 0;
+ 
+-	write_lock_irqsave(&devtree_lock, flags);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	next = &np->properties;
+ 	while (*next) {
+ 		if (*next == oldprop) {
+@@ -1091,7 +1103,7 @@ int prom_update_property(struct device_node *np,
+ 		}
+ 		next = &(*next)->next;
+ 	}
+-	write_unlock_irqrestore(&devtree_lock, flags);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ 
+ 	if (!found)
+ 		return -ENODEV;
+@@ -1121,12 +1133,12 @@ void of_attach_node(struct device_node *np)
+ {
+ 	unsigned long flags;
+ 
+-	write_lock_irqsave(&devtree_lock, flags);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 	np->sibling = np->parent->child;
+ 	np->allnext = allnodes;
+ 	np->parent->child = np;
+ 	allnodes = np;
+-	write_unlock_irqrestore(&devtree_lock, flags);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ }
+ 
+ /**
+@@ -1140,7 +1152,7 @@ void of_detach_node(struct device_node *np)
+ 	struct device_node *parent;
+ 	unsigned long flags;
+ 
+-	write_lock_irqsave(&devtree_lock, flags);
++	raw_spin_lock_irqsave(&devtree_lock, flags);
+ 
+ 	parent = np->parent;
+ 	if (!parent)
+@@ -1171,7 +1183,7 @@ void of_detach_node(struct device_node *np)
+ 	of_node_set_flag(np, OF_DETACHED);
+ 
+ out_unlock:
+-	write_unlock_irqrestore(&devtree_lock, flags);
++	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+ }
+ #endif /* defined(CONFIG_OF_DYNAMIC) */
+ 
+diff --git a/include/linux/of.h b/include/linux/of.h
+index 4948552..e87303d 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -71,7 +71,7 @@ struct device_node {
+ extern struct device_node *allnodes;
+ extern struct device_node *of_chosen;
+ extern struct device_node *of_aliases;
+-extern rwlock_t devtree_lock;
++extern raw_spinlock_t devtree_lock;
+ 
+ static inline bool of_have_populated_dt(void)
+ {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0056-list-add-list-last-entry.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0056-list-add-list-last-entry.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0056-list-add-list-last-entry.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0056-list-add-list-last-entry.patch.patch)
@@ -0,0 +1,35 @@
+From 06226175ba2ac281ca10d0db0c268975870dd6d8 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Tue, 21 Jun 2011 11:22:36 +0200
+Subject: [PATCH 056/271] list-add-list-last-entry.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/list.h |   11 +++++++++++
+ 1 file changed, 11 insertions(+)
+
+diff --git a/include/linux/list.h b/include/linux/list.h
+index cc6d2aa..7a9851b 100644
+--- a/include/linux/list.h
++++ b/include/linux/list.h
+@@ -362,6 +362,17 @@ static inline void list_splice_tail_init(struct list_head *list,
+ 	list_entry((ptr)->next, type, member)
+ 
+ /**
++ * list_last_entry - get the last element from a list
++ * @ptr:	the list head to take the element from.
++ * @type:	the type of the struct this is embedded in.
++ * @member:	the name of the list_struct within the struct.
++ *
++ * Note, that list is expected to be not empty.
++ */
++#define list_last_entry(ptr, type, member) \
++	list_entry((ptr)->prev, type, member)
++
++/**
+  * list_for_each	-	iterate over a list
+  * @pos:	the &struct list_head to use as a loop cursor.
+  * @head:	the head for your list.
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0057-mm-page-alloc-use-list-last-entry.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0057-mm-page-alloc-use-list-last-entry.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0057-mm-page-alloc-use-list-last-entry.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0057-mm-page-alloc-use-list-last-entry.patch.patch)
@@ -0,0 +1,26 @@
+From 423684113807a90ccab3d5dd79ede0f1d24f5334 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Tue, 21 Jun 2011 11:24:35 +0200
+Subject: [PATCH 057/271] mm-page-alloc-use-list-last-entry.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/page_alloc.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 485be89..3344154 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -625,7 +625,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
+ 			batch_free = to_free;
+ 
+ 		do {
+-			page = list_entry(list->prev, struct page, lru);
++			page = list_last_entry(list, struct page, lru);
+ 			/* must delete as __free_one_page list manipulates */
+ 			list_del(&page->lru);
+ 			/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0058-mm-slab-move-debug-out.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0058-mm-slab-move-debug-out.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0058-mm-slab-move-debug-out.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0058-mm-slab-move-debug-out.patch.patch)
@@ -0,0 +1,42 @@
+From 614dc26581c4f193c97126e03adbd16c68502722 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 20 Jun 2011 10:42:04 +0200
+Subject: [PATCH 058/271] mm-slab-move-debug-out.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/slab.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/mm/slab.c b/mm/slab.c
+index b76905e..1fd9983 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -3851,10 +3851,10 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
+ {
+ 	unsigned long flags;
+ 
+-	local_irq_save(flags);
+ 	debug_check_no_locks_freed(objp, obj_size(cachep));
+ 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
+ 		debug_check_no_obj_freed(objp, obj_size(cachep));
++	local_irq_save(flags);
+ 	__cache_free(cachep, objp, __builtin_return_address(0));
+ 	local_irq_restore(flags);
+ 
+@@ -3880,11 +3880,11 @@ void kfree(const void *objp)
+ 
+ 	if (unlikely(ZERO_OR_NULL_PTR(objp)))
+ 		return;
+-	local_irq_save(flags);
+ 	kfree_debugcheck(objp);
+ 	c = virt_to_cache(objp);
+ 	debug_check_no_locks_freed(objp, obj_size(c));
+ 	debug_check_no_obj_freed(objp, obj_size(c));
++	local_irq_save(flags);
+ 	__cache_free(c, (void *)objp, __builtin_return_address(0));
+ 	local_irq_restore(flags);
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0059-rwsem-inlcude-fix.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0059-rwsem-inlcude-fix.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0059-rwsem-inlcude-fix.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0059-rwsem-inlcude-fix.patch.patch)
@@ -0,0 +1,25 @@
+From 119f420d9e07dca0e5757f342c02c680d1e48780 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 15 Jul 2011 21:24:27 +0200
+Subject: [PATCH 059/271] rwsem-inlcude-fix.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/pid.h |    1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/include/linux/pid.h b/include/linux/pid.h
+index b152d44..7f33683 100644
+--- a/include/linux/pid.h
++++ b/include/linux/pid.h
+@@ -2,6 +2,7 @@
+ #define _LINUX_PID_H
+ 
+ #include <linux/rcupdate.h>
++#include <linux/atomic.h>
+ 
+ enum pid_type
+ {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0060-sysctl-include-fix.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0060-sysctl-include-fix.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0060-sysctl-include-fix.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0060-sysctl-include-fix.patch.patch)
@@ -0,0 +1,25 @@
+From d9eb97653e6bc4262f652443c93119e82639add3 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 14 Nov 2011 10:52:34 +0100
+Subject: [PATCH 060/271] sysctl-include-fix.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sysctl.h |    1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
+index 703cfa33..b954c41 100644
+--- a/include/linux/sysctl.h
++++ b/include/linux/sysctl.h
+@@ -932,6 +932,7 @@ enum
+ #include <linux/list.h>
+ #include <linux/rcupdate.h>
+ #include <linux/wait.h>
++#include <linux/atomic.h>
+ 
+ /* For the /proc/sys support */
+ struct ctl_table;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0061-net-flip-lock-dep-thingy.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0061-net-flip-lock-dep-thingy.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0061-net-flip-lock-dep-thingy.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0061-net-flip-lock-dep-thingy.patch.patch)
@@ -0,0 +1,115 @@
+From 8aa0e728b15ef32d42669dfe33f2d9ac5d27ea3d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 28 Jun 2011 10:59:58 +0200
+Subject: [PATCH 061/271] net-flip-lock-dep-thingy.patch
+
+=======================================================
+[ INFO: possible circular locking dependency detected ]
+3.0.0-rc3+ #26
+-------------------------------------------------------
+ip/1104 is trying to acquire lock:
+ (local_softirq_lock){+.+...}, at: [<ffffffff81056d12>] __local_lock+0x25/0x68
+
+but task is already holding lock:
+ (sk_lock-AF_INET){+.+...}, at: [<ffffffff81433308>] lock_sock+0x10/0x12
+
+which lock already depends on the new lock.
+
+the existing dependency chain (in reverse order) is:
+
+-> #1 (sk_lock-AF_INET){+.+...}:
+       [<ffffffff810836e5>] lock_acquire+0x103/0x12e
+       [<ffffffff813e2781>] lock_sock_nested+0x82/0x92
+       [<ffffffff81433308>] lock_sock+0x10/0x12
+       [<ffffffff81433afa>] tcp_close+0x1b/0x355
+       [<ffffffff81453c99>] inet_release+0xc3/0xcd
+       [<ffffffff813dff3f>] sock_release+0x1f/0x74
+       [<ffffffff813dffbb>] sock_close+0x27/0x2b
+       [<ffffffff81129c63>] fput+0x11d/0x1e3
+       [<ffffffff81126577>] filp_close+0x70/0x7b
+       [<ffffffff8112667a>] sys_close+0xf8/0x13d
+       [<ffffffff814ae882>] system_call_fastpath+0x16/0x1b
+
+-> #0 (local_softirq_lock){+.+...}:
+       [<ffffffff81082ecc>] __lock_acquire+0xacc/0xdc8
+       [<ffffffff810836e5>] lock_acquire+0x103/0x12e
+       [<ffffffff814a7e40>] _raw_spin_lock+0x3b/0x4a
+       [<ffffffff81056d12>] __local_lock+0x25/0x68
+       [<ffffffff81056d8b>] local_bh_disable+0x36/0x3b
+       [<ffffffff814a7fc4>] _raw_write_lock_bh+0x16/0x4f
+       [<ffffffff81433c38>] tcp_close+0x159/0x355
+       [<ffffffff81453c99>] inet_release+0xc3/0xcd
+       [<ffffffff813dff3f>] sock_release+0x1f/0x74
+       [<ffffffff813dffbb>] sock_close+0x27/0x2b
+       [<ffffffff81129c63>] fput+0x11d/0x1e3
+       [<ffffffff81126577>] filp_close+0x70/0x7b
+       [<ffffffff8112667a>] sys_close+0xf8/0x13d
+       [<ffffffff814ae882>] system_call_fastpath+0x16/0x1b
+
+other info that might help us debug this:
+
+ Possible unsafe locking scenario:
+
+       CPU0                    CPU1
+       ----                    ----
+  lock(sk_lock-AF_INET);
+                               lock(local_softirq_lock);
+                               lock(sk_lock-AF_INET);
+  lock(local_softirq_lock);
+
+ *** DEADLOCK ***
+
+1 lock held by ip/1104:
+ #0:  (sk_lock-AF_INET){+.+...}, at: [<ffffffff81433308>] lock_sock+0x10/0x12
+
+stack backtrace:
+Pid: 1104, comm: ip Not tainted 3.0.0-rc3+ #26
+Call Trace:
+ [<ffffffff81081649>] print_circular_bug+0x1f8/0x209
+ [<ffffffff81082ecc>] __lock_acquire+0xacc/0xdc8
+ [<ffffffff81056d12>] ? __local_lock+0x25/0x68
+ [<ffffffff810836e5>] lock_acquire+0x103/0x12e
+ [<ffffffff81056d12>] ? __local_lock+0x25/0x68
+ [<ffffffff81046c75>] ? get_parent_ip+0x11/0x41
+ [<ffffffff814a7e40>] _raw_spin_lock+0x3b/0x4a
+ [<ffffffff81056d12>] ? __local_lock+0x25/0x68
+ [<ffffffff81046c8c>] ? get_parent_ip+0x28/0x41
+ [<ffffffff81056d12>] __local_lock+0x25/0x68
+ [<ffffffff81056d8b>] local_bh_disable+0x36/0x3b
+ [<ffffffff81433308>] ? lock_sock+0x10/0x12
+ [<ffffffff814a7fc4>] _raw_write_lock_bh+0x16/0x4f
+ [<ffffffff81433c38>] tcp_close+0x159/0x355
+ [<ffffffff81453c99>] inet_release+0xc3/0xcd
+ [<ffffffff813dff3f>] sock_release+0x1f/0x74
+ [<ffffffff813dffbb>] sock_close+0x27/0x2b
+ [<ffffffff81129c63>] fput+0x11d/0x1e3
+ [<ffffffff81126577>] filp_close+0x70/0x7b
+ [<ffffffff8112667a>] sys_close+0xf8/0x13d
+ [<ffffffff814ae882>] system_call_fastpath+0x16/0x1b
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ net/core/sock.c |    3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index b23f174..a87eb16 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2031,12 +2031,11 @@ void lock_sock_nested(struct sock *sk, int subclass)
+ 	if (sk->sk_lock.owned)
+ 		__lock_sock(sk);
+ 	sk->sk_lock.owned = 1;
+-	spin_unlock(&sk->sk_lock.slock);
++	spin_unlock_bh(&sk->sk_lock.slock);
+ 	/*
+ 	 * The sk_lock has mutex_lock() semantics here:
+ 	 */
+ 	mutex_acquire(&sk->sk_lock.dep_map, subclass, 0, _RET_IP_);
+-	local_bh_enable();
+ }
+ EXPORT_SYMBOL(lock_sock_nested);
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0062-softirq-thread-do-softirq.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0062-softirq-thread-do-softirq.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0062-softirq-thread-do-softirq.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0062-softirq-thread-do-softirq.patch.patch)
@@ -0,0 +1,40 @@
+From e8961150be29279a645e32f77c003547a50a6dd6 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 28 Jun 2011 15:44:15 +0200
+Subject: [PATCH 062/271] softirq-thread-do-softirq.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/interrupt.h |    2 ++
+ net/core/dev.c            |    2 +-
+ 2 files changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index a64b00e..21b94de 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -454,6 +454,8 @@ struct softirq_action
+ 
+ asmlinkage void do_softirq(void);
+ asmlinkage void __do_softirq(void);
++static inline void thread_do_softirq(void) { do_softirq(); }
++
+ extern void open_softirq(int nr, void (*action)(struct softirq_action *));
+ extern void softirq_init(void);
+ static inline void __raise_softirq_irqoff(unsigned int nr)
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 1cbddc9..1297da7 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3031,7 +3031,7 @@ int netif_rx_ni(struct sk_buff *skb)
+ 	preempt_disable();
+ 	err = netif_rx(skb);
+ 	if (local_softirq_pending())
+-		do_softirq();
++		thread_do_softirq();
+ 	preempt_enable();
+ 
+ 	return err;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0063-softirq-split-out-code.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0063-softirq-split-out-code.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0063-softirq-split-out-code.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0063-softirq-split-out-code.patch.patch)
@@ -0,0 +1,159 @@
+From b6a09c0018a9646352199a935c6a78438819011e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 28 Jun 2011 15:46:49 +0200
+Subject: [PATCH 063/271] softirq-split-out-code.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/softirq.c |   94 ++++++++++++++++++++++++++++++------------------------
+ 1 file changed, 52 insertions(+), 42 deletions(-)
+
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index a8becbf..c6c5824 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -76,6 +76,34 @@ static void wakeup_softirqd(void)
+ 		wake_up_process(tsk);
+ }
+ 
++static void handle_pending_softirqs(u32 pending, int cpu)
++{
++	struct softirq_action *h = softirq_vec;
++	unsigned int prev_count = preempt_count();
++
++	local_irq_enable();
++	for ( ; pending; h++, pending >>= 1) {
++		unsigned int vec_nr = h - softirq_vec;
++
++		if (!(pending & 1))
++			continue;
++
++		kstat_incr_softirqs_this_cpu(vec_nr);
++		trace_softirq_entry(vec_nr);
++		h->action(h);
++		trace_softirq_exit(vec_nr);
++		if (unlikely(prev_count != preempt_count())) {
++			printk(KERN_ERR
++ "huh, entered softirq %u %s %p with preempt_count %08x exited with %08x?\n",
++			       vec_nr, softirq_to_name[vec_nr], h->action,
++			       prev_count, (unsigned int) preempt_count());
++			preempt_count() = prev_count;
++		}
++		rcu_bh_qs(cpu);
++	}
++	local_irq_disable();
++}
++
+ /*
+  * preempt_count and SOFTIRQ_OFFSET usage:
+  * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
+@@ -206,7 +234,6 @@ EXPORT_SYMBOL(local_bh_enable_ip);
+ 
+ asmlinkage void __do_softirq(void)
+ {
+-	struct softirq_action *h;
+ 	__u32 pending;
+ 	int max_restart = MAX_SOFTIRQ_RESTART;
+ 	int cpu;
+@@ -215,7 +242,7 @@ asmlinkage void __do_softirq(void)
+ 	account_system_vtime(current);
+ 
+ 	__local_bh_disable((unsigned long)__builtin_return_address(0),
+-				SOFTIRQ_OFFSET);
++			   SOFTIRQ_OFFSET);
+ 	lockdep_softirq_enter();
+ 
+ 	cpu = smp_processor_id();
+@@ -223,36 +250,7 @@ restart:
+ 	/* Reset the pending bitmask before enabling irqs */
+ 	set_softirq_pending(0);
+ 
+-	local_irq_enable();
+-
+-	h = softirq_vec;
+-
+-	do {
+-		if (pending & 1) {
+-			unsigned int vec_nr = h - softirq_vec;
+-			int prev_count = preempt_count();
+-
+-			kstat_incr_softirqs_this_cpu(vec_nr);
+-
+-			trace_softirq_entry(vec_nr);
+-			h->action(h);
+-			trace_softirq_exit(vec_nr);
+-			if (unlikely(prev_count != preempt_count())) {
+-				printk(KERN_ERR "huh, entered softirq %u %s %p"
+-				       "with preempt_count %08x,"
+-				       " exited with %08x?\n", vec_nr,
+-				       softirq_to_name[vec_nr], h->action,
+-				       prev_count, preempt_count());
+-				preempt_count() = prev_count;
+-			}
+-
+-			rcu_bh_qs(cpu);
+-		}
+-		h++;
+-		pending >>= 1;
+-	} while (pending);
+-
+-	local_irq_disable();
++	handle_pending_softirqs(pending, cpu);
+ 
+ 	pending = local_softirq_pending();
+ 	if (pending && --max_restart)
+@@ -267,6 +265,26 @@ restart:
+ 	__local_bh_enable(SOFTIRQ_OFFSET);
+ }
+ 
++/*
++ * Called with preemption disabled from run_ksoftirqd()
++ */
++static int ksoftirqd_do_softirq(int cpu)
++{
++	/*
++	 * Preempt disable stops cpu going offline.
++	 * If already offline, we'll be on wrong CPU:
++	 * don't process.
++	 */
++	if (cpu_is_offline(cpu))
++		return -1;
++
++	local_irq_disable();
++	if (local_softirq_pending())
++		__do_softirq();
++	local_irq_enable();
++	return 0;
++}
++
+ #ifndef __ARCH_HAS_DO_SOFTIRQ
+ 
+ asmlinkage void do_softirq(void)
+@@ -743,22 +761,14 @@ static int run_ksoftirqd(void * __bind_cpu)
+ 
+ 	while (!kthread_should_stop()) {
+ 		preempt_disable();
+-		if (!local_softirq_pending()) {
++		if (!local_softirq_pending())
+ 			schedule_preempt_disabled();
+-		}
+ 
+ 		__set_current_state(TASK_RUNNING);
+ 
+ 		while (local_softirq_pending()) {
+-			/* Preempt disable stops cpu going offline.
+-			   If already offline, we'll be on wrong CPU:
+-			   don't process */
+-			if (cpu_is_offline((long)__bind_cpu))
++			if (ksoftirqd_do_softirq((long) __bind_cpu))
+ 				goto wait_to_die;
+-			local_irq_disable();
+-			if (local_softirq_pending())
+-				__do_softirq();
+-			local_irq_enable();
+ 			__preempt_enable_no_resched();
+ 			cond_resched();
+ 			preempt_disable();
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0064-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0064-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0064-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0064-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch)
@@ -0,0 +1,32 @@
+From b11c24cc29deb9c03a5a8bda3f7aae18ac13f5de Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:27 -0500
+Subject: [PATCH 064/271] x86: Do not unmask io_apic when interrupt is in
+ progress
+
+With threaded interrupts we might see an interrupt in progress on
+migration. Do not unmask it when this is the case.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kernel/apic/io_apic.c |    3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 8980555..91527bc 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -2521,7 +2521,8 @@ static void ack_apic_level(struct irq_data *data)
+ 	irq_complete_move(cfg);
+ #ifdef CONFIG_GENERIC_PENDING_IRQ
+ 	/* If we are moving the irq we need to mask it */
+-	if (unlikely(irqd_is_setaffinity_pending(data))) {
++	if (unlikely(irqd_is_setaffinity_pending(data) &&
++		     !irqd_irq_inprogress(data))) {
+ 		do_unmask_irq = 1;
+ 		mask_ioapic(cfg);
+ 	}
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0065-x86-32-fix-signal-crap.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0065-x86-32-fix-signal-crap.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0065-x86-32-fix-signal-crap.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0065-x86-32-fix-signal-crap.patch.patch)
@@ -0,0 +1,41 @@
+From 2c7c330f000b79354471dde638fc52c24b12ddd8 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 18 Jul 2011 15:59:38 +0200
+Subject: [PATCH 065/271] x86-32-fix-signal-crap.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kernel/entry_32.S |    8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
+index bcda816..426cf51 100644
+--- a/arch/x86/kernel/entry_32.S
++++ b/arch/x86/kernel/entry_32.S
+@@ -629,7 +629,11 @@ work_notifysig:				# deal with pending signals and
+ 	jne work_notifysig_v86		# returning to kernel-space or
+ 					# vm86-space
+ 	xorl %edx, %edx
++	TRACE_IRQS_ON
++	ENABLE_INTERRUPTS(CLBR_NONE)
+ 	call do_notify_resume
++	DISABLE_INTERRUPTS(CLBR_ANY)
++	TRACE_IRQS_OFF
+ 	jmp resume_userspace_sig
+ 
+ 	ALIGN
+@@ -642,7 +646,11 @@ work_notifysig_v86:
+ 	movl %esp, %eax
+ #endif
+ 	xorl %edx, %edx
++	TRACE_IRQS_ON
++	ENABLE_INTERRUPTS(CLBR_NONE)
+ 	call do_notify_resume
++	DISABLE_INTERRUPTS(CLBR_ANY)
++	TRACE_IRQS_OFF
+ 	jmp resume_userspace_sig
+ END(work_pending)
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0066-x86-Do-not-disable-preemption-in-int3-on-32bit.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0066-x86-Do-not-disable-preemption-in-int3-on-32bit.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0066-x86-Do-not-disable-preemption-in-int3-on-32bit.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0066-x86-Do-not-disable-preemption-in-int3-on-32bit.patch)
@@ -0,0 +1,118 @@
+From 74385c9f4871a36c3d6f4f3613c03eafe9787e50 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Tue, 10 Apr 2012 14:33:57 -0400
+Subject: [PATCH 066/271] x86: Do not disable preemption in int3 on 32bit
+
+Preemption must be disabled before enabling interrupts in do_trap
+on x86_64 because the stack in use for int3 and debug is a per CPU
+stack set by the IST. But 32bit does not have an IST and the stack
+still belongs to the current task and there is no problem in scheduling
+out the task.
+
+Keep preemption enabled on X86_32 when enabling interrupts for
+do_trap().
+
+The name of the function is changed from preempt_conditional_sti/cli()
+to conditional_sti/cli_ist(), to annotate that this function is used
+when the stack is on the IST.
+
+Cc: stable-rt at vger.kernel.org
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kernel/traps.c |   32 +++++++++++++++++++++++---------
+ 1 file changed, 23 insertions(+), 9 deletions(-)
+
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 31d9d0f..cc88aec 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -87,9 +87,21 @@ static inline void conditional_sti(struct pt_regs *regs)
+ 		local_irq_enable();
+ }
+ 
+-static inline void preempt_conditional_sti(struct pt_regs *regs)
++static inline void conditional_sti_ist(struct pt_regs *regs)
+ {
++#ifdef CONFIG_X86_64
++	/*
++	 * X86_64 uses a per CPU stack on the IST for certain traps
++	 * like int3. The task can not be preempted when using one
++	 * of these stacks, thus preemption must be disabled, otherwise
++	 * the stack can be corrupted if the task is scheduled out,
++	 * and another task comes in and uses this stack.
++	 *
++	 * On x86_32 the task keeps its own stack and it is OK if the
++	 * task schedules out.
++	 */
+ 	inc_preempt_count();
++#endif
+ 	if (regs->flags & X86_EFLAGS_IF)
+ 		local_irq_enable();
+ }
+@@ -100,11 +112,13 @@ static inline void conditional_cli(struct pt_regs *regs)
+ 		local_irq_disable();
+ }
+ 
+-static inline void preempt_conditional_cli(struct pt_regs *regs)
++static inline void conditional_cli_ist(struct pt_regs *regs)
+ {
+ 	if (regs->flags & X86_EFLAGS_IF)
+ 		local_irq_disable();
++#ifdef CONFIG_X86_64
+ 	dec_preempt_count();
++#endif
+ }
+ 
+ static void __kprobes
+@@ -222,9 +236,9 @@ dotraplinkage void do_stack_segment(struct pt_regs *regs, long error_code)
+ 	if (notify_die(DIE_TRAP, "stack segment", regs, error_code,
+ 			12, SIGBUS) == NOTIFY_STOP)
+ 		return;
+-	preempt_conditional_sti(regs);
++	conditional_sti_ist(regs);
+ 	do_trap(12, SIGBUS, "stack segment", regs, error_code, NULL);
+-	preempt_conditional_cli(regs);
++	conditional_cli_ist(regs);
+ }
+ 
+ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
+@@ -316,9 +330,9 @@ dotraplinkage void __kprobes do_int3(struct pt_regs *regs, long error_code)
+ 		return;
+ #endif
+ 
+-	preempt_conditional_sti(regs);
++	conditional_sti_ist(regs);
+ 	do_trap(3, SIGTRAP, "int3", regs, error_code, NULL);
+-	preempt_conditional_cli(regs);
++	conditional_cli_ist(regs);
+ }
+ 
+ #ifdef CONFIG_X86_64
+@@ -412,12 +426,12 @@ dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code)
+ 		return;
+ 
+ 	/* It's safe to allow irq's after DR6 has been saved */
+-	preempt_conditional_sti(regs);
++	conditional_sti_ist(regs);
+ 
+ 	if (regs->flags & X86_VM_MASK) {
+ 		handle_vm86_trap((struct kernel_vm86_regs *) regs,
+ 				error_code, 1);
+-		preempt_conditional_cli(regs);
++		conditional_cli_ist(regs);
+ 		return;
+ 	}
+ 
+@@ -436,7 +450,7 @@ dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code)
+ 	si_code = get_si_code(tsk->thread.debugreg6);
+ 	if (tsk->thread.debugreg6 & (DR_STEP | DR_TRAP_BITS) || user_icebp)
+ 		send_sigtrap(tsk, regs, error_code, si_code);
+-	preempt_conditional_cli(regs);
++	conditional_cli_ist(regs);
+ 
+ 	return;
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0067-rcu-Reduce-lock-section.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0067-rcu-Reduce-lock-section.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0067-rcu-Reduce-lock-section.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0067-rcu-Reduce-lock-section.patch)
@@ -0,0 +1,98 @@
+From c8bab1c789269abc24dc8bc7bfe19e5a94e99bd1 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 24 Jun 2011 22:23:02 +0200
+Subject: [PATCH 067/271] rcu: Reduce lock section
+
+So the waitqueue wakeup is outside the raw locked section.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/rcutree.c        |    2 +-
+ kernel/rcutree.h        |    3 ++-
+ kernel/rcutree_plugin.h |   14 ++++++++------
+ 3 files changed, 11 insertions(+), 8 deletions(-)
+
+diff --git a/kernel/rcutree.c b/kernel/rcutree.c
+index 6b76d81..8ef8675 100644
+--- a/kernel/rcutree.c
++++ b/kernel/rcutree.c
+@@ -1221,7 +1221,7 @@ static void __rcu_offline_cpu(int cpu, struct rcu_state *rsp)
+ 	else
+ 		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+ 	if (need_report & RCU_OFL_TASKS_EXP_GP)
+-		rcu_report_exp_rnp(rsp, rnp);
++		rcu_report_exp_rnp(rsp, rnp, true);
+ 	rcu_node_kthread_setaffinity(rnp, -1);
+ }
+ 
+diff --git a/kernel/rcutree.h b/kernel/rcutree.h
+index 849ce9e..dca495d 100644
+--- a/kernel/rcutree.h
++++ b/kernel/rcutree.h
+@@ -451,7 +451,8 @@ static void rcu_preempt_check_callbacks(int cpu);
+ static void rcu_preempt_process_callbacks(void);
+ void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
+ #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU)
+-static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp);
++static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
++			       bool wake);
+ #endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) */
+ static int rcu_preempt_pending(int cpu);
+ static int rcu_preempt_needs_cpu(int cpu);
+diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
+index 4b9b9f8..73cab33 100644
+--- a/kernel/rcutree_plugin.h
++++ b/kernel/rcutree_plugin.h
+@@ -407,7 +407,7 @@ static noinline void rcu_read_unlock_special(struct task_struct *t)
+ 		 * then we need to report up the rcu_node hierarchy.
+ 		 */
+ 		if (!empty_exp && !rcu_preempted_readers_exp(rnp))
+-			rcu_report_exp_rnp(&rcu_preempt_state, rnp);
++			rcu_report_exp_rnp(&rcu_preempt_state, rnp, true);
+ 	} else {
+ 		local_irq_restore(flags);
+ 	}
+@@ -731,7 +731,8 @@ static int sync_rcu_preempt_exp_done(struct rcu_node *rnp)
+  *
+  * Caller must hold sync_rcu_preempt_exp_mutex.
+  */
+-static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp)
++static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
++			       bool wake)
+ {
+ 	unsigned long flags;
+ 	unsigned long mask;
+@@ -744,7 +745,8 @@ static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp)
+ 		}
+ 		if (rnp->parent == NULL) {
+ 			raw_spin_unlock_irqrestore(&rnp->lock, flags);
+-			wake_up(&sync_rcu_preempt_exp_wq);
++			if (wake)
++				wake_up(&sync_rcu_preempt_exp_wq);
+ 			break;
+ 		}
+ 		mask = rnp->grpmask;
+@@ -777,7 +779,7 @@ sync_rcu_preempt_exp_init(struct rcu_state *rsp, struct rcu_node *rnp)
+ 		must_wait = 1;
+ 	}
+ 	if (!must_wait)
+-		rcu_report_exp_rnp(rsp, rnp);
++		rcu_report_exp_rnp(rsp, rnp, false);
+ }
+ 
+ /*
+@@ -1069,9 +1071,9 @@ EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
+  * report on tasks preempted in RCU read-side critical sections during
+  * expedited RCU grace periods.
+  */
+-static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp)
++static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
++			       bool wake)
+ {
+-	return;
+ }
+ 
+ #endif /* #ifdef CONFIG_HOTPLUG_CPU */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0068-locking-various-init-fixes.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0068-locking-various-init-fixes.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0068-locking-various-init-fixes.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0068-locking-various-init-fixes.patch.patch)
@@ -0,0 +1,100 @@
+From 28496932e35a52b3a0e3fdcb9451681a1892ec27 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 21:25:03 +0200
+Subject: [PATCH 068/271] locking-various-init-fixes.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/char/random.c            |    6 +++---
+ drivers/usb/gadget/ci13xxx_udc.c |    2 +-
+ fs/file.c                        |    2 +-
+ include/linux/idr.h              |    2 +-
+ kernel/cred.c                    |    2 +-
+ 5 files changed, 7 insertions(+), 7 deletions(-)
+
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 786a856..026bdbb 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -433,7 +433,7 @@ static struct entropy_store input_pool = {
+ 	.poolinfo = &poolinfo_table[0],
+ 	.name = "input",
+ 	.limit = 1,
+-	.lock = __SPIN_LOCK_UNLOCKED(&input_pool.lock),
++	.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
+ 	.pool = input_pool_data
+ };
+ 
+@@ -442,7 +442,7 @@ static struct entropy_store blocking_pool = {
+ 	.name = "blocking",
+ 	.limit = 1,
+ 	.pull = &input_pool,
+-	.lock = __SPIN_LOCK_UNLOCKED(&blocking_pool.lock),
++	.lock = __SPIN_LOCK_UNLOCKED(blocking_pool.lock),
+ 	.pool = blocking_pool_data
+ };
+ 
+@@ -450,7 +450,7 @@ static struct entropy_store nonblocking_pool = {
+ 	.poolinfo = &poolinfo_table[1],
+ 	.name = "nonblocking",
+ 	.pull = &input_pool,
+-	.lock = __SPIN_LOCK_UNLOCKED(&nonblocking_pool.lock),
++	.lock = __SPIN_LOCK_UNLOCKED(nonblocking_pool.lock),
+ 	.pool = nonblocking_pool_data
+ };
+ 
+diff --git a/drivers/usb/gadget/ci13xxx_udc.c b/drivers/usb/gadget/ci13xxx_udc.c
+index 9a0c397..f526873 100644
+--- a/drivers/usb/gadget/ci13xxx_udc.c
++++ b/drivers/usb/gadget/ci13xxx_udc.c
+@@ -819,7 +819,7 @@ static struct {
+ } dbg_data = {
+ 	.idx = 0,
+ 	.tty = 0,
+-	.lck = __RW_LOCK_UNLOCKED(lck)
++	.lck = __RW_LOCK_UNLOCKED(dbg_data.lck)
+ };
+ 
+ /**
+diff --git a/fs/file.c b/fs/file.c
+index 4c6992d..375472d 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -422,7 +422,7 @@ struct files_struct init_files = {
+ 		.close_on_exec	= (fd_set *)&init_files.close_on_exec_init,
+ 		.open_fds	= (fd_set *)&init_files.open_fds_init,
+ 	},
+-	.file_lock	= __SPIN_LOCK_UNLOCKED(init_task.file_lock),
++	.file_lock	= __SPIN_LOCK_UNLOCKED(init_files.file_lock),
+ };
+ 
+ /*
+diff --git a/include/linux/idr.h b/include/linux/idr.h
+index 255491c..4eaacf0 100644
+--- a/include/linux/idr.h
++++ b/include/linux/idr.h
+@@ -136,7 +136,7 @@ struct ida {
+ 	struct ida_bitmap	*free_bitmap;
+ };
+ 
+-#define IDA_INIT(name)		{ .idr = IDR_INIT(name), .free_bitmap = NULL, }
++#define IDA_INIT(name)		{ .idr = IDR_INIT((name).idr), .free_bitmap = NULL, }
+ #define DEFINE_IDA(name)	struct ida name = IDA_INIT(name)
+ 
+ int ida_pre_get(struct ida *ida, gfp_t gfp_mask);
+diff --git a/kernel/cred.c b/kernel/cred.c
+index 48c6fd3..482a0e3 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -35,7 +35,7 @@ static struct kmem_cache *cred_jar;
+ static struct thread_group_cred init_tgcred = {
+ 	.usage	= ATOMIC_INIT(2),
+ 	.tgid	= 0,
+-	.lock	= __SPIN_LOCK_UNLOCKED(init_cred.tgcred.lock),
++	.lock	= __SPIN_LOCK_UNLOCKED(init_tgcred.lock),
+ };
+ #endif
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0069-wait-Provide-__wake_up_all_locked.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0069-wait-Provide-__wake_up_all_locked.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0069-wait-Provide-__wake_up_all_locked.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0069-wait-Provide-__wake_up_all_locked.patch)
@@ -0,0 +1,58 @@
+From ae1b53b19a728c9b7fb6263e18a1d0a6f7bbb605 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 1 Dec 2011 00:04:00 +0100
+Subject: [PATCH 069/271] wait: Provide __wake_up_all_locked
+
+For code which protects the waitqueue itself with another lock it
+makes no sense to acquire the waitqueue lock for wakeup all. Provide
+__wake_up_all_locked.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ include/linux/wait.h |    5 +++--
+ kernel/sched.c       |    4 ++--
+ 2 files changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/include/linux/wait.h b/include/linux/wait.h
+index 3efc9f3..1e904b8 100644
+--- a/include/linux/wait.h
++++ b/include/linux/wait.h
+@@ -157,7 +157,7 @@ void __wake_up(wait_queue_head_t *q, unsigned int mode, int nr, void *key);
+ void __wake_up_locked_key(wait_queue_head_t *q, unsigned int mode, void *key);
+ void __wake_up_sync_key(wait_queue_head_t *q, unsigned int mode, int nr,
+ 			void *key);
+-void __wake_up_locked(wait_queue_head_t *q, unsigned int mode);
++void __wake_up_locked(wait_queue_head_t *q, unsigned int mode, int nr);
+ void __wake_up_sync(wait_queue_head_t *q, unsigned int mode, int nr);
+ void __wake_up_bit(wait_queue_head_t *, void *, int);
+ int __wait_on_bit(wait_queue_head_t *, struct wait_bit_queue *, int (*)(void *), unsigned);
+@@ -170,7 +170,8 @@ wait_queue_head_t *bit_waitqueue(void *, int);
+ #define wake_up(x)			__wake_up(x, TASK_NORMAL, 1, NULL)
+ #define wake_up_nr(x, nr)		__wake_up(x, TASK_NORMAL, nr, NULL)
+ #define wake_up_all(x)			__wake_up(x, TASK_NORMAL, 0, NULL)
+-#define wake_up_locked(x)		__wake_up_locked((x), TASK_NORMAL)
++#define wake_up_locked(x)		__wake_up_locked((x), TASK_NORMAL, 1)
++#define wake_up_all_locked(x)		__wake_up_locked((x), TASK_NORMAL, 0)
+ 
+ #define wake_up_interruptible(x)	__wake_up(x, TASK_INTERRUPTIBLE, 1, NULL)
+ #define wake_up_interruptible_nr(x, nr)	__wake_up(x, TASK_INTERRUPTIBLE, nr, NULL)
+diff --git a/kernel/sched.c b/kernel/sched.c
+index b432fe0..e1fee8d 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4636,9 +4636,9 @@ EXPORT_SYMBOL(__wake_up);
+ /*
+  * Same as __wake_up but called with the spinlock in wait_queue_head_t held.
+  */
+-void __wake_up_locked(wait_queue_head_t *q, unsigned int mode)
++void __wake_up_locked(wait_queue_head_t *q, unsigned int mode, int nr)
+ {
+-	__wake_up_common(q, mode, 1, 0, NULL);
++	__wake_up_common(q, mode, nr, 0, NULL);
+ }
+ EXPORT_SYMBOL_GPL(__wake_up_locked);
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0070-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0070-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0070-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0070-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch)
@@ -0,0 +1,32 @@
+From e518262a8ce30be5c174a615f11c7babcd4e2fed Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 1 Dec 2011 00:07:16 +0100
+Subject: [PATCH 070/271] pci: Use __wake_up_all_locked
+ pci_unblock_user_cfg_access()
+
+The waitqueue is protected by the pci_lock, so we can just avoid to
+lock the waitqueue lock itself. That prevents the
+might_sleep()/scheduling while atomic problem on RT
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ drivers/pci/access.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/pci/access.c b/drivers/pci/access.c
+index fdaa42a..1a6cc67 100644
+--- a/drivers/pci/access.c
++++ b/drivers/pci/access.c
+@@ -441,7 +441,7 @@ void pci_unblock_user_cfg_access(struct pci_dev *dev)
+ 	WARN_ON(!dev->block_ucfg_access);
+ 
+ 	dev->block_ucfg_access = 0;
+-	wake_up_all(&pci_ucfg_wait);
++	wake_up_all_locked(&pci_ucfg_wait);
+ 	raw_spin_unlock_irqrestore(&pci_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(pci_unblock_user_cfg_access);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0071-latency-hist.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0071-latency-hist.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0071-latency-hist.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0071-latency-hist.patch.patch)
@@ -0,0 +1,1796 @@
+From f1beb73035be977df46e1cc273d9165981d3c811 Mon Sep 17 00:00:00 2001
+From: Carsten Emde <C.Emde at osadl.org>
+Date: Tue, 19 Jul 2011 14:03:41 +0100
+Subject: [PATCH 071/271] latency-hist.patch
+
+This patch provides a recording mechanism to store data of potential
+sources of system latencies. The recordings separately determine the
+latency caused by a delayed timer expiration, by a delayed wakeup of the
+related user space program and by the sum of both. The histograms can be
+enabled and reset individually. The data are accessible via the debug
+filesystem. For details please consult Documentation/trace/histograms.txt.
+
+Signed-off-by: Carsten Emde <C.Emde at osadl.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ Documentation/trace/histograms.txt  |  186 ++++++
+ include/linux/sched.h               |    6 +
+ include/trace/events/hist.h         |   69 +++
+ include/trace/events/latency_hist.h |   30 +
+ kernel/hrtimer.c                    |   11 +
+ kernel/trace/Kconfig                |  104 ++++
+ kernel/trace/Makefile               |    4 +
+ kernel/trace/latency_hist.c         | 1170 +++++++++++++++++++++++++++++++++++
+ kernel/trace/trace_irqsoff.c        |   11 +
+ 9 files changed, 1591 insertions(+)
+ create mode 100644 Documentation/trace/histograms.txt
+ create mode 100644 include/trace/events/hist.h
+ create mode 100644 include/trace/events/latency_hist.h
+ create mode 100644 kernel/trace/latency_hist.c
+
+diff --git a/Documentation/trace/histograms.txt b/Documentation/trace/histograms.txt
+new file mode 100644
+index 0000000..6f2aeab
+--- /dev/null
++++ b/Documentation/trace/histograms.txt
+@@ -0,0 +1,186 @@
++		Using the Linux Kernel Latency Histograms
++
++
++This document gives a short explanation of how to enable, configure and use
++latency histograms. Latency histograms are primarily relevant in the
++context of real-time enabled kernels (CONFIG_PREEMPT/CONFIG_PREEMPT_RT)
++and are used in the quality management of the Linux real-time
++capabilities.
++
++
++* Purpose of latency histograms
++
++A latency histogram continuously accumulates the frequencies of latency
++data. There are two types of histograms:
++- potential sources of latencies
++- effective latencies
++
++
++* Potential sources of latencies
++
++Potential sources of latencies are code segments where interrupts,
++preemption or both are disabled (aka critical sections). To create
++histograms of potential sources of latency, the kernel stores the time
++stamp at the start of a critical section, determines the time elapsed
++when the end of the section is reached, and increments the frequency
++counter of that latency value - irrespective of whether any concurrently
++running process is affected by latency or not.
++- Configuration items (in the Kernel hacking/Tracers submenu)
++  CONFIG_INTERRUPT_OFF_LATENCY
++  CONFIG_PREEMPT_OFF_LATENCY
++
++
++* Effective latencies
++
++Effective latencies are actually occurring during wakeup of a process. To
++determine effective latencies, the kernel stores the time stamp when a
++process is scheduled to be woken up, and determines the duration of the
++wakeup time shortly before control is passed over to this process. Note
++that the apparent latency in user space may be somewhat longer, since the
++process may be interrupted after control is passed over to it but before
++the execution in user space takes place. Simply measuring the interval
++between enqueuing and wakeup may also not be appropriate in cases when a
++process is scheduled as a result of a timer expiration. The timer may have
++missed its deadline, e.g. due to disabled interrupts, but this latency
++would not be registered. Therefore, the offsets of missed timers are
++recorded in a separate histogram. If both wakeup latency and missed timer
++offsets are configured and enabled, a third histogram may be enabled that
++records the overall latency as a sum of the timer latency, if any, and the
++wakeup latency. This histogram is called "timerandwakeup".
++- Configuration items (in the Kernel hacking/Tracers submenu)
++  CONFIG_WAKEUP_LATENCY
++  CONFIG_MISSED_TIMER_OFFSETS
++
++
++* Usage
++
++The interface to the administration of the latency histograms is located
++in the debugfs file system. To mount it, either enter
++
++mount -t sysfs nodev /sys
++mount -t debugfs nodev /sys/kernel/debug
++
++from shell command line level, or add
++
++nodev	/sys			sysfs	defaults	0 0
++nodev	/sys/kernel/debug	debugfs	defaults	0 0
++
++to the file /etc/fstab. All latency histogram related files are then
++available in the directory /sys/kernel/debug/tracing/latency_hist. A
++particular histogram type is enabled by writing non-zero to the related
++variable in the /sys/kernel/debug/tracing/latency_hist/enable directory.
++Select "preemptirqsoff" for the histograms of potential sources of
++latencies, "wakeup" for histograms of effective latencies, and so on. The
++histogram data - one per CPU - are available in the files
++
++/sys/kernel/debug/tracing/latency_hist/preemptoff/CPUx
++/sys/kernel/debug/tracing/latency_hist/irqsoff/CPUx
++/sys/kernel/debug/tracing/latency_hist/preemptirqsoff/CPUx
++/sys/kernel/debug/tracing/latency_hist/wakeup/CPUx
++/sys/kernel/debug/tracing/latency_hist/wakeup/sharedprio/CPUx
++/sys/kernel/debug/tracing/latency_hist/missed_timer_offsets/CPUx
++/sys/kernel/debug/tracing/latency_hist/timerandwakeup/CPUx
++
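The enabling step described above can be scripted. Below is a minimal sketch, written in the same style as the reset script further down; it assumes debugfs is mounted at /sys/kernel/debug and degrades gracefully when the kernel was built without the histogram options:

```shell
#!/bin/sh

# Enable the wakeup latency histograms if this kernel provides them.
ENABLEDIR=/sys/kernel/debug/tracing/latency_hist/enable

if test -w $ENABLEDIR/wakeup
then
  echo 1 >$ENABLEDIR/wakeup
  status=enabled
else
  # kernel built without CONFIG_WAKEUP_LATENCY_HIST, or debugfs not mounted
  status=unavailable
fi
echo "wakeup histograms: $status"
```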
++The histograms are reset by writing non-zero to the file "reset" in a
++particular latency directory. To reset all latency data, use
++
++#!/bin/sh
++
++TRACINGDIR=/sys/kernel/debug/tracing
++HISTDIR=$TRACINGDIR/latency_hist
++
++if test -d $HISTDIR
++then
++  cd $HISTDIR
++  for i in `find . | grep /reset$`
++  do
++    echo 1 >$i
++  done
++fi
++
++
++* Data format
++
++Latency data are stored with a resolution of one microsecond. The
++maximum latency is 10,240 microseconds. The data are only valid if the
++overflow counter (the "samples greater or equal than 10240 microseconds"
++line) is zero. Every output line contains the latency in microseconds
++in the first column and the number of samples in the second column. To
++display only lines with a non-zero sample count, use, for example,
++
++grep -v " 0$" /sys/kernel/debug/tracing/latency_hist/preemptoff/CPU0
++
++#Minimum latency: 0 microseconds.
++#Average latency: 0 microseconds.
++#Maximum latency: 25 microseconds.
++#Total samples: 3104770694
++#There are 0 samples greater or equal than 10240 microseconds
++#usecs	         samples
++    0	      2984486876
++    1	        49843506
++    2	        58219047
++    3	         5348126
++    4	         2187960
++    5	         3388262
++    6	          959289
++    7	          208294
++    8	           40420
++    9	            4485
++   10	           14918
++   11	           18340
++   12	           25052
++   13	           19455
++   14	            5602
++   15	             969
++   16	              47
++   17	              18
++   18	              14
++   19	               1
++   20	               3
++   21	               2
++   22	               5
++   23	               2
++   25	               1
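Because each output line pairs a latency value with a sample count, simple summaries can be computed with awk. A sketch (the here-document is an abbreviated copy of the example above; on a real system point it at a CPUx file instead):

```shell
#!/bin/sh

# Summarize a latency histogram: total samples and worst latency seen.
# The embedded data are abbreviated sample data for illustration only.
HIST=$(mktemp)
cat >"$HIST" <<'EOF'
#usecs	         samples
    0	      2984486876
    1	        49843506
   25	               1
EOF
# Skip comment lines, sum the sample counts, remember the last non-zero bin
summary=$(awk '!/^#/ && $2 > 0 { total += $2; max = $1 }
	END { printf "samples=%.0f max_usecs=%d", total, max }' "$HIST")
echo "$summary"
rm -f "$HIST"
```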
++
++
++* Wakeup latency of a selected process
++
++To only collect wakeup latency data of a particular process, write the
++PID of the requested process to
++
++/sys/kernel/debug/tracing/latency_hist/wakeup/pid
++
++PIDs are not considered if this variable is set to 0.
++
++
++* Details of the process with the highest wakeup latency so far
++
++Selected data of the process that suffered from the highest wakeup
++latency that occurred on a particular CPU are available in the file
++
++/sys/kernel/debug/tracing/latency_hist/wakeup/max_latency-CPUx.
++
++In addition, other relevant system data at the time when the
++latency occurred are given.
++
++The format of the data is (all in one line):
++<PID> <Priority> <Latency> (<Timeroffset>) <Command> \
++<- <PID> <Priority> <Command> <Timestamp>
++
++The value of <Timeroffset> is only relevant in the combined timer
++and wakeup latency recording. In the wakeup recording, it is
++always 0, in the missed_timer_offsets recording, it is the same
++as <Latency>.
++
++When retrospectively searching for the origin of a latency and
++tracing was not enabled, it may be helpful to know the name and
++some basic data of the task that (finally) switched to the late
++real-time task. In addition to the victim's data, the data of the
++possible culprit are therefore displayed after the "<-" symbol.
++
++Finally, the timestamp of the time when the latency occurred is
++provided in <seconds>.<microseconds> since the most recent system
++boot.
++
++These data are also reset when the wakeup histogram is reset.
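A max_latency-CPUx line in the format above can be split into labelled fields with awk. A sketch; the sample line is invented for illustration (real data come from e.g. /sys/kernel/debug/tracing/latency_hist/wakeup/max_latency-CPU0):

```shell
#!/bin/sh

# Split a max_latency-CPUx line into labelled victim/culprit fields.
# Fields: PID prio latency (timeroffset) comm <- PID prio comm timestamp
LINE='1234 98 15 (0) cyclictest <- 5678 0 ksoftirqd/0 123.456789'
parsed=$(echo "$LINE" | awk '{
	gsub(/[()]/, "", $4)
	printf "victim: pid=%s prio=%s latency=%s timeroffset=%s comm=%s\n",
		$1, $2, $3, $4, $5
	printf "culprit: pid=%s prio=%s comm=%s timestamp=%s", $7, $8, $9, $10
}')
echo "$parsed"
```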
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 8cb4365..30ac0b5 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1570,6 +1570,12 @@ struct task_struct {
+ 	unsigned long trace;
+ 	/* bitmask and counter of trace recursion */
+ 	unsigned long trace_recursion;
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++	u64 preempt_timestamp_hist;
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++	unsigned long timer_offset;
++#endif
++#endif
+ #endif /* CONFIG_TRACING */
+ #ifdef CONFIG_CGROUP_MEM_RES_CTLR /* memcg uses this to do batch job */
+ 	struct memcg_batch_info {
+diff --git a/include/trace/events/hist.h b/include/trace/events/hist.h
+new file mode 100644
+index 0000000..28646db
+--- /dev/null
++++ b/include/trace/events/hist.h
+@@ -0,0 +1,69 @@
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM hist
++
++#if !defined(_TRACE_HIST_H) || defined(TRACE_HEADER_MULTI_READ)
++#define _TRACE_HIST_H
++
++#include "latency_hist.h"
++#include <linux/tracepoint.h>
++
++#if !defined(CONFIG_PREEMPT_OFF_HIST) && !defined(CONFIG_INTERRUPT_OFF_HIST)
++#define trace_preemptirqsoff_hist(a,b)
++#else
++TRACE_EVENT(preemptirqsoff_hist,
++
++	TP_PROTO(int reason, int starthist),
++
++	TP_ARGS(reason, starthist),
++
++	TP_STRUCT__entry(
++		__field(int,	reason	)
++		__field(int,	starthist	)
++	),
++
++	TP_fast_assign(
++		__entry->reason		= reason;
++		__entry->starthist	= starthist;
++	),
++
++	TP_printk("reason=%s starthist=%s", getaction(__entry->reason),
++		  __entry->starthist ? "start" : "stop")
++);
++#endif
++
++#ifndef CONFIG_MISSED_TIMER_OFFSETS_HIST
++#define trace_hrtimer_interrupt(a,b,c,d)
++#else
++TRACE_EVENT(hrtimer_interrupt,
++
++	TP_PROTO(int cpu, long long offset, struct task_struct *curr, struct task_struct *task),
++
++	TP_ARGS(cpu, offset, curr, task),
++
++	TP_STRUCT__entry(
++		__field(int,		cpu	)
++		__field(long long,	offset	)
++		__array(char,		ccomm,	TASK_COMM_LEN)
++		__field(int,		cprio	)
++		__array(char,		tcomm,	TASK_COMM_LEN)
++		__field(int,		tprio	)
++	),
++
++	TP_fast_assign(
++		__entry->cpu	= cpu;
++		__entry->offset	= offset;
++		memcpy(__entry->ccomm, curr->comm, TASK_COMM_LEN);
++		__entry->cprio  = curr->prio;
++		memcpy(__entry->tcomm, task != NULL ? task->comm : "<none>", task != NULL ? TASK_COMM_LEN : 7);
++		__entry->tprio  = task != NULL ? task->prio : -1;
++	),
++
++	TP_printk("cpu=%d offset=%lld curr=%s[%d] thread=%s[%d]",
++		__entry->cpu, __entry->offset, __entry->ccomm, __entry->cprio, __entry->tcomm, __entry->tprio)
++);
++#endif
++
++#endif /* _TRACE_HIST_H */
++
++/* This part must be outside protection */
++#include <trace/define_trace.h>
+diff --git a/include/trace/events/latency_hist.h b/include/trace/events/latency_hist.h
+new file mode 100644
+index 0000000..d6b5d77
+--- /dev/null
++++ b/include/trace/events/latency_hist.h
+@@ -0,0 +1,30 @@
++#ifndef _LATENCY_HIST_H
++#define _LATENCY_HIST_H
++
++enum hist_action {
++	IRQS_ON,
++	PREEMPT_ON,
++	TRACE_STOP,
++	IRQS_OFF,
++	PREEMPT_OFF,
++	TRACE_START,
++};
++
++static char *actions[] = {
++	"IRQS_ON",
++	"PREEMPT_ON",
++	"TRACE_STOP",
++	"IRQS_OFF",
++	"PREEMPT_OFF",
++	"TRACE_START",
++};
++
++static inline char *getaction(int action)
++{
++	if (action >= 0 && action < sizeof(actions)/sizeof(actions[0]))
++		return(actions[action]);
++	return("unknown");
++}
++
++#endif /* _LATENCY_HIST_H */
++
+diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
+index ae34bf5..1a3695e 100644
+--- a/kernel/hrtimer.c
++++ b/kernel/hrtimer.c
+@@ -49,6 +49,7 @@
+ #include <asm/uaccess.h>
+ 
+ #include <trace/events/timer.h>
++#include <trace/events/hist.h>
+ 
+ /*
+  * The timer bases:
+@@ -1236,6 +1237,8 @@ static void __run_hrtimer(struct hrtimer *timer, ktime_t *now)
+ 
+ #ifdef CONFIG_HIGH_RES_TIMERS
+ 
++static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer);
++
+ /*
+  * High resolution timer interrupt
+  * Called with interrupts disabled
+@@ -1280,6 +1283,14 @@ retry:
+ 
+ 			timer = container_of(node, struct hrtimer, node);
+ 
++			trace_hrtimer_interrupt(raw_smp_processor_id(),
++			    ktime_to_ns(ktime_sub(
++				hrtimer_get_expires(timer), basenow)),
++			    current,
++			    timer->function == hrtimer_wakeup ?
++			    container_of(timer, struct hrtimer_sleeper,
++				timer)->task : NULL);
++
+ 			/*
+ 			 * The immediate goal for using the softexpires is
+ 			 * minimizing wakeups, not running timers at the
+diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
+index cd31345..2685322 100644
+--- a/kernel/trace/Kconfig
++++ b/kernel/trace/Kconfig
+@@ -192,6 +192,24 @@ config IRQSOFF_TRACER
+ 	  enabled. This option and the preempt-off timing option can be
+ 	  used together or separately.)
+ 
++config INTERRUPT_OFF_HIST
++	bool "Interrupts-off Latency Histogram"
++	depends on IRQSOFF_TRACER
++	help
++	  This option generates continuously updated histograms (one per cpu)
++	  of the duration of time periods with interrupts disabled. The
++	  histograms are disabled by default. To enable them, write a non-zero
++	  number to
++
++	      /sys/kernel/debug/tracing/latency_hist/enable/preemptirqsoff
++
++	  If PREEMPT_OFF_HIST is also selected, additional histograms (one
++	  per cpu) are generated that accumulate the duration of time periods
++	  when both interrupts and preemption are disabled. The histogram data
++	  will be located in the debug file system at
++
++	      /sys/kernel/debug/tracing/latency_hist/irqsoff
++
+ config PREEMPT_TRACER
+ 	bool "Preemption-off Latency Tracer"
+ 	default n
+@@ -214,6 +232,24 @@ config PREEMPT_TRACER
+ 	  enabled. This option and the irqs-off timing option can be
+ 	  used together or separately.)
+ 
++config PREEMPT_OFF_HIST
++	bool "Preemption-off Latency Histogram"
++	depends on PREEMPT_TRACER
++	help
++	  This option generates continuously updated histograms (one per cpu)
++	  of the duration of time periods with preemption disabled. The
++	  histograms are disabled by default. To enable them, write a non-zero
++	  number to
++
++	      /sys/kernel/debug/tracing/latency_hist/enable/preemptirqsoff
++
++	  If INTERRUPT_OFF_HIST is also selected, additional histograms (one
++	  per cpu) are generated that accumulate the duration of time periods
++	  when both interrupts and preemption are disabled. The histogram data
++	  will be located in the debug file system at
++
++	      /sys/kernel/debug/tracing/latency_hist/preemptoff
++
+ config SCHED_TRACER
+ 	bool "Scheduling Latency Tracer"
+ 	select GENERIC_TRACER
+@@ -223,6 +259,74 @@ config SCHED_TRACER
+ 	  This tracer tracks the latency of the highest priority task
+ 	  to be scheduled in, starting from the point it has woken up.
+ 
++config WAKEUP_LATENCY_HIST
++	bool "Scheduling Latency Histogram"
++	depends on SCHED_TRACER
++	help
++	  This option generates continuously updated histograms (one per cpu)
++	  of the scheduling latency of the highest priority task.
++	  The histograms are disabled by default. To enable them, write a
++	  non-zero number to
++
++	      /sys/kernel/debug/tracing/latency_hist/enable/wakeup
++
++	  Two different algorithms are used, one to determine the latency of
++	  processes that exclusively use the highest priority of the system and
++	  another one to determine the latency of processes that share the
++	  highest system priority with other processes. The former is used to
++	  improve hardware and system software, the latter to optimize the
++	  priority design of a given system. The histogram data will be
++	  located in the debug file system at
++
++	      /sys/kernel/debug/tracing/latency_hist/wakeup
++
++	  and
++
++	      /sys/kernel/debug/tracing/latency_hist/wakeup/sharedprio
++
++	  If both Scheduling Latency Histogram and Missed Timer Offsets
++	  Histogram are selected, additional histogram data will be collected
++	  that contain, in addition to the wakeup latency, the timer latency, in
++	  case the wakeup was triggered by an expired timer. These histograms
++	  are available in the
++
++	      /sys/kernel/debug/tracing/latency_hist/timerandwakeup
++
++	  directory. They reflect the apparent interrupt and scheduling latency
++	  and are best suitable to determine the worst-case latency of a given
++	  system. To enable these histograms, write a non-zero number to
++
++	      /sys/kernel/debug/tracing/latency_hist/enable/timerandwakeup
++
++config MISSED_TIMER_OFFSETS_HIST
++	depends on HIGH_RES_TIMERS
++	select GENERIC_TRACER
++	bool "Missed Timer Offsets Histogram"
++	help
++	  Generate a histogram of missed timer offsets in microseconds. The
++	  histograms are disabled by default. To enable them, write a non-zero
++	  number to
++
++	      /sys/kernel/debug/tracing/latency_hist/enable/missed_timer_offsets
++
++	  The histogram data will be located in the debug file system at
++
++	      /sys/kernel/debug/tracing/latency_hist/missed_timer_offsets
++
++	  If both Scheduling Latency Histogram and Missed Timer Offsets
++	  Histogram are selected, additional histogram data will be collected
++	  that contain, in addition to the wakeup latency, the timer latency, in
++	  case the wakeup was triggered by an expired timer. These histograms
++	  are available in the
++
++	      /sys/kernel/debug/tracing/latency_hist/timerandwakeup
++
++	  directory. They reflect the apparent interrupt and scheduling latency
++	  and are best suitable to determine the worst-case latency of a given
++	  system. To enable these histograms, write a non-zero number to
++
++	      /sys/kernel/debug/tracing/latency_hist/enable/timerandwakeup
++
+ config ENABLE_DEFAULT_TRACERS
+ 	bool "Trace process context switches and events"
+ 	depends on !GENERIC_TRACER
+diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
+index 5f39a07..108a387 100644
+--- a/kernel/trace/Makefile
++++ b/kernel/trace/Makefile
+@@ -36,6 +36,10 @@ obj-$(CONFIG_FUNCTION_TRACER) += trace_functions.o
+ obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
+ obj-$(CONFIG_PREEMPT_TRACER) += trace_irqsoff.o
+ obj-$(CONFIG_SCHED_TRACER) += trace_sched_wakeup.o
++obj-$(CONFIG_INTERRUPT_OFF_HIST) += latency_hist.o
++obj-$(CONFIG_PREEMPT_OFF_HIST) += latency_hist.o
++obj-$(CONFIG_WAKEUP_LATENCY_HIST) += latency_hist.o
++obj-$(CONFIG_MISSED_TIMER_OFFSETS_HIST) += latency_hist.o
+ obj-$(CONFIG_NOP_TRACER) += trace_nop.o
+ obj-$(CONFIG_STACK_TRACER) += trace_stack.o
+ obj-$(CONFIG_MMIOTRACE) += trace_mmiotrace.o
+diff --git a/kernel/trace/latency_hist.c b/kernel/trace/latency_hist.c
+new file mode 100644
+index 0000000..9d49fcb
+--- /dev/null
++++ b/kernel/trace/latency_hist.c
+@@ -0,0 +1,1170 @@
++/*
++ * kernel/trace/latency_hist.c
++ *
++ * Add support for histograms of preemption-off latency and
++ * interrupt-off latency and wakeup latency, it depends on
++ * Real-Time Preemption Support.
++ *
++ *  Copyright (C) 2005 MontaVista Software, Inc.
++ *  Yi Yang <yyang at ch.mvista.com>
++ *
++ *  Converted to work with the new latency tracer.
++ *  Copyright (C) 2008 Red Hat, Inc.
++ *    Steven Rostedt <srostedt at redhat.com>
++ *
++ */
++#include <linux/module.h>
++#include <linux/debugfs.h>
++#include <linux/seq_file.h>
++#include <linux/percpu.h>
++#include <linux/kallsyms.h>
++#include <linux/uaccess.h>
++#include <linux/sched.h>
++#include <linux/slab.h>
++#include <asm/atomic.h>
++#include <asm/div64.h>
++
++#include "trace.h"
++#include <trace/events/sched.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/hist.h>
++
++enum {
++	IRQSOFF_LATENCY = 0,
++	PREEMPTOFF_LATENCY,
++	PREEMPTIRQSOFF_LATENCY,
++	WAKEUP_LATENCY,
++	WAKEUP_LATENCY_SHAREDPRIO,
++	MISSED_TIMER_OFFSETS,
++	TIMERANDWAKEUP_LATENCY,
++	MAX_LATENCY_TYPE,
++};
++
++#define MAX_ENTRY_NUM 10240
++
++struct hist_data {
++	atomic_t hist_mode; /* 0 log, 1 don't log */
++	long offset; /* set it to MAX_ENTRY_NUM/2 for a bipolar scale */
++	unsigned long min_lat;
++	unsigned long max_lat;
++	unsigned long long below_hist_bound_samples;
++	unsigned long long above_hist_bound_samples;
++	unsigned long long accumulate_lat;
++	unsigned long long total_samples;
++	unsigned long long hist_array[MAX_ENTRY_NUM];
++};
++
++struct enable_data {
++	int latency_type;
++	int enabled;
++};
++
++static char *latency_hist_dir_root = "latency_hist";
++
++#ifdef CONFIG_INTERRUPT_OFF_HIST
++static DEFINE_PER_CPU(struct hist_data, irqsoff_hist);
++static char *irqsoff_hist_dir = "irqsoff";
++static DEFINE_PER_CPU(cycles_t, hist_irqsoff_start);
++static DEFINE_PER_CPU(int, hist_irqsoff_counting);
++#endif
++
++#ifdef CONFIG_PREEMPT_OFF_HIST
++static DEFINE_PER_CPU(struct hist_data, preemptoff_hist);
++static char *preemptoff_hist_dir = "preemptoff";
++static DEFINE_PER_CPU(cycles_t, hist_preemptoff_start);
++static DEFINE_PER_CPU(int, hist_preemptoff_counting);
++#endif
++
++#if defined(CONFIG_PREEMPT_OFF_HIST) && defined(CONFIG_INTERRUPT_OFF_HIST)
++static DEFINE_PER_CPU(struct hist_data, preemptirqsoff_hist);
++static char *preemptirqsoff_hist_dir = "preemptirqsoff";
++static DEFINE_PER_CPU(cycles_t, hist_preemptirqsoff_start);
++static DEFINE_PER_CPU(int, hist_preemptirqsoff_counting);
++#endif
++
++#if defined(CONFIG_PREEMPT_OFF_HIST) || defined(CONFIG_INTERRUPT_OFF_HIST)
++static notrace void probe_preemptirqsoff_hist(void *v, int reason, int start);
++static struct enable_data preemptirqsoff_enabled_data = {
++	.latency_type = PREEMPTIRQSOFF_LATENCY,
++	.enabled = 0,
++};
++#endif
++
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++struct maxlatproc_data {
++	char comm[FIELD_SIZEOF(struct task_struct, comm)];
++	char current_comm[FIELD_SIZEOF(struct task_struct, comm)];
++	int pid;
++	int current_pid;
++	int prio;
++	int current_prio;
++	long latency;
++	long timeroffset;
++	cycle_t timestamp;
++};
++#endif
++
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++static DEFINE_PER_CPU(struct hist_data, wakeup_latency_hist);
++static DEFINE_PER_CPU(struct hist_data, wakeup_latency_hist_sharedprio);
++static char *wakeup_latency_hist_dir = "wakeup";
++static char *wakeup_latency_hist_dir_sharedprio = "sharedprio";
++static notrace void probe_wakeup_latency_hist_start(void *v,
++    struct task_struct *p, int success);
++static notrace void probe_wakeup_latency_hist_stop(void *v,
++    struct task_struct *prev, struct task_struct *next);
++static notrace void probe_sched_migrate_task(void *,
++    struct task_struct *task, int cpu);
++static struct enable_data wakeup_latency_enabled_data = {
++	.latency_type = WAKEUP_LATENCY,
++	.enabled = 0,
++};
++static DEFINE_PER_CPU(struct maxlatproc_data, wakeup_maxlatproc);
++static DEFINE_PER_CPU(struct maxlatproc_data, wakeup_maxlatproc_sharedprio);
++static DEFINE_PER_CPU(struct task_struct *, wakeup_task);
++static DEFINE_PER_CPU(int, wakeup_sharedprio);
++static unsigned long wakeup_pid;
++#endif
++
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++static DEFINE_PER_CPU(struct hist_data, missed_timer_offsets);
++static char *missed_timer_offsets_dir = "missed_timer_offsets";
++static notrace void probe_hrtimer_interrupt(void *v, int cpu,
++    long long offset, struct task_struct *curr, struct task_struct *task);
++static struct enable_data missed_timer_offsets_enabled_data = {
++	.latency_type = MISSED_TIMER_OFFSETS,
++	.enabled = 0,
++};
++static DEFINE_PER_CPU(struct maxlatproc_data, missed_timer_offsets_maxlatproc);
++static unsigned long missed_timer_offsets_pid;
++#endif
++
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++static DEFINE_PER_CPU(struct hist_data, timerandwakeup_latency_hist);
++static char *timerandwakeup_latency_hist_dir = "timerandwakeup";
++static struct enable_data timerandwakeup_enabled_data = {
++	.latency_type = TIMERANDWAKEUP_LATENCY,
++	.enabled = 0,
++};
++static DEFINE_PER_CPU(struct maxlatproc_data, timerandwakeup_maxlatproc);
++#endif
++
++void notrace latency_hist(int latency_type, int cpu, unsigned long latency,
++			  unsigned long timeroffset, cycle_t stop,
++			  struct task_struct *p)
++{
++	struct hist_data *my_hist;
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++	struct maxlatproc_data *mp = NULL;
++#endif
++
++	if (cpu < 0 || cpu >= NR_CPUS || latency_type < 0 ||
++	    latency_type >= MAX_LATENCY_TYPE)
++		return;
++
++	switch (latency_type) {
++#ifdef CONFIG_INTERRUPT_OFF_HIST
++	case IRQSOFF_LATENCY:
++		my_hist = &per_cpu(irqsoff_hist, cpu);
++		break;
++#endif
++#ifdef CONFIG_PREEMPT_OFF_HIST
++	case PREEMPTOFF_LATENCY:
++		my_hist = &per_cpu(preemptoff_hist, cpu);
++		break;
++#endif
++#if defined(CONFIG_PREEMPT_OFF_HIST) && defined(CONFIG_INTERRUPT_OFF_HIST)
++	case PREEMPTIRQSOFF_LATENCY:
++		my_hist = &per_cpu(preemptirqsoff_hist, cpu);
++		break;
++#endif
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++	case WAKEUP_LATENCY:
++		my_hist = &per_cpu(wakeup_latency_hist, cpu);
++		mp = &per_cpu(wakeup_maxlatproc, cpu);
++		break;
++	case WAKEUP_LATENCY_SHAREDPRIO:
++		my_hist = &per_cpu(wakeup_latency_hist_sharedprio, cpu);
++		mp = &per_cpu(wakeup_maxlatproc_sharedprio, cpu);
++		break;
++#endif
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++	case MISSED_TIMER_OFFSETS:
++		my_hist = &per_cpu(missed_timer_offsets, cpu);
++		mp = &per_cpu(missed_timer_offsets_maxlatproc, cpu);
++		break;
++#endif
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++	case TIMERANDWAKEUP_LATENCY:
++		my_hist = &per_cpu(timerandwakeup_latency_hist, cpu);
++		mp = &per_cpu(timerandwakeup_maxlatproc, cpu);
++		break;
++#endif
++
++	default:
++		return;
++	}
++
++	latency += my_hist->offset;
++
++	if (atomic_read(&my_hist->hist_mode) == 0)
++		return;
++
++	if (latency < 0 || latency >= MAX_ENTRY_NUM) {
++		if (latency < 0)
++			my_hist->below_hist_bound_samples++;
++		else
++			my_hist->above_hist_bound_samples++;
++	} else
++		my_hist->hist_array[latency]++;
++
++	if (unlikely(latency > my_hist->max_lat ||
++	    my_hist->min_lat == ULONG_MAX)) {
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++		if (latency_type == WAKEUP_LATENCY ||
++		    latency_type == WAKEUP_LATENCY_SHAREDPRIO ||
++		    latency_type == MISSED_TIMER_OFFSETS ||
++		    latency_type == TIMERANDWAKEUP_LATENCY) {
++			strncpy(mp->comm, p->comm, sizeof(mp->comm));
++			strncpy(mp->current_comm, current->comm,
++			    sizeof(mp->current_comm));
++			mp->pid = task_pid_nr(p);
++			mp->current_pid = task_pid_nr(current);
++			mp->prio = p->prio;
++			mp->current_prio = current->prio;
++			mp->latency = latency;
++			mp->timeroffset = timeroffset;
++			mp->timestamp = stop;
++		}
++#endif
++		my_hist->max_lat = latency;
++	}
++	if (unlikely(latency < my_hist->min_lat))
++		my_hist->min_lat = latency;
++	my_hist->total_samples++;
++	my_hist->accumulate_lat += latency;
++}
++
++static void *l_start(struct seq_file *m, loff_t *pos)
++{
++	loff_t *index_ptr = NULL;
++	loff_t index = *pos;
++	struct hist_data *my_hist = m->private;
++
++	if (index == 0) {
++		char minstr[32], avgstr[32], maxstr[32];
++
++		atomic_dec(&my_hist->hist_mode);
++
++		if (likely(my_hist->total_samples)) {
++			unsigned long avg = (unsigned long)
++			    div64_u64(my_hist->accumulate_lat,
++			    my_hist->total_samples);
++			snprintf(minstr, sizeof(minstr), "%ld",
++			    (long) my_hist->min_lat - my_hist->offset);
++			snprintf(avgstr, sizeof(avgstr), "%ld",
++			    (long) avg - my_hist->offset);
++			snprintf(maxstr, sizeof(maxstr), "%ld",
++			    (long) my_hist->max_lat - my_hist->offset);
++		} else {
++			strcpy(minstr, "<undef>");
++			strcpy(avgstr, minstr);
++			strcpy(maxstr, minstr);
++		}
++
++		seq_printf(m, "#Minimum latency: %s microseconds\n"
++			   "#Average latency: %s microseconds\n"
++			   "#Maximum latency: %s microseconds\n"
++			   "#Total samples: %llu\n"
++			   "#There are %llu samples lower than %ld"
++			   " microseconds.\n"
++			   "#There are %llu samples greater or equal"
++			   " than %ld microseconds.\n"
++			   "#usecs\t%16s\n",
++			   minstr, avgstr, maxstr,
++			   my_hist->total_samples,
++			   my_hist->below_hist_bound_samples,
++			   -my_hist->offset,
++			   my_hist->above_hist_bound_samples,
++			   MAX_ENTRY_NUM - my_hist->offset,
++			   "samples");
++	}
++	if (index < MAX_ENTRY_NUM) {
++		index_ptr = kmalloc(sizeof(loff_t), GFP_KERNEL);
++		if (index_ptr)
++			*index_ptr = index;
++	}
++
++	return index_ptr;
++}
++
++static void *l_next(struct seq_file *m, void *p, loff_t *pos)
++{
++	loff_t *index_ptr = p;
++	struct hist_data *my_hist = m->private;
++
++	if (++*pos >= MAX_ENTRY_NUM) {
++		atomic_inc(&my_hist->hist_mode);
++		return NULL;
++	}
++	*index_ptr = *pos;
++	return index_ptr;
++}
++
++static void l_stop(struct seq_file *m, void *p)
++{
++	kfree(p);
++}
++
++static int l_show(struct seq_file *m, void *p)
++{
++	int index = *(loff_t *) p;
++	struct hist_data *my_hist = m->private;
++
++	seq_printf(m, "%6ld\t%16llu\n", index - my_hist->offset,
++	    my_hist->hist_array[index]);
++	return 0;
++}
++
++static struct seq_operations latency_hist_seq_op = {
++	.start = l_start,
++	.next  = l_next,
++	.stop  = l_stop,
++	.show  = l_show
++};
++
++static int latency_hist_open(struct inode *inode, struct file *file)
++{
++	int ret;
++
++	ret = seq_open(file, &latency_hist_seq_op);
++	if (!ret) {
++		struct seq_file *seq = file->private_data;
++		seq->private = inode->i_private;
++	}
++	return ret;
++}
++
++static struct file_operations latency_hist_fops = {
++	.open = latency_hist_open,
++	.read = seq_read,
++	.llseek = seq_lseek,
++	.release = seq_release,
++};
++
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++static void clear_maxlatprocdata(struct maxlatproc_data *mp)
++{
++	mp->comm[0] = mp->current_comm[0] = '\0';
++	mp->prio = mp->current_prio = mp->pid = mp->current_pid =
++	    mp->latency = mp->timeroffset = -1;
++	mp->timestamp = 0;
++}
++#endif
++
++static void hist_reset(struct hist_data *hist)
++{
++	atomic_dec(&hist->hist_mode);
++
++	memset(hist->hist_array, 0, sizeof(hist->hist_array));
++	hist->below_hist_bound_samples = 0ULL;
++	hist->above_hist_bound_samples = 0ULL;
++	hist->min_lat = ULONG_MAX;
++	hist->max_lat = 0UL;
++	hist->total_samples = 0ULL;
++	hist->accumulate_lat = 0ULL;
++
++	atomic_inc(&hist->hist_mode);
++}
++
++static ssize_t
++latency_hist_reset(struct file *file, const char __user *a,
++		   size_t size, loff_t *off)
++{
++	int cpu;
++	struct hist_data *hist = NULL;
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++	struct maxlatproc_data *mp = NULL;
++#endif
++	off_t latency_type = (off_t) file->private_data;
++
++	for_each_online_cpu(cpu) {
++
++		switch (latency_type) {
++#ifdef CONFIG_PREEMPT_OFF_HIST
++		case PREEMPTOFF_LATENCY:
++			hist = &per_cpu(preemptoff_hist, cpu);
++			break;
++#endif
++#ifdef CONFIG_INTERRUPT_OFF_HIST
++		case IRQSOFF_LATENCY:
++			hist = &per_cpu(irqsoff_hist, cpu);
++			break;
++#endif
++#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
++		case PREEMPTIRQSOFF_LATENCY:
++			hist = &per_cpu(preemptirqsoff_hist, cpu);
++			break;
++#endif
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++		case WAKEUP_LATENCY:
++			hist = &per_cpu(wakeup_latency_hist, cpu);
++			mp = &per_cpu(wakeup_maxlatproc, cpu);
++			break;
++		case WAKEUP_LATENCY_SHAREDPRIO:
++			hist = &per_cpu(wakeup_latency_hist_sharedprio, cpu);
++			mp = &per_cpu(wakeup_maxlatproc_sharedprio, cpu);
++			break;
++#endif
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++		case MISSED_TIMER_OFFSETS:
++			hist = &per_cpu(missed_timer_offsets, cpu);
++			mp = &per_cpu(missed_timer_offsets_maxlatproc, cpu);
++			break;
++#endif
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++		case TIMERANDWAKEUP_LATENCY:
++			hist = &per_cpu(timerandwakeup_latency_hist, cpu);
++			mp = &per_cpu(timerandwakeup_maxlatproc, cpu);
++			break;
++#endif
++		}
++
++		hist_reset(hist);
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++		if (latency_type == WAKEUP_LATENCY ||
++		    latency_type == WAKEUP_LATENCY_SHAREDPRIO ||
++		    latency_type == MISSED_TIMER_OFFSETS ||
++		    latency_type == TIMERANDWAKEUP_LATENCY)
++			clear_maxlatprocdata(mp);
++#endif
++	}
++
++	return size;
++}
++
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++static ssize_t
++show_pid(struct file *file, char __user *ubuf, size_t cnt, loff_t *ppos)
++{
++	char buf[64];
++	int r;
++	unsigned long *this_pid = file->private_data;
++
++	r = snprintf(buf, sizeof(buf), "%lu\n", *this_pid);
++	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
++}
++
++static ssize_t do_pid(struct file *file, const char __user *ubuf,
++		      size_t cnt, loff_t *ppos)
++{
++	char buf[64];
++	unsigned long pid;
++	unsigned long *this_pid = file->private_data;
++
++	if (cnt >= sizeof(buf))
++		return -EINVAL;
++
++	if (copy_from_user(&buf, ubuf, cnt))
++		return -EFAULT;
++
++	buf[cnt] = '\0';
++
++	if (strict_strtoul(buf, 10, &pid))
++		return(-EINVAL);
++
++	*this_pid = pid;
++
++	return cnt;
++}
++#endif
++
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++static ssize_t
++show_maxlatproc(struct file *file, char __user *ubuf, size_t cnt, loff_t *ppos)
++{
++	int r;
++	struct maxlatproc_data *mp = file->private_data;
++	int strmaxlen = (TASK_COMM_LEN * 2) + (8 * 8);
++	unsigned long long t;
++	unsigned long usecs, secs;
++	char *buf;
++
++	if (mp->pid == -1 || mp->current_pid == -1) {
++		buf = "(none)\n";
++		return simple_read_from_buffer(ubuf, cnt, ppos, buf,
++		    strlen(buf));
++	}
++
++	buf = kmalloc(strmaxlen, GFP_KERNEL);
++	if (buf == NULL)
++		return -ENOMEM;
++
++	t = ns2usecs(mp->timestamp);
++	usecs = do_div(t, USEC_PER_SEC);
++	secs = (unsigned long) t;
++	r = snprintf(buf, strmaxlen,
++	    "%d %d %ld (%ld) %s <- %d %d %s %lu.%06lu\n", mp->pid,
++	    MAX_RT_PRIO-1 - mp->prio, mp->latency, mp->timeroffset, mp->comm,
++	    mp->current_pid, MAX_RT_PRIO-1 - mp->current_prio, mp->current_comm,
++	    secs, usecs);
++	r = simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
++	kfree(buf);
++	return r;
++}
++#endif
++
++static ssize_t
++show_enable(struct file *file, char __user *ubuf, size_t cnt, loff_t *ppos)
++{
++	char buf[64];
++	struct enable_data *ed = file->private_data;
++	int r;
++
++	r = snprintf(buf, sizeof(buf), "%d\n", ed->enabled);
++	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
++}
++
++static ssize_t
++do_enable(struct file *file, const char __user *ubuf, size_t cnt, loff_t *ppos)
++{
++	char buf[64];
++	long enable;
++	struct enable_data *ed = file->private_data;
++
++	if (cnt >= sizeof(buf))
++		return -EINVAL;
++
++	if (copy_from_user(&buf, ubuf, cnt))
++		return -EFAULT;
++
++	buf[cnt] = 0;
++
++	if (strict_strtol(buf, 10, &enable))
++		return(-EINVAL);
++
++	if ((enable && ed->enabled) || (!enable && !ed->enabled))
++		return cnt;
++
++	if (enable) {
++		int ret;
++
++		switch (ed->latency_type) {
++#if defined(CONFIG_INTERRUPT_OFF_HIST) || defined(CONFIG_PREEMPT_OFF_HIST)
++		case PREEMPTIRQSOFF_LATENCY:
++			ret = register_trace_preemptirqsoff_hist(
++			    probe_preemptirqsoff_hist, NULL);
++			if (ret) {
++				pr_info("wakeup trace: Couldn't assign "
++				    "probe_preemptirqsoff_hist "
++				    "to trace_preemptirqsoff_hist\n");
++				return ret;
++			}
++			break;
++#endif
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++		case WAKEUP_LATENCY:
++			ret = register_trace_sched_wakeup(
++			    probe_wakeup_latency_hist_start, NULL);
++			if (ret) {
++				pr_info("wakeup trace: Couldn't assign "
++				    "probe_wakeup_latency_hist_start "
++				    "to trace_sched_wakeup\n");
++				return ret;
++			}
++			ret = register_trace_sched_wakeup_new(
++			    probe_wakeup_latency_hist_start, NULL);
++			if (ret) {
++				pr_info("wakeup trace: Couldn't assign "
++				    "probe_wakeup_latency_hist_start "
++				    "to trace_sched_wakeup_new\n");
++				unregister_trace_sched_wakeup(
++				    probe_wakeup_latency_hist_start, NULL);
++				return ret;
++			}
++			ret = register_trace_sched_switch(
++			    probe_wakeup_latency_hist_stop, NULL);
++			if (ret) {
++				pr_info("wakeup trace: Couldn't assign "
++				    "probe_wakeup_latency_hist_stop "
++				    "to trace_sched_switch\n");
++				unregister_trace_sched_wakeup(
++				    probe_wakeup_latency_hist_start, NULL);
++				unregister_trace_sched_wakeup_new(
++				    probe_wakeup_latency_hist_start, NULL);
++				return ret;
++			}
++			ret = register_trace_sched_migrate_task(
++			    probe_sched_migrate_task, NULL);
++			if (ret) {
++				pr_info("wakeup trace: Couldn't assign "
++				    "probe_sched_migrate_task "
++				    "to trace_sched_migrate_task\n");
++				unregister_trace_sched_wakeup(
++				    probe_wakeup_latency_hist_start, NULL);
++				unregister_trace_sched_wakeup_new(
++				    probe_wakeup_latency_hist_start, NULL);
++				unregister_trace_sched_switch(
++				    probe_wakeup_latency_hist_stop, NULL);
++				return ret;
++			}
++			break;
++#endif
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++		case MISSED_TIMER_OFFSETS:
++			ret = register_trace_hrtimer_interrupt(
++			    probe_hrtimer_interrupt, NULL);
++			if (ret) {
++				pr_info("wakeup trace: Couldn't assign "
++				    "probe_hrtimer_interrupt "
++				    "to trace_hrtimer_interrupt\n");
++				return ret;
++			}
++			break;
++#endif
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++		case TIMERANDWAKEUP_LATENCY:
++			if (!wakeup_latency_enabled_data.enabled ||
++			    !missed_timer_offsets_enabled_data.enabled)
++				return -EINVAL;
++			break;
++#endif
++		default:
++			break;
++		}
++	} else {
++		switch (ed->latency_type) {
++#if defined(CONFIG_INTERRUPT_OFF_HIST) || defined(CONFIG_PREEMPT_OFF_HIST)
++		case PREEMPTIRQSOFF_LATENCY:
++			{
++				int cpu;
++
++				unregister_trace_preemptirqsoff_hist(
++				    probe_preemptirqsoff_hist, NULL);
++				for_each_online_cpu(cpu) {
++#ifdef CONFIG_INTERRUPT_OFF_HIST
++					per_cpu(hist_irqsoff_counting,
++					    cpu) = 0;
++#endif
++#ifdef CONFIG_PREEMPT_OFF_HIST
++					per_cpu(hist_preemptoff_counting,
++					    cpu) = 0;
++#endif
++#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
++					per_cpu(hist_preemptirqsoff_counting,
++					    cpu) = 0;
++#endif
++				}
++			}
++			break;
++#endif
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++		case WAKEUP_LATENCY:
++			{
++				int cpu;
++
++				unregister_trace_sched_wakeup(
++				    probe_wakeup_latency_hist_start, NULL);
++				unregister_trace_sched_wakeup_new(
++				    probe_wakeup_latency_hist_start, NULL);
++				unregister_trace_sched_switch(
++				    probe_wakeup_latency_hist_stop, NULL);
++				unregister_trace_sched_migrate_task(
++				    probe_sched_migrate_task, NULL);
++
++				for_each_online_cpu(cpu) {
++					per_cpu(wakeup_task, cpu) = NULL;
++					per_cpu(wakeup_sharedprio, cpu) = 0;
++				}
++			}
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++			timerandwakeup_enabled_data.enabled = 0;
++#endif
++			break;
++#endif
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++		case MISSED_TIMER_OFFSETS:
++			unregister_trace_hrtimer_interrupt(
++			    probe_hrtimer_interrupt, NULL);
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++			timerandwakeup_enabled_data.enabled = 0;
++#endif
++			break;
++#endif
++		default:
++			break;
++		}
++	}
++	ed->enabled = enable;
++	return cnt;
++}
++
++static const struct file_operations latency_hist_reset_fops = {
++	.open = tracing_open_generic,
++	.write = latency_hist_reset,
++};
++
++static const struct file_operations enable_fops = {
++	.open = tracing_open_generic,
++	.read = show_enable,
++	.write = do_enable,
++};
++
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++static const struct file_operations pid_fops = {
++	.open = tracing_open_generic,
++	.read = show_pid,
++	.write = do_pid,
++};
++
++static const struct file_operations maxlatproc_fops = {
++	.open = tracing_open_generic,
++	.read = show_maxlatproc,
++};
++#endif
++
++#if defined(CONFIG_INTERRUPT_OFF_HIST) || defined(CONFIG_PREEMPT_OFF_HIST)
++static notrace void probe_preemptirqsoff_hist(void *v, int reason,
++    int starthist)
++{
++	int cpu = raw_smp_processor_id();
++	int time_set = 0;
++
++	if (starthist) {
++		cycle_t uninitialized_var(start);
++
++		if (!preempt_count() && !irqs_disabled())
++			return;
++
++#ifdef CONFIG_INTERRUPT_OFF_HIST
++		if ((reason == IRQS_OFF || reason == TRACE_START) &&
++		    !per_cpu(hist_irqsoff_counting, cpu)) {
++			per_cpu(hist_irqsoff_counting, cpu) = 1;
++			start = ftrace_now(cpu);
++			time_set++;
++			per_cpu(hist_irqsoff_start, cpu) = start;
++		}
++#endif
++
++#ifdef CONFIG_PREEMPT_OFF_HIST
++		if ((reason == PREEMPT_OFF || reason == TRACE_START) &&
++		    !per_cpu(hist_preemptoff_counting, cpu)) {
++			per_cpu(hist_preemptoff_counting, cpu) = 1;
++			if (!(time_set++))
++				start = ftrace_now(cpu);
++			per_cpu(hist_preemptoff_start, cpu) = start;
++		}
++#endif
++
++#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
++		if (per_cpu(hist_irqsoff_counting, cpu) &&
++		    per_cpu(hist_preemptoff_counting, cpu) &&
++		    !per_cpu(hist_preemptirqsoff_counting, cpu)) {
++			per_cpu(hist_preemptirqsoff_counting, cpu) = 1;
++			if (!time_set)
++				start = ftrace_now(cpu);
++			per_cpu(hist_preemptirqsoff_start, cpu) = start;
++		}
++#endif
++	} else {
++		cycle_t uninitialized_var(stop);
++
++#ifdef CONFIG_INTERRUPT_OFF_HIST
++		if ((reason == IRQS_ON || reason == TRACE_STOP) &&
++		    per_cpu(hist_irqsoff_counting, cpu)) {
++			cycle_t start = per_cpu(hist_irqsoff_start, cpu);
++
++			stop = ftrace_now(cpu);
++			time_set++;
++			if (start && stop >= start) {
++				unsigned long latency =
++				    nsecs_to_usecs(stop - start);
++
++				latency_hist(IRQSOFF_LATENCY, cpu, latency, 0,
++				    stop, NULL);
++			}
++			per_cpu(hist_irqsoff_counting, cpu) = 0;
++		}
++#endif
++
++#ifdef CONFIG_PREEMPT_OFF_HIST
++		if ((reason == PREEMPT_ON || reason == TRACE_STOP) &&
++		    per_cpu(hist_preemptoff_counting, cpu)) {
++			cycle_t start = per_cpu(hist_preemptoff_start, cpu);
++
++			if (!(time_set++))
++				stop = ftrace_now(cpu);
++			if (start && stop >= start) {
++				unsigned long latency =
++				    nsecs_to_usecs(stop - start);
++
++				latency_hist(PREEMPTOFF_LATENCY, cpu, latency,
++				    0, stop, NULL);
++			}
++			per_cpu(hist_preemptoff_counting, cpu) = 0;
++		}
++#endif
++
++#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
++		if ((!per_cpu(hist_irqsoff_counting, cpu) ||
++		     !per_cpu(hist_preemptoff_counting, cpu)) &&
++		   per_cpu(hist_preemptirqsoff_counting, cpu)) {
++			cycle_t start = per_cpu(hist_preemptirqsoff_start, cpu);
++
++			if (!time_set)
++				stop = ftrace_now(cpu);
++			if (start && stop >= start) {
++				unsigned long latency =
++				    nsecs_to_usecs(stop - start);
++				latency_hist(PREEMPTIRQSOFF_LATENCY, cpu,
++				    latency, 0, stop, NULL);
++			}
++			per_cpu(hist_preemptirqsoff_counting, cpu) = 0;
++		}
++#endif
++	}
++}
++#endif
++
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++static DEFINE_RAW_SPINLOCK(wakeup_lock);
++static notrace void probe_sched_migrate_task(void *v, struct task_struct *task,
++    int cpu)
++{
++	int old_cpu = task_cpu(task);
++
++	if (cpu != old_cpu) {
++		unsigned long flags;
++		struct task_struct *cpu_wakeup_task;
++
++		raw_spin_lock_irqsave(&wakeup_lock, flags);
++
++		cpu_wakeup_task = per_cpu(wakeup_task, old_cpu);
++		if (task == cpu_wakeup_task) {
++			put_task_struct(cpu_wakeup_task);
++			per_cpu(wakeup_task, old_cpu) = NULL;
++			cpu_wakeup_task = per_cpu(wakeup_task, cpu) = task;
++			get_task_struct(cpu_wakeup_task);
++		}
++
++		raw_spin_unlock_irqrestore(&wakeup_lock, flags);
++	}
++}
++
++static notrace void probe_wakeup_latency_hist_start(void *v,
++    struct task_struct *p, int success)
++{
++	unsigned long flags;
++	struct task_struct *curr = current;
++	int cpu = task_cpu(p);
++	struct task_struct *cpu_wakeup_task;
++
++	raw_spin_lock_irqsave(&wakeup_lock, flags);
++
++	cpu_wakeup_task = per_cpu(wakeup_task, cpu);
++
++	if (wakeup_pid) {
++		if ((cpu_wakeup_task && p->prio == cpu_wakeup_task->prio) ||
++		    p->prio == curr->prio)
++			per_cpu(wakeup_sharedprio, cpu) = 1;
++		if (likely(wakeup_pid != task_pid_nr(p)))
++			goto out;
++	} else {
++		if (likely(!rt_task(p)) ||
++		    (cpu_wakeup_task && p->prio > cpu_wakeup_task->prio) ||
++		    p->prio > curr->prio)
++			goto out;
++		if ((cpu_wakeup_task && p->prio == cpu_wakeup_task->prio) ||
++		    p->prio == curr->prio)
++			per_cpu(wakeup_sharedprio, cpu) = 1;
++	}
++
++	if (cpu_wakeup_task)
++		put_task_struct(cpu_wakeup_task);
++	cpu_wakeup_task = per_cpu(wakeup_task, cpu) = p;
++	get_task_struct(cpu_wakeup_task);
++	cpu_wakeup_task->preempt_timestamp_hist =
++		ftrace_now(raw_smp_processor_id());
++out:
++	raw_spin_unlock_irqrestore(&wakeup_lock, flags);
++}
++
++static notrace void probe_wakeup_latency_hist_stop(void *v,
++    struct task_struct *prev, struct task_struct *next)
++{
++	unsigned long flags;
++	int cpu = task_cpu(next);
++	unsigned long latency;
++	cycle_t stop;
++	struct task_struct *cpu_wakeup_task;
++
++	raw_spin_lock_irqsave(&wakeup_lock, flags);
++
++	cpu_wakeup_task = per_cpu(wakeup_task, cpu);
++
++	if (cpu_wakeup_task == NULL)
++		goto out;
++
++	/* Already running? */
++	if (unlikely(current == cpu_wakeup_task))
++		goto out_reset;
++
++	if (next != cpu_wakeup_task) {
++		if (next->prio < cpu_wakeup_task->prio)
++			goto out_reset;
++
++		if (next->prio == cpu_wakeup_task->prio)
++			per_cpu(wakeup_sharedprio, cpu) = 1;
++
++		goto out;
++	}
++
++	/*
++	 * The task we are waiting for is about to be switched to.
++	 * Calculate latency and store it in histogram.
++	 */
++	stop = ftrace_now(raw_smp_processor_id());
++
++	latency = nsecs_to_usecs(stop - next->preempt_timestamp_hist);
++
++	if (per_cpu(wakeup_sharedprio, cpu)) {
++		latency_hist(WAKEUP_LATENCY_SHAREDPRIO, cpu, latency, 0, stop,
++		    next);
++		per_cpu(wakeup_sharedprio, cpu) = 0;
++	} else {
++		latency_hist(WAKEUP_LATENCY, cpu, latency, 0, stop, next);
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++		if (timerandwakeup_enabled_data.enabled) {
++			latency_hist(TIMERANDWAKEUP_LATENCY, cpu,
++			    next->timer_offset + latency, next->timer_offset,
++			    stop, next);
++		}
++#endif
++	}
++
++out_reset:
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++	next->timer_offset = 0;
++#endif
++	put_task_struct(cpu_wakeup_task);
++	per_cpu(wakeup_task, cpu) = NULL;
++out:
++	raw_spin_unlock_irqrestore(&wakeup_lock, flags);
++}
++#endif
++
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++static notrace void probe_hrtimer_interrupt(void *v, int cpu,
++    long long latency_ns, struct task_struct *curr, struct task_struct *task)
++{
++	if (latency_ns <= 0 && task != NULL && rt_task(task) &&
++	    (task->prio < curr->prio ||
++	    (task->prio == curr->prio &&
++	    !cpumask_test_cpu(cpu, &task->cpus_allowed)))) {
++		unsigned long latency;
++		cycle_t now;
++
++		if (missed_timer_offsets_pid) {
++			if (likely(missed_timer_offsets_pid !=
++			    task_pid_nr(task)))
++				return;
++		}
++
++		now = ftrace_now(cpu);
++		latency = (unsigned long) div_s64(-latency_ns, 1000);
++		latency_hist(MISSED_TIMER_OFFSETS, cpu, latency, latency, now,
++		    task);
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++		task->timer_offset = latency;
++#endif
++	}
++}
++#endif
++
++static __init int latency_hist_init(void)
++{
++	struct dentry *latency_hist_root = NULL;
++	struct dentry *dentry;
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++	struct dentry *dentry_sharedprio;
++#endif
++	struct dentry *entry;
++	struct dentry *enable_root;
++	int i = 0;
++	struct hist_data *my_hist;
++	char name[64];
++	char *cpufmt = "CPU%d";
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++	char *cpufmt_maxlatproc = "max_latency-CPU%d";
++	struct maxlatproc_data *mp = NULL;
++#endif
++
++	dentry = tracing_init_dentry();
++	latency_hist_root = debugfs_create_dir(latency_hist_dir_root, dentry);
++	enable_root = debugfs_create_dir("enable", latency_hist_root);
++
++#ifdef CONFIG_INTERRUPT_OFF_HIST
++	dentry = debugfs_create_dir(irqsoff_hist_dir, latency_hist_root);
++	for_each_possible_cpu(i) {
++		sprintf(name, cpufmt, i);
++		entry = debugfs_create_file(name, 0444, dentry,
++		    &per_cpu(irqsoff_hist, i), &latency_hist_fops);
++		my_hist = &per_cpu(irqsoff_hist, i);
++		atomic_set(&my_hist->hist_mode, 1);
++		my_hist->min_lat = 0xFFFFFFFFUL;
++	}
++	entry = debugfs_create_file("reset", 0644, dentry,
++	    (void *)IRQSOFF_LATENCY, &latency_hist_reset_fops);
++#endif
++
++#ifdef CONFIG_PREEMPT_OFF_HIST
++	dentry = debugfs_create_dir(preemptoff_hist_dir,
++	    latency_hist_root);
++	for_each_possible_cpu(i) {
++		sprintf(name, cpufmt, i);
++		entry = debugfs_create_file(name, 0444, dentry,
++		    &per_cpu(preemptoff_hist, i), &latency_hist_fops);
++		my_hist = &per_cpu(preemptoff_hist, i);
++		atomic_set(&my_hist->hist_mode, 1);
++		my_hist->min_lat = 0xFFFFFFFFUL;
++	}
++	entry = debugfs_create_file("reset", 0644, dentry,
++	    (void *)PREEMPTOFF_LATENCY, &latency_hist_reset_fops);
++#endif
++
++#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
++	dentry = debugfs_create_dir(preemptirqsoff_hist_dir,
++	    latency_hist_root);
++	for_each_possible_cpu(i) {
++		sprintf(name, cpufmt, i);
++		entry = debugfs_create_file(name, 0444, dentry,
++		    &per_cpu(preemptirqsoff_hist, i), &latency_hist_fops);
++		my_hist = &per_cpu(preemptirqsoff_hist, i);
++		atomic_set(&my_hist->hist_mode, 1);
++		my_hist->min_lat = 0xFFFFFFFFUL;
++	}
++	entry = debugfs_create_file("reset", 0644, dentry,
++	    (void *)PREEMPTIRQSOFF_LATENCY, &latency_hist_reset_fops);
++#endif
++
++#if defined(CONFIG_INTERRUPT_OFF_HIST) || defined(CONFIG_PREEMPT_OFF_HIST)
++	entry = debugfs_create_file("preemptirqsoff", 0644,
++	    enable_root, (void *)&preemptirqsoff_enabled_data,
++	    &enable_fops);
++#endif
++
++#ifdef CONFIG_WAKEUP_LATENCY_HIST
++	dentry = debugfs_create_dir(wakeup_latency_hist_dir,
++	    latency_hist_root);
++	dentry_sharedprio = debugfs_create_dir(
++	    wakeup_latency_hist_dir_sharedprio, dentry);
++	for_each_possible_cpu(i) {
++		sprintf(name, cpufmt, i);
++
++		entry = debugfs_create_file(name, 0444, dentry,
++		    &per_cpu(wakeup_latency_hist, i),
++		    &latency_hist_fops);
++		my_hist = &per_cpu(wakeup_latency_hist, i);
++		atomic_set(&my_hist->hist_mode, 1);
++		my_hist->min_lat = 0xFFFFFFFFUL;
++
++		entry = debugfs_create_file(name, 0444, dentry_sharedprio,
++		    &per_cpu(wakeup_latency_hist_sharedprio, i),
++		    &latency_hist_fops);
++		my_hist = &per_cpu(wakeup_latency_hist_sharedprio, i);
++		atomic_set(&my_hist->hist_mode, 1);
++		my_hist->min_lat = 0xFFFFFFFFUL;
++
++		sprintf(name, cpufmt_maxlatproc, i);
++
++		mp = &per_cpu(wakeup_maxlatproc, i);
++		entry = debugfs_create_file(name, 0444, dentry, mp,
++		    &maxlatproc_fops);
++		clear_maxlatprocdata(mp);
++
++		mp = &per_cpu(wakeup_maxlatproc_sharedprio, i);
++		entry = debugfs_create_file(name, 0444, dentry_sharedprio, mp,
++		    &maxlatproc_fops);
++		clear_maxlatprocdata(mp);
++	}
++	entry = debugfs_create_file("pid", 0644, dentry,
++	    (void *)&wakeup_pid, &pid_fops);
++	entry = debugfs_create_file("reset", 0644, dentry,
++	    (void *)WAKEUP_LATENCY, &latency_hist_reset_fops);
++	entry = debugfs_create_file("reset", 0644, dentry_sharedprio,
++	    (void *)WAKEUP_LATENCY_SHAREDPRIO, &latency_hist_reset_fops);
++	entry = debugfs_create_file("wakeup", 0644,
++	    enable_root, (void *)&wakeup_latency_enabled_data,
++	    &enable_fops);
++#endif
++
++#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
++	dentry = debugfs_create_dir(missed_timer_offsets_dir,
++	    latency_hist_root);
++	for_each_possible_cpu(i) {
++		sprintf(name, cpufmt, i);
++		entry = debugfs_create_file(name, 0444, dentry,
++		    &per_cpu(missed_timer_offsets, i), &latency_hist_fops);
++		my_hist = &per_cpu(missed_timer_offsets, i);
++		atomic_set(&my_hist->hist_mode, 1);
++		my_hist->min_lat = 0xFFFFFFFFUL;
++
++		sprintf(name, cpufmt_maxlatproc, i);
++		mp = &per_cpu(missed_timer_offsets_maxlatproc, i);
++		entry = debugfs_create_file(name, 0444, dentry, mp,
++		    &maxlatproc_fops);
++		clear_maxlatprocdata(mp);
++	}
++	entry = debugfs_create_file("pid", 0644, dentry,
++	    (void *)&missed_timer_offsets_pid, &pid_fops);
++	entry = debugfs_create_file("reset", 0644, dentry,
++	    (void *)MISSED_TIMER_OFFSETS, &latency_hist_reset_fops);
++	entry = debugfs_create_file("missed_timer_offsets", 0644,
++	    enable_root, (void *)&missed_timer_offsets_enabled_data,
++	    &enable_fops);
++#endif
++
++#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
++    defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
++	dentry = debugfs_create_dir(timerandwakeup_latency_hist_dir,
++	    latency_hist_root);
++	for_each_possible_cpu(i) {
++		sprintf(name, cpufmt, i);
++		entry = debugfs_create_file(name, 0444, dentry,
++		    &per_cpu(timerandwakeup_latency_hist, i),
++		    &latency_hist_fops);
++		my_hist = &per_cpu(timerandwakeup_latency_hist, i);
++		atomic_set(&my_hist->hist_mode, 1);
++		my_hist->min_lat = 0xFFFFFFFFUL;
++
++		sprintf(name, cpufmt_maxlatproc, i);
++		mp = &per_cpu(timerandwakeup_maxlatproc, i);
++		entry = debugfs_create_file(name, 0444, dentry, mp,
++		    &maxlatproc_fops);
++		clear_maxlatprocdata(mp);
++	}
++	entry = debugfs_create_file("reset", 0644, dentry,
++	    (void *)TIMERANDWAKEUP_LATENCY, &latency_hist_reset_fops);
++	entry = debugfs_create_file("timerandwakeup", 0644,
++	    enable_root, (void *)&timerandwakeup_enabled_data,
++	    &enable_fops);
++#endif
++	return 0;
++}
++
++__initcall(latency_hist_init);
+diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
+index 20dad0d..1e6c33e 100644
+--- a/kernel/trace/trace_irqsoff.c
++++ b/kernel/trace/trace_irqsoff.c
+@@ -17,6 +17,7 @@
+ #include <linux/fs.h>
+ 
+ #include "trace.h"
++#include <trace/events/hist.h>
+ 
+ static struct trace_array		*irqsoff_trace __read_mostly;
+ static int				tracer_enabled __read_mostly;
+@@ -426,11 +427,13 @@ void start_critical_timings(void)
+ {
+ 	if (preempt_trace() || irq_trace())
+ 		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
++	trace_preemptirqsoff_hist(TRACE_START, 1);
+ }
+ EXPORT_SYMBOL_GPL(start_critical_timings);
+ 
+ void stop_critical_timings(void)
+ {
++	trace_preemptirqsoff_hist(TRACE_STOP, 0);
+ 	if (preempt_trace() || irq_trace())
+ 		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
+ }
+@@ -440,6 +443,7 @@ EXPORT_SYMBOL_GPL(stop_critical_timings);
+ #ifdef CONFIG_PROVE_LOCKING
+ void time_hardirqs_on(unsigned long a0, unsigned long a1)
+ {
++	trace_preemptirqsoff_hist(IRQS_ON, 0);
+ 	if (!preempt_trace() && irq_trace())
+ 		stop_critical_timing(a0, a1);
+ }
+@@ -448,6 +452,7 @@ void time_hardirqs_off(unsigned long a0, unsigned long a1)
+ {
+ 	if (!preempt_trace() && irq_trace())
+ 		start_critical_timing(a0, a1);
++	trace_preemptirqsoff_hist(IRQS_OFF, 1);
+ }
+ 
+ #else /* !CONFIG_PROVE_LOCKING */
+@@ -473,6 +478,7 @@ inline void print_irqtrace_events(struct task_struct *curr)
+  */
+ void trace_hardirqs_on(void)
+ {
++	trace_preemptirqsoff_hist(IRQS_ON, 0);
+ 	if (!preempt_trace() && irq_trace())
+ 		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
+ }
+@@ -482,11 +488,13 @@ void trace_hardirqs_off(void)
+ {
+ 	if (!preempt_trace() && irq_trace())
+ 		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
++	trace_preemptirqsoff_hist(IRQS_OFF, 1);
+ }
+ EXPORT_SYMBOL(trace_hardirqs_off);
+ 
+ void trace_hardirqs_on_caller(unsigned long caller_addr)
+ {
++	trace_preemptirqsoff_hist(IRQS_ON, 0);
+ 	if (!preempt_trace() && irq_trace())
+ 		stop_critical_timing(CALLER_ADDR0, caller_addr);
+ }
+@@ -496,6 +504,7 @@ void trace_hardirqs_off_caller(unsigned long caller_addr)
+ {
+ 	if (!preempt_trace() && irq_trace())
+ 		start_critical_timing(CALLER_ADDR0, caller_addr);
++	trace_preemptirqsoff_hist(IRQS_OFF, 1);
+ }
+ EXPORT_SYMBOL(trace_hardirqs_off_caller);
+ 
+@@ -505,12 +514,14 @@ EXPORT_SYMBOL(trace_hardirqs_off_caller);
+ #ifdef CONFIG_PREEMPT_TRACER
+ void trace_preempt_on(unsigned long a0, unsigned long a1)
+ {
++	trace_preemptirqsoff_hist(PREEMPT_ON, 0);
+ 	if (preempt_trace() && !irq_trace())
+ 		stop_critical_timing(a0, a1);
+ }
+ 
+ void trace_preempt_off(unsigned long a0, unsigned long a1)
+ {
++	trace_preemptirqsoff_hist(PREEMPT_ON, 1);
+ 	if (preempt_trace() && !irq_trace())
+ 		start_critical_timing(a0, a1);
+ }
+-- 
+1.7.10
+

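For orientation between the two patches: the histogram update that `latency_hist()` performs (the helper itself is not shown in this hunk) amounts to bucketed counting per microsecond of latency plus min/max and running-total tracking for each per-CPU `hist_data`. A rough userspace sketch of that bookkeeping — an illustration only, with hypothetical names, not the kernel data structure:

```python
class LatencyHist:
    """Userspace sketch of per-CPU latency-histogram bookkeeping."""

    def __init__(self, buckets=256):
        # One slot per microsecond; the last slot collects overflows.
        self.counts = [0] * buckets
        self.min_lat = None
        self.max_lat = 0
        self.total = 0
        self.samples = 0

    def record(self, latency_us):
        slot = min(latency_us, len(self.counts) - 1)
        self.counts[slot] += 1
        self.samples += 1
        self.total += latency_us
        if self.min_lat is None or latency_us < self.min_lat:
            self.min_lat = latency_us
        self.max_lat = max(self.max_lat, latency_us)

    def average(self):
        return self.total / self.samples if self.samples else 0.0


hist = LatencyHist()
for lat in (3, 7, 3, 500):   # 500 us overflows into the last bucket
    hist.record(lat)
```

The "reset" debugfs files in the patch correspond to zeroing exactly this state; the per-CPU read files dump the `counts` array.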
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0072-hwlatdetect.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0072-hwlatdetect.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0072-hwlatdetect.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0072-hwlatdetect.patch.patch)
@@ -0,0 +1,1380 @@
+From 24568f806f7af94794ae9601650ca10ada5092b4 Mon Sep 17 00:00:00 2001
+From: Carsten Emde <C.Emde at osadl.org>
+Date: Tue, 19 Jul 2011 13:53:12 +0100
+Subject: [PATCH 072/271] hwlatdetect.patch
+
+Jon Masters developed this wonderful SMI detector. For details please
+consult Documentation/hwlat_detector.txt. It could be ported to Linux
+3.0 RT without any major change.
+
+Signed-off-by: Carsten Emde <C.Emde at osadl.org>
+---
+ Documentation/hwlat_detector.txt |   64 ++
+ MAINTAINERS                      |    9 +
+ drivers/misc/Kconfig             |   29 +
+ drivers/misc/Makefile            |    1 +
+ drivers/misc/hwlat_detector.c    | 1212 ++++++++++++++++++++++++++++++++++++++
+ 5 files changed, 1315 insertions(+)
+ create mode 100644 Documentation/hwlat_detector.txt
+ create mode 100644 drivers/misc/hwlat_detector.c
+
+diff --git a/Documentation/hwlat_detector.txt b/Documentation/hwlat_detector.txt
+new file mode 100644
+index 0000000..cb61516
+--- /dev/null
++++ b/Documentation/hwlat_detector.txt
+@@ -0,0 +1,64 @@
++Introduction:
++-------------
++
++The module hwlat_detector is a special purpose kernel module that is used to
++detect large system latencies induced by the behavior of certain underlying
++hardware or firmware, independent of Linux itself. The code was developed
++originally to detect SMIs (System Management Interrupts) on x86 systems,
++however there is nothing x86 specific about this patchset. It was
++originally written for use by the "RT" patch since the Real Time
++kernel is highly latency sensitive.
++
++SMIs are usually not serviced by the Linux kernel, which typically does not
++even know that they are occurring. SMIs are instead set up by BIOS code
++and are serviced by BIOS code, usually for "critical" events such as
++management of thermal sensors and fans. Sometimes though, SMIs are used for
++other tasks and those tasks can spend an inordinate amount of time in the
++handler (sometimes measured in milliseconds). Obviously this is a problem if
++you are trying to keep event service latencies down in the microsecond range.
++
++The hardware latency detector works by hogging all of the CPUs for configurable
++amounts of time (by calling stop_machine()), polling the CPU Time Stamp Counter
++for some period, then looking for gaps in the TSC data. Any gap indicates a
++time when the polling was interrupted and since the machine is stopped and
++interrupts turned off the only thing that could do that would be an SMI.
++
++Note that the SMI detector should *NEVER* be used in a production environment.
++It is intended to be run manually to determine if the hardware platform has a
++problem with long system firmware service routines.
++
++Usage:
++------
++
++Loading the module hwlat_detector passing the parameter "enabled=1" (or by
++setting the "enable" entry in "hwlat_detector" debugfs toggled on) is the only
++step required to start the hwlat_detector. It is possible to redefine the
++threshold in microseconds (us) above which latency spikes will be taken
++into account (parameter "threshold=").
++
++Example:
++
++	# modprobe hwlat_detector enabled=1 threshold=100
++
++After the module is loaded, it creates a directory named "hwlat_detector" under
++the debugfs mountpoint, "/debug/hwlat_detector" for this text. It is necessary
++to have debugfs mounted, which might be on /sys/debug on your system.
++
++The /debug/hwlat_detector interface contains the following files:
++
++count			- number of latency spikes observed since last reset
++enable			- a global enable/disable toggle (0/1), resets count
++max			- maximum hardware latency actually observed (usecs)
++sample			- a pipe from which to read current raw sample data
++			  in the format <timestamp> <latency observed usecs>
++			  (can be opened O_NONBLOCK for a single sample)
++threshold		- minimum latency value to be considered (usecs)
++width			- time period to sample with CPUs held (usecs)
++			  must be less than the total window size (enforced)
++window			- total period of sampling, width being inside (usecs)
++
++By default we will set width to 500,000 and window to 1,000,000, meaning that
++we will sample every 1,000,000 usecs (1s) for 500,000 usecs (0.5s). If we
++observe any latencies that exceed the threshold (initially 100 usecs),
++then we write to a global sample ring buffer of 8K samples, which is
++consumed by reading from the "sample" (pipe) debugfs file interface.
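The detection principle the text describes — poll a clock in a tight loop and treat any gap at or above the threshold as time stolen from the CPU — can be sketched in userspace Python. This is an illustration only: the real module holds the CPUs with stop_machine() and reads the TSC, neither of which is modeled here, and the injectable `now_us` clock is a testing convenience, not part of the driver.

```python
import time


def detect_gaps(width_us, threshold_us, now_us=None):
    """Poll a clock for width_us; report gaps of at least threshold_us.

    now_us is injectable so the loop can be exercised with a fake clock.
    """
    if now_us is None:
        now_us = lambda: time.monotonic_ns() // 1000  # microseconds
    gaps = []
    start = last = now_us()
    while last - start < width_us:
        t = now_us()
        if t - last >= threshold_us:
            gaps.append((last, t - last))  # (when, observed latency in us)
        last = t
    return gaps


# Fake clock with one 120 us stall injected between ticks 10 and 130.
ticks = iter([0, 5, 10, 130, 135, 205])
gaps = detect_gaps(width_us=200, threshold_us=100, now_us=lambda: next(ticks))
```

With the fake clock above, only the 120 us jump crosses the 100 us threshold, so a single gap is reported — the same shape of data the module pushes into its sample ring buffer.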
+diff --git a/MAINTAINERS b/MAINTAINERS
+index f986e7d..b257477 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -3008,6 +3008,15 @@ L:	linuxppc-dev at lists.ozlabs.org
+ S:	Odd Fixes
+ F:	drivers/tty/hvc/
+ 
++HARDWARE LATENCY DETECTOR
++P:	Jon Masters
++M:	jcm at jonmasters.org
++W:	http://www.kernel.org/pub/linux/kernel/people/jcm/hwlat_detector/
++S:	Supported
++L:	linux-kernel at vger.kernel.org
++F:	Documentation/hwlat_detector.txt
++F:	drivers/misc/hwlat_detector.c
++
+ HARDWARE MONITORING
+ M:	Jean Delvare <khali at linux-fr.org>
+ M:	Guenter Roeck <guenter.roeck at ericsson.com>
+diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
+index f3031a4..1cb530c 100644
+--- a/drivers/misc/Kconfig
++++ b/drivers/misc/Kconfig
+@@ -140,6 +140,35 @@ config IBM_ASM
+ 	  for information on the specific driver level and support statement
+ 	  for your IBM server.
+ 
++config HWLAT_DETECTOR
++	tristate "Testing module to detect hardware-induced latencies"
++	depends on DEBUG_FS
++	depends on RING_BUFFER
++	default m
++	---help---
++	  A simple hardware latency detector. Use this module to detect
++	  large latencies introduced by the behavior of the underlying
++	  system firmware external to Linux. We do this using periodic
++	  use of stop_machine to grab all available CPUs and measure
++	  for unexplainable gaps in the CPU timestamp counter(s). By
++	  default, the module is not enabled until the "enable" file
++	  within the "hwlat_detector" debugfs directory is toggled.
++
++	  This module is often used to detect SMI (System Management
++	  Interrupts) on x86 systems, though is not x86 specific. To
++	  this end, we default to using a sample window of 1 second,
++	  during which we will sample for 0.5 seconds. If an SMI or
++	  similar event occurs during that time, it is recorded
++	  into an 8K-sample global ring buffer until retrieved.
++
++	  WARNING: This software should never be enabled (it can be built
++	  but should not be turned on after it is loaded) in a production
++	  environment where high latencies are a concern since the
++	  sampling mechanism actually introduces latencies for
++	  regular tasks while the CPU(s) are being held.
++
++	  If unsure, say N
++
+ config PHANTOM
+ 	tristate "Sensable PHANToM (PCI)"
+ 	depends on PCI
+diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
+index b26495a..84c4554 100644
+--- a/drivers/misc/Makefile
++++ b/drivers/misc/Makefile
+@@ -48,3 +48,4 @@ obj-y				+= lis3lv02d/
+ obj-y				+= carma/
+ obj-$(CONFIG_USB_SWITCH_FSA9480) += fsa9480.o
+ obj-$(CONFIG_ALTERA_STAPL)	+=altera-stapl/
++obj-$(CONFIG_HWLAT_DETECTOR)	+= hwlat_detector.o
+diff --git a/drivers/misc/hwlat_detector.c b/drivers/misc/hwlat_detector.c
+new file mode 100644
+index 0000000..b7b7c90
+--- /dev/null
++++ b/drivers/misc/hwlat_detector.c
+@@ -0,0 +1,1212 @@
++/*
++ * hwlat_detector.c - A simple Hardware Latency detector.
++ *
++ * Use this module to detect large system latencies induced by the behavior of
++ * certain underlying system hardware or firmware, independent of Linux itself.
++ * The code was developed originally to detect the presence of SMIs on Intel
++ * and AMD systems, although there is no dependency upon x86 herein.
++ *
++ * The classical example usage of this module is in detecting the presence of
++ * SMIs or System Management Interrupts on Intel and AMD systems. An SMI is a
++ * somewhat special form of hardware interrupt spawned from earlier CPU debug
++ * modes in which the (BIOS/EFI/etc.) firmware arranges for the South Bridge
++ * LPC (or other device) to generate a special interrupt under certain
++ * circumstances, for example, upon expiration of a special SMI timer device,
++ * due to certain external thermal readings, on certain I/O address accesses,
++ * and other situations. An SMI hits a special CPU pin, triggers a special
++ * SMI mode (complete with special memory map), and the OS is unaware.
++ *
++ * Although certain hardware-inducing latencies are necessary (for example,
++ * a modern system often requires an SMI handler for correct thermal control
++ * and remote management) they can wreak havoc upon any OS-level performance
++ * guarantees toward low-latency, especially when the OS is not even made
++ * aware of the presence of these interrupts. For this reason, we need a
++ * somewhat brute force mechanism to detect these interrupts. In this case,
++ * we do it by hogging all of the CPU(s) for configurable timer intervals,
++ * sampling the built-in CPU timer, looking for discontiguous readings.
++ *
++ * WARNING: This implementation necessarily introduces latencies. Therefore,
++ *          you should NEVER use this module in a production environment
++ *          requiring any kind of low-latency performance guarantee(s).
++ *
++ * Copyright (C) 2008-2009 Jon Masters, Red Hat, Inc. <jcm at redhat.com>
++ *
++ * Includes useful feedback from Clark Williams <clark at redhat.com>
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2. This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
++ */
++
++#include <linux/module.h>
++#include <linux/init.h>
++#include <linux/ring_buffer.h>
++#include <linux/stop_machine.h>
++#include <linux/time.h>
++#include <linux/hrtimer.h>
++#include <linux/kthread.h>
++#include <linux/debugfs.h>
++#include <linux/seq_file.h>
++#include <linux/uaccess.h>
++#include <linux/version.h>
++#include <linux/delay.h>
++#include <linux/slab.h>
++
++#define BUF_SIZE_DEFAULT	262144UL		/* 8K*(sizeof(entry)) */
++#define BUF_FLAGS		(RB_FL_OVERWRITE)	/* no block on full */
++#define U64STR_SIZE		22			/* 20 digits max */
++
++#define VERSION			"1.0.0"
++#define BANNER			"hwlat_detector: "
++#define DRVNAME			"hwlat_detector"
++#define DEFAULT_SAMPLE_WINDOW	1000000			/* 1s */
++#define DEFAULT_SAMPLE_WIDTH	500000			/* 0.5s */
++#define DEFAULT_LAT_THRESHOLD	10			/* 10us */
++
++/* Module metadata */
++
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Jon Masters <jcm at redhat.com>");
++MODULE_DESCRIPTION("A simple hardware latency detector");
++MODULE_VERSION(VERSION);
++
++/* Module parameters */
++
++static int debug;
++static int enabled;
++static int threshold;
++
++module_param(debug, int, 0);			/* enable debug */
++module_param(enabled, int, 0);			/* enable detector */
++module_param(threshold, int, 0);		/* latency threshold */
++
++/* Buffering and sampling */
++
++static struct ring_buffer *ring_buffer;		/* sample buffer */
++static DEFINE_MUTEX(ring_buffer_mutex);		/* lock changes */
++static unsigned long buf_size = BUF_SIZE_DEFAULT;
++static struct task_struct *kthread;		/* sampling thread */
++
++/* DebugFS filesystem entries */
++
++static struct dentry *debug_dir;		/* debugfs directory */
++static struct dentry *debug_max;		/* maximum TSC delta */
++static struct dentry *debug_count;		/* total detect count */
++static struct dentry *debug_sample_width;	/* sample width us */
++static struct dentry *debug_sample_window;	/* sample window us */
++static struct dentry *debug_sample;		/* raw samples us */
++static struct dentry *debug_threshold;		/* threshold us */
++static struct dentry *debug_enable;		/* enable/disable */
++
++/* Individual samples and global state */
++
++struct sample;					/* latency sample */
++struct data;					/* Global state */
++
++/* Sampling functions */
++static int __buffer_add_sample(struct sample *sample);
++static struct sample *buffer_get_sample(struct sample *sample);
++static int get_sample(void *unused);
++
++/* Threading and state */
++static int kthread_fn(void *unused);
++static int start_kthread(void);
++static int stop_kthread(void);
++static void __reset_stats(void);
++static int init_stats(void);
++
++/* Debugfs interface */
++static ssize_t simple_data_read(struct file *filp, char __user *ubuf,
++				size_t cnt, loff_t *ppos, const u64 *entry);
++static ssize_t simple_data_write(struct file *filp, const char __user *ubuf,
++				 size_t cnt, loff_t *ppos, u64 *entry);
++static int debug_sample_fopen(struct inode *inode, struct file *filp);
++static ssize_t debug_sample_fread(struct file *filp, char __user *ubuf,
++				  size_t cnt, loff_t *ppos);
++static int debug_sample_release(struct inode *inode, struct file *filp);
++static int debug_enable_fopen(struct inode *inode, struct file *filp);
++static ssize_t debug_enable_fread(struct file *filp, char __user *ubuf,
++				  size_t cnt, loff_t *ppos);
++static ssize_t debug_enable_fwrite(struct file *file,
++				   const char __user *user_buffer,
++				   size_t user_size, loff_t *offset);
++
++/* Initialization functions */
++static int init_debugfs(void);
++static void free_debugfs(void);
++static int detector_init(void);
++static void detector_exit(void);
++
++/* Individual latency samples are stored here when detected and packed into
++ * the ring_buffer circular buffer, where they are overwritten when
++ * more than buf_size/sizeof(sample) samples are received. */
++struct sample {
++	u64		seqnum;		/* unique sequence */
++	u64		duration;	/* ktime delta */
++	struct timespec	timestamp;	/* wall time */
++	unsigned long   lost;
++};
++
++/* keep the global state somewhere. Mostly used under stop_machine. */
++static struct data {
++
++	struct mutex lock;		/* protect changes */
++
++	u64	count;			/* total since reset */
++	u64	max_sample;		/* max hardware latency */
++	u64	threshold;		/* sample threshold level */
++
++	u64	sample_window;		/* total sampling window (on+off) */
++	u64	sample_width;		/* active sampling portion of window */
++
++	atomic_t sample_open;		/* whether the sample file is open */
++
++	wait_queue_head_t wq;		/* waitqueue for new sample values */
++
++} data;
++
++/**
++ * __buffer_add_sample - add a new latency sample recording to the ring buffer
++ * @sample: The new latency sample value
++ *
++ * This receives a new latency sample and records it in a global ring buffer.
++ * No additional locking is used in this case - suited for stop_machine use.
++ */
++static int __buffer_add_sample(struct sample *sample)
++{
++	return ring_buffer_write(ring_buffer,
++				 sizeof(struct sample), sample);
++}
++
++/**
++ * buffer_get_sample - remove a hardware latency sample from the ring buffer
++ * @sample: Pre-allocated storage for the sample
++ *
++ * This retrieves a hardware latency sample from the global circular buffer
++ */
++static struct sample *buffer_get_sample(struct sample *sample)
++{
++	struct ring_buffer_event *e = NULL;
++	struct sample *s = NULL;
++	unsigned int cpu = 0;
++
++	if (!sample)
++		return NULL;
++
++	mutex_lock(&ring_buffer_mutex);
++	for_each_online_cpu(cpu) {
++		e = ring_buffer_consume(ring_buffer, cpu, NULL, &sample->lost);
++		if (e)
++			break;
++	}
++
++	if (e) {
++		s = ring_buffer_event_data(e);
++		memcpy(sample, s, sizeof(struct sample));
++	} else
++		sample = NULL;
++	mutex_unlock(&ring_buffer_mutex);
++
++	return sample;
++}
++
++/**
++ * get_sample - sample the CPU TSC and look for likely hardware latencies
++ * @unused: This is not used but is a part of the stop_machine API
++ *
++ * Used to repeatedly capture the CPU TSC (or similar), looking for potential
++ * hardware-induced latency. Called under stop_machine, with data.lock held.
++ */
++static int get_sample(void *unused)
++{
++	ktime_t start, t1, t2;
++	s64 diff, total = 0;
++	u64 sample = 0;
++	int ret = 1;
++
++	start = ktime_get(); /* start timestamp */
++
++	do {
++
++		t1 = ktime_get();	/* we'll look for a discontinuity */
++		t2 = ktime_get();
++
++		total = ktime_to_us(ktime_sub(t2, start)); /* sample width */
++		diff = ktime_to_us(ktime_sub(t2, t1));     /* current diff */
++
++		/* This shouldn't happen */
++		if (diff < 0) {
++			printk(KERN_ERR BANNER "time running backwards\n");
++			goto out;
++		}
++
++		if (diff > sample)
++			sample = diff; /* only want highest value */
++
++	} while (total <= data.sample_width);
++
++	/* If we exceed the threshold value, we have found a hardware latency */
++	if (sample > data.threshold) {
++		struct sample s;
++
++		data.count++;
++		s.seqnum = data.count;
++		s.duration = sample;
++		s.timestamp = CURRENT_TIME;
++		__buffer_add_sample(&s);
++
++		/* Keep a running maximum ever recorded hardware latency */
++		if (sample > data.max_sample)
++			data.max_sample = sample;
++	}
++
++	ret = 0;
++out:
++	return ret;
++}
++
++/*
++ * kthread_fn - The CPU time sampling/hardware latency detection kernel thread
++ * @unused: A required part of the kthread API.
++ *
++ * Used to periodically sample the CPU TSC via a call to get_sample. We
++ * use stop_machine, which does (intentionally) introduce latency since we
++ * need to ensure nothing else might be running (and thus preempting).
++ * Obviously this should never be used in production environments.
++ *
++ * stop_machine will typically schedule us only on CPU0, which is fine for
++ * almost every real-world hardware latency situation - but we might later
++ * generalize this if we find there are any actual systems with alternate
++ * SMI delivery or other non-CPU0 hardware latencies.
++ */
++static int kthread_fn(void *unused)
++{
++	int err = 0;
++	u64 interval = 0;
++
++	while (!kthread_should_stop()) {
++
++		mutex_lock(&data.lock);
++
++		err = stop_machine(get_sample, unused, 0);
++		if (err) {
++			/* Houston, we have a problem */
++			mutex_unlock(&data.lock);
++			goto err_out;
++		}
++
++		wake_up(&data.wq); /* wake up reader(s) */
++
++		interval = data.sample_window - data.sample_width;
++		do_div(interval, USEC_PER_MSEC); /* modifies interval value */
++
++		mutex_unlock(&data.lock);
++
++		if (msleep_interruptible(interval))
++			goto out;
++	}
++	goto out;
++err_out:
++	printk(KERN_ERR BANNER "could not call stop_machine, disabling\n");
++	enabled = 0;
++out:
++	return err;
++
++}
++
++/**
++ * start_kthread - Kick off the hardware latency sampling/detector kthread
++ *
++ * This starts a kernel thread that will sit and sample the CPU timestamp
++ * counter (TSC or similar) and look for potential hardware latencies.
++ */
++static int start_kthread(void)
++{
++	kthread = kthread_run(kthread_fn, NULL,
++					DRVNAME);
++	if (IS_ERR(kthread)) {
++		printk(KERN_ERR BANNER "could not start sampling thread\n");
++		enabled = 0;
++		return -ENOMEM;
++	}
++
++	return 0;
++}
++
++/**
++ * stop_kthread - Inform the hardware latency sampling/detector kthread to stop
++ *
++ * This kicks the running hardware latency sampling/detector kernel thread and
++ * tells it to stop sampling now. Use this on unload and at system shutdown.
++ */
++static int stop_kthread(void)
++{
++	int ret;
++
++	ret = kthread_stop(kthread);
++
++	return ret;
++}
++
++/**
++ * __reset_stats - Reset statistics for the hardware latency detector
++ *
++ * We use data to store various statistics and global state. We call this
++ * function in order to reset those when "enable" is toggled on or off, and
++ * also at initialization. Should be called with data.lock held.
++ */
++static void __reset_stats(void)
++{
++	data.count = 0;
++	data.max_sample = 0;
++	ring_buffer_reset(ring_buffer); /* flush out old sample entries */
++}
++
++/**
++ * init_stats - Setup global state statistics for the hardware latency detector
++ *
++ * We use data to store various statistics and global state. We also use
++ * a global ring buffer (ring_buffer) to keep raw samples of detected hardware
++ * induced system latencies. This function initializes these structures and
++ * allocates the global ring buffer also.
++ */
++static int init_stats(void)
++{
++	int ret = -ENOMEM;
++
++	mutex_init(&data.lock);
++	init_waitqueue_head(&data.wq);
++	atomic_set(&data.sample_open, 0);
++
++	ring_buffer = ring_buffer_alloc(buf_size, BUF_FLAGS);
++
++	if (WARN(!ring_buffer, KERN_ERR BANNER
++			       "failed to allocate ring buffer!\n"))
++		goto out;
++
++	__reset_stats();
++	data.threshold = DEFAULT_LAT_THRESHOLD;	    /* threshold us */
++	data.sample_window = DEFAULT_SAMPLE_WINDOW; /* window us */
++	data.sample_width = DEFAULT_SAMPLE_WIDTH;   /* width us */
++
++	ret = 0;
++
++out:
++	return ret;
++
++}
++
++/*
++ * simple_data_read - Wrapper read function for global state debugfs entries
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The userspace provided buffer to read value into
++ * @cnt: The maximum number of bytes to read
++ * @ppos: The current "file" position
++ * @entry: The entry to read from
++ *
++ * This function provides a generic read implementation for the global state
++ * "data" structure debugfs filesystem entries. It would be nice to use
++ * simple_attr_read directly, but we need to make sure that the data.lock
++ * spinlock is held during the actual read (even though we likely won't ever
++ * actually race here as the updater runs under a stop_machine context).
++ */
++static ssize_t simple_data_read(struct file *filp, char __user *ubuf,
++				size_t cnt, loff_t *ppos, const u64 *entry)
++{
++	char buf[U64STR_SIZE];
++	u64 val = 0;
++	int len = 0;
++
++	memset(buf, 0, sizeof(buf));
++
++	if (!entry)
++		return -EFAULT;
++
++	mutex_lock(&data.lock);
++	val = *entry;
++	mutex_unlock(&data.lock);
++
++	len = snprintf(buf, sizeof(buf), "%llu\n", (unsigned long long)val);
++
++	return simple_read_from_buffer(ubuf, cnt, ppos, buf, len);
++
++}
++
++/*
++ * simple_data_write - Wrapper write function for global state debugfs entries
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The userspace provided buffer to write value from
++ * @cnt: The maximum number of bytes to write
++ * @ppos: The current "file" position
++ * @entry: The entry to write to
++ *
++ * This function provides a generic write implementation for the global state
++ * "data" structure debugfs filesystem entries. It would be nice to use
++ * simple_attr_write directly, but we need to make sure that the data.lock
++ * mutex is held during the actual write (even though we likely won't ever
++ * actually race here as the updater runs under a stop_machine context).
++ */
++static ssize_t simple_data_write(struct file *filp, const char __user *ubuf,
++				 size_t cnt, loff_t *ppos, u64 *entry)
++{
++	char buf[U64STR_SIZE];
++	int csize = min(cnt, sizeof(buf));
++	u64 val = 0;
++	int err = 0;
++
++	memset(buf, '\0', sizeof(buf));
++	if (copy_from_user(buf, ubuf, csize))
++		return -EFAULT;
++
++	buf[U64STR_SIZE-1] = '\0';			/* just in case */
++	err = strict_strtoull(buf, 10, &val);
++	if (err)
++		return -EINVAL;
++
++	mutex_lock(&data.lock);
++	*entry = val;
++	mutex_unlock(&data.lock);
++
++	return csize;
++}
++
++/**
++ * debug_count_fopen - Open function for "count" debugfs entry
++ * @inode: The in-kernel inode representation of the debugfs "file"
++ * @filp: The active open file structure for the debugfs "file"
++ *
++ * This function provides an open implementation for the "count" debugfs
++ * interface to the hardware latency detector.
++ */
++static int debug_count_fopen(struct inode *inode, struct file *filp)
++{
++	return 0;
++}
++
++/**
++ * debug_count_fread - Read function for "count" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The userspace provided buffer to read value into
++ * @cnt: The maximum number of bytes to read
++ * @ppos: The current "file" position
++ *
++ * This function provides a read implementation for the "count" debugfs
++ * interface to the hardware latency detector. Can be used to read the
++ * number of latency readings exceeding the configured threshold since
++ * the detector was last reset (e.g. by writing a zero into "count").
++ */
++static ssize_t debug_count_fread(struct file *filp, char __user *ubuf,
++				     size_t cnt, loff_t *ppos)
++{
++	return simple_data_read(filp, ubuf, cnt, ppos, &data.count);
++}
++
++/**
++ * debug_count_fwrite - Write function for "count" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The user buffer that contains the value to write
++ * @cnt: The maximum number of bytes to write to "file"
++ * @ppos: The current position in the debugfs "file"
++ *
++ * This function provides a write implementation for the "count" debugfs
++ * interface to the hardware latency detector. Can be used to write a
++ * desired value, especially to zero the total count.
++ */
++static ssize_t  debug_count_fwrite(struct file *filp,
++				       const char __user *ubuf,
++				       size_t cnt,
++				       loff_t *ppos)
++{
++	return simple_data_write(filp, ubuf, cnt, ppos, &data.count);
++}
++
++/**
++ * debug_enable_fopen - Dummy open function for "enable" debugfs interface
++ * @inode: The in-kernel inode representation of the debugfs "file"
++ * @filp: The active open file structure for the debugfs "file"
++ *
++ * This function provides an open implementation for the "enable" debugfs
++ * interface to the hardware latency detector.
++ */
++static int debug_enable_fopen(struct inode *inode, struct file *filp)
++{
++	return 0;
++}
++
++/**
++ * debug_enable_fread - Read function for "enable" debugfs interface
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The userspace provided buffer to read value into
++ * @cnt: The maximum number of bytes to read
++ * @ppos: The current "file" position
++ *
++ * This function provides a read implementation for the "enable" debugfs
++ * interface to the hardware latency detector. Can be used to determine
++ * whether the detector is currently enabled ("0\n" or "1\n" returned).
++ */
++static ssize_t debug_enable_fread(struct file *filp, char __user *ubuf,
++				      size_t cnt, loff_t *ppos)
++{
++	char buf[4];
++
++	if ((cnt < sizeof(buf)) || (*ppos))
++		return 0;
++
++	buf[0] = enabled ? '1' : '0';
++	buf[1] = '\n';
++	buf[2] = '\0';
++	if (copy_to_user(ubuf, buf, strlen(buf)))
++		return -EFAULT;
++	return *ppos = strlen(buf);
++}
++
++/**
++ * debug_enable_fwrite - Write function for "enable" debugfs interface
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The user buffer that contains the value to write
++ * @cnt: The maximum number of bytes to write to "file"
++ * @ppos: The current position in the debugfs "file"
++ *
++ * This function provides a write implementation for the "enable" debugfs
++ * interface to the hardware latency detector. Can be used to enable or
++ * disable the detector, which will have the side-effect of possibly
++ * also resetting the global stats and kicking off the measuring
++ * kthread (on an enable) or the converse (upon a disable).
++ */
++static ssize_t  debug_enable_fwrite(struct file *filp,
++					const char __user *ubuf,
++					size_t cnt,
++					loff_t *ppos)
++{
++	char buf[4];
++	int csize = min(cnt, sizeof(buf));
++	long val = 0;
++	int err = 0;
++
++	memset(buf, '\0', sizeof(buf));
++	if (copy_from_user(buf, ubuf, csize))
++		return -EFAULT;
++
++	buf[sizeof(buf)-1] = '\0';			/* just in case */
++	err = strict_strtoul(buf, 10, &val);
++	if (0 != err)
++		return -EINVAL;
++
++	if (val) {
++		if (enabled)
++			goto unlock;
++		enabled = 1;
++		__reset_stats();
++		if (start_kthread())
++			return -EFAULT;
++	} else {
++		if (!enabled)
++			goto unlock;
++		enabled = 0;
++		err = stop_kthread();
++		if (err) {
++			printk(KERN_ERR BANNER "cannot stop kthread\n");
++			return -EFAULT;
++		}
++		wake_up(&data.wq);		/* reader(s) should return */
++	}
++unlock:
++	return csize;
++}
++
++/**
++ * debug_max_fopen - Open function for "max" debugfs entry
++ * @inode: The in-kernel inode representation of the debugfs "file"
++ * @filp: The active open file structure for the debugfs "file"
++ *
++ * This function provides an open implementation for the "max" debugfs
++ * interface to the hardware latency detector.
++ */
++static int debug_max_fopen(struct inode *inode, struct file *filp)
++{
++	return 0;
++}
++
++/**
++ * debug_max_fread - Read function for "max" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The userspace provided buffer to read value into
++ * @cnt: The maximum number of bytes to read
++ * @ppos: The current "file" position
++ *
++ * This function provides a read implementation for the "max" debugfs
++ * interface to the hardware latency detector. Can be used to determine
++ * the maximum latency value observed since it was last reset.
++ */
++static ssize_t debug_max_fread(struct file *filp, char __user *ubuf,
++				   size_t cnt, loff_t *ppos)
++{
++	return simple_data_read(filp, ubuf, cnt, ppos, &data.max_sample);
++}
++
++/**
++ * debug_max_fwrite - Write function for "max" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The user buffer that contains the value to write
++ * @cnt: The maximum number of bytes to write to "file"
++ * @ppos: The current position in the debugfs "file"
++ *
++ * This function provides a write implementation for the "max" debugfs
++ * interface to the hardware latency detector. Can be used to reset the
++ * maximum or set it to some other desired value - if, then, subsequent
++ * measurements exceed this value, the maximum will be updated.
++ */
++static ssize_t  debug_max_fwrite(struct file *filp,
++				     const char __user *ubuf,
++				     size_t cnt,
++				     loff_t *ppos)
++{
++	return simple_data_write(filp, ubuf, cnt, ppos, &data.max_sample);
++}
++
++
++/**
++ * debug_sample_fopen - An open function for "sample" debugfs interface
++ * @inode: The in-kernel inode representation of this debugfs "file"
++ * @filp: The active open file structure for the debugfs "file"
++ *
++ * This function handles opening the "sample" file within the hardware
++ * latency detector debugfs directory interface. This file is used to read
++ * raw samples from the global ring_buffer and allows the user to see a
++ * running latency history. Can be opened in blocking or non-blocking
++ * mode, which determines whether reads wait for new samples to arrive.
++ * Implements simple locking to prevent multiple simultaneous use.
++ */
++static int debug_sample_fopen(struct inode *inode, struct file *filp)
++{
++	if (!atomic_add_unless(&data.sample_open, 1, 1))
++		return -EBUSY;
++	else
++		return 0;
++}
++
++/**
++ * debug_sample_fread - A read function for "sample" debugfs interface
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The user buffer that will contain the samples read
++ * @cnt: The maximum bytes to read from the debugfs "file"
++ * @ppos: The current position in the debugfs "file"
++ *
++ * This function handles reading from the "sample" file within the hardware
++ * latency detector debugfs directory interface. This file is used to read
++ * raw samples from the global ring_buffer and allows the user to see a
++ * running latency history. By default this will block pending a new
++ * value written into the sample buffer, unless there are already a
++ * number of value(s) waiting in the buffer, or the sample file was
++ * previously opened in a non-blocking mode of operation.
++ */
++static ssize_t debug_sample_fread(struct file *filp, char __user *ubuf,
++					size_t cnt, loff_t *ppos)
++{
++	int len = 0;
++	char buf[64];
++	struct sample *sample = NULL;
++
++	if (!enabled)
++		return 0;
++
++	sample = kzalloc(sizeof(struct sample), GFP_KERNEL);
++	if (!sample)
++		return -ENOMEM;
++
++	while (!buffer_get_sample(sample)) {
++
++		DEFINE_WAIT(wait);
++
++		if (filp->f_flags & O_NONBLOCK) {
++			len = -EAGAIN;
++			goto out;
++		}
++
++		prepare_to_wait(&data.wq, &wait, TASK_INTERRUPTIBLE);
++		schedule();
++		finish_wait(&data.wq, &wait);
++
++		if (signal_pending(current)) {
++			len = -EINTR;
++			goto out;
++		}
++
++		if (!enabled) {			/* enable was toggled */
++			len = 0;
++			goto out;
++		}
++	}
++
++	len = snprintf(buf, sizeof(buf), "%010lu.%010lu\t%llu\n",
++		      sample->timestamp.tv_sec,
++		      sample->timestamp.tv_nsec,
++		      sample->duration);
++
++
++	/* handling partial reads is more trouble than it's worth */
++	if (len > cnt)
++		goto out;
++
++	if (copy_to_user(ubuf, buf, len))
++		len = -EFAULT;
++
++out:
++	kfree(sample);
++	return len;
++}
++
++/**
++ * debug_sample_release - Release function for "sample" debugfs interface
++ * @inode: The in-kernel inode representation of the debugfs "file"
++ * @filp: The active open file structure for the debugfs "file"
++ *
++ * This function completes the close of the debugfs interface "sample" file.
++ * Frees the sample_open "lock" so that other users may open the interface.
++ */
++static int debug_sample_release(struct inode *inode, struct file *filp)
++{
++	atomic_dec(&data.sample_open);
++
++	return 0;
++}
++
++/**
++ * debug_threshold_fopen - Open function for "threshold" debugfs entry
++ * @inode: The in-kernel inode representation of the debugfs "file"
++ * @filp: The active open file structure for the debugfs "file"
++ *
++ * This function provides an open implementation for the "threshold" debugfs
++ * interface to the hardware latency detector.
++ */
++static int debug_threshold_fopen(struct inode *inode, struct file *filp)
++{
++	return 0;
++}
++
++/**
++ * debug_threshold_fread - Read function for "threshold" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The userspace provided buffer to read value into
++ * @cnt: The maximum number of bytes to read
++ * @ppos: The current "file" position
++ *
++ * This function provides a read implementation for the "threshold" debugfs
++ * interface to the hardware latency detector. It can be used to determine
++ * the current threshold level at which a latency will be recorded in the
++ * global ring buffer, typically on the order of 10us.
++ */
++static ssize_t debug_threshold_fread(struct file *filp, char __user *ubuf,
++					 size_t cnt, loff_t *ppos)
++{
++	return simple_data_read(filp, ubuf, cnt, ppos, &data.threshold);
++}
++
++/**
++ * debug_threshold_fwrite - Write function for "threshold" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The user buffer that contains the value to write
++ * @cnt: The maximum number of bytes to write to "file"
++ * @ppos: The current position in the debugfs "file"
++ *
++ * This function provides a write implementation for the "threshold" debugfs
++ * interface to the hardware latency detector. It can be used to configure
++ * the threshold level at which any subsequently detected latencies will
++ * be recorded into the global ring buffer.
++ */
++static ssize_t  debug_threshold_fwrite(struct file *filp,
++					const char __user *ubuf,
++					size_t cnt,
++					loff_t *ppos)
++{
++	int ret;
++
++	ret = simple_data_write(filp, ubuf, cnt, ppos, &data.threshold);
++
++	if (enabled)
++		wake_up_process(kthread);
++
++	return ret;
++}
++
++/**
++ * debug_width_fopen - Open function for "width" debugfs entry
++ * @inode: The in-kernel inode representation of the debugfs "file"
++ * @filp: The active open file structure for the debugfs "file"
++ *
++ * This function provides an open implementation for the "width" debugfs
++ * interface to the hardware latency detector.
++ */
++static int debug_width_fopen(struct inode *inode, struct file *filp)
++{
++	return 0;
++}
++
++/**
++ * debug_width_fread - Read function for "width" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The userspace provided buffer to read value into
++ * @cnt: The maximum number of bytes to read
++ * @ppos: The current "file" position
++ *
++ * This function provides a read implementation for the "width" debugfs
++ * interface to the hardware latency detector. It can be used to determine
++ * for how many us of the total window we will actively sample for any
++ * hardware-induced latency periods. Obviously, it is not possible to
++ * sample constantly and have the system respond to a sample reader, or,
++ * worse, without having the system appear to have gone out to lunch.
++ */
++static ssize_t debug_width_fread(struct file *filp, char __user *ubuf,
++				     size_t cnt, loff_t *ppos)
++{
++	return simple_data_read(filp, ubuf, cnt, ppos, &data.sample_width);
++}
++
++/**
++ * debug_width_fwrite - Write function for "width" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The user buffer that contains the value to write
++ * @cnt: The maximum number of bytes to write to "file"
++ * @ppos: The current position in the debugfs "file"
++ *
++ * This function provides a write implementation for the "width" debugfs
++ * interface to the hardware latency detector. It can be used to configure
++ * for how many us of the total window we will actively sample for any
++ * hardware-induced latency periods. Obviously, it is not possible to
++ * sample constantly and have the system respond to a sample reader, or,
++ * worse, without having the system appear to have gone out to lunch. It
++ * is enforced that the width is less than the total window size.
++ */
++static ssize_t  debug_width_fwrite(struct file *filp,
++				       const char __user *ubuf,
++				       size_t cnt,
++				       loff_t *ppos)
++{
++	char buf[U64STR_SIZE];
++	int csize = min(cnt, sizeof(buf));
++	u64 val = 0;
++	int err = 0;
++
++	memset(buf, '\0', sizeof(buf));
++	if (copy_from_user(buf, ubuf, csize))
++		return -EFAULT;
++
++	buf[U64STR_SIZE-1] = '\0';			/* just in case */
++	err = strict_strtoull(buf, 10, &val);
++	if (0 != err)
++		return -EINVAL;
++
++	mutex_lock(&data.lock);
++	if (val < data.sample_window)
++		data.sample_width = val;
++	else {
++		mutex_unlock(&data.lock);
++		return -EINVAL;
++	}
++	mutex_unlock(&data.lock);
++
++	if (enabled)
++		wake_up_process(kthread);
++
++	return csize;
++}
++
++/**
++ * debug_window_fopen - Open function for "window" debugfs entry
++ * @inode: The in-kernel inode representation of the debugfs "file"
++ * @filp: The active open file structure for the debugfs "file"
++ *
++ * This function provides an open implementation for the "window" debugfs
++ * interface to the hardware latency detector. The window is the total time
++ * in us that will be considered one sample period. Conceptually, windows
++ * occur back-to-back and contain a sample width period during which
++ * actual sampling occurs.
++ */
++static int debug_window_fopen(struct inode *inode, struct file *filp)
++{
++	return 0;
++}
++
++/**
++ * debug_window_fread - Read function for "window" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The userspace provided buffer to read value into
++ * @cnt: The maximum number of bytes to read
++ * @ppos: The current "file" position
++ *
++ * This function provides a read implementation for the "window" debugfs
++ * interface to the hardware latency detector. The window is the total time
++ * in us that will be considered one sample period. Conceptually, windows
++ * occur back-to-back and contain a sample width period during which
++ * actual sampling occurs. Can be used to read the total window size.
++ */
++static ssize_t debug_window_fread(struct file *filp, char __user *ubuf,
++				      size_t cnt, loff_t *ppos)
++{
++	return simple_data_read(filp, ubuf, cnt, ppos, &data.sample_window);
++}
++
++/**
++ * debug_window_fwrite - Write function for "window" debugfs entry
++ * @filp: The active open file structure for the debugfs "file"
++ * @ubuf: The user buffer that contains the value to write
++ * @cnt: The maximum number of bytes to write to "file"
++ * @ppos: The current position in the debugfs "file"
++ *
++ * This function provides a write implementation for the "window" debugfs
++ * interface to the hardware latency detector. The window is the total time
++ * in us that will be considered one sample period. Conceptually, windows
++ * occur back-to-back and contain a sample width period during which
++ * actual sampling occurs. Can be used to write a new total window size. It
++ * is enforced that any value written must be greater than the sample width
++ * size, or an error results.
++ */
++static ssize_t  debug_window_fwrite(struct file *filp,
++					const char __user *ubuf,
++					size_t cnt,
++					loff_t *ppos)
++{
++	char buf[U64STR_SIZE];
++	int csize = min(cnt, sizeof(buf));
++	u64 val = 0;
++	int err = 0;
++
++	memset(buf, '\0', sizeof(buf));
++	if (copy_from_user(buf, ubuf, csize))
++		return -EFAULT;
++
++	buf[U64STR_SIZE-1] = '\0';			/* just in case */
++	err = strict_strtoull(buf, 10, &val);
++	if (0 != err)
++		return -EINVAL;
++
++	mutex_lock(&data.lock);
++	if (data.sample_width < val)
++		data.sample_window = val;
++	else {
++		mutex_unlock(&data.lock);
++		return -EINVAL;
++	}
++	mutex_unlock(&data.lock);
++
++	return csize;
++}
++
++/*
++ * Function pointers for the "count" debugfs file operations
++ */
++static const struct file_operations count_fops = {
++	.open		= debug_count_fopen,
++	.read		= debug_count_fread,
++	.write		= debug_count_fwrite,
++	.owner		= THIS_MODULE,
++};
++
++/*
++ * Function pointers for the "enable" debugfs file operations
++ */
++static const struct file_operations enable_fops = {
++	.open		= debug_enable_fopen,
++	.read		= debug_enable_fread,
++	.write		= debug_enable_fwrite,
++	.owner		= THIS_MODULE,
++};
++
++/*
++ * Function pointers for the "max" debugfs file operations
++ */
++static const struct file_operations max_fops = {
++	.open		= debug_max_fopen,
++	.read		= debug_max_fread,
++	.write		= debug_max_fwrite,
++	.owner		= THIS_MODULE,
++};
++
++/*
++ * Function pointers for the "sample" debugfs file operations
++ */
++static const struct file_operations sample_fops = {
++	.open 		= debug_sample_fopen,
++	.read		= debug_sample_fread,
++	.release	= debug_sample_release,
++	.owner		= THIS_MODULE,
++};
++
++/*
++ * Function pointers for the "threshold" debugfs file operations
++ */
++static const struct file_operations threshold_fops = {
++	.open		= debug_threshold_fopen,
++	.read		= debug_threshold_fread,
++	.write		= debug_threshold_fwrite,
++	.owner		= THIS_MODULE,
++};
++
++/*
++ * Function pointers for the "width" debugfs file operations
++ */
++static const struct file_operations width_fops = {
++	.open		= debug_width_fopen,
++	.read		= debug_width_fread,
++	.write		= debug_width_fwrite,
++	.owner		= THIS_MODULE,
++};
++
++/*
++ * Function pointers for the "window" debugfs file operations
++ */
++static const struct file_operations window_fops = {
++	.open		= debug_window_fopen,
++	.read		= debug_window_fread,
++	.write		= debug_window_fwrite,
++	.owner		= THIS_MODULE,
++};
++
++/**
++ * init_debugfs - A function to initialize the debugfs interface files
++ *
++ * This function creates entries in debugfs for "hwlat_detector", including
++ * files to read values from the detector, current samples, and the
++ * maximum sample that has been captured since the hardware latency
++ * detector was started.
++ */
++static int init_debugfs(void)
++{
++	int ret = -ENOMEM;
++
++	debug_dir = debugfs_create_dir(DRVNAME, NULL);
++	if (!debug_dir)
++		goto err_debug_dir;
++
++	debug_sample = debugfs_create_file("sample", 0444,
++					       debug_dir, NULL,
++					       &sample_fops);
++	if (!debug_sample)
++		goto err_sample;
++
++	debug_count = debugfs_create_file("count", 0444,
++					      debug_dir, NULL,
++					      &count_fops);
++	if (!debug_count)
++		goto err_count;
++
++	debug_max = debugfs_create_file("max", 0444,
++					    debug_dir, NULL,
++					    &max_fops);
++	if (!debug_max)
++		goto err_max;
++
++	debug_sample_window = debugfs_create_file("window", 0644,
++						      debug_dir, NULL,
++						      &window_fops);
++	if (!debug_sample_window)
++		goto err_window;
++
++	debug_sample_width = debugfs_create_file("width", 0644,
++						     debug_dir, NULL,
++						     &width_fops);
++	if (!debug_sample_width)
++		goto err_width;
++
++	debug_threshold = debugfs_create_file("threshold", 0644,
++						  debug_dir, NULL,
++						  &threshold_fops);
++	if (!debug_threshold)
++		goto err_threshold;
++
++	debug_enable = debugfs_create_file("enable", 0644,
++					       debug_dir, &enabled,
++					       &enable_fops);
++	if (!debug_enable)
++		goto err_enable;
++
++	else {
++		ret = 0;
++		goto out;
++	}
++
++err_enable:
++	debugfs_remove(debug_threshold);
++err_threshold:
++	debugfs_remove(debug_sample_width);
++err_width:
++	debugfs_remove(debug_sample_window);
++err_window:
++	debugfs_remove(debug_max);
++err_max:
++	debugfs_remove(debug_count);
++err_count:
++	debugfs_remove(debug_sample);
++err_sample:
++	debugfs_remove(debug_dir);
++err_debug_dir:
++out:
++	return ret;
++}
++
++/**
++ * free_debugfs - A function to cleanup the debugfs file interface
++ */
++static void free_debugfs(void)
++{
++	/* could also use a debugfs_remove_recursive */
++	debugfs_remove(debug_enable);
++	debugfs_remove(debug_threshold);
++	debugfs_remove(debug_sample_width);
++	debugfs_remove(debug_sample_window);
++	debugfs_remove(debug_max);
++	debugfs_remove(debug_count);
++	debugfs_remove(debug_sample);
++	debugfs_remove(debug_dir);
++}
++
++/**
++ * detector_init - Standard module initialization code
++ */
++static int detector_init(void)
++{
++	int ret = -ENOMEM;
++
++	printk(KERN_INFO BANNER "version %s\n", VERSION);
++
++	ret = init_stats();
++	if (0 != ret)
++		goto out;
++
++	ret = init_debugfs();
++	if (0 != ret)
++		goto err_stats;
++
++	if (enabled)
++		ret = start_kthread();
++
++	goto out;
++
++err_stats:
++	ring_buffer_free(ring_buffer);
++out:
++	return ret;
++
++}
++
++/**
++ * detector_exit - Standard module cleanup code
++ */
++static void detector_exit(void)
++{
++	int err;
++
++	if (enabled) {
++		enabled = 0;
++		err = stop_kthread();
++		if (err)
++			printk(KERN_ERR BANNER "cannot stop kthread\n");
++	}
++
++	free_debugfs();
++	ring_buffer_free(ring_buffer);	/* free up the ring buffer */
++
++}
++
++module_init(detector_init);
++module_exit(detector_exit);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0073-localversion.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0073-localversion.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0073-localversion.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0073-localversion.patch.patch)
@@ -0,0 +1,23 @@
+From b39fbed043fb8ab8674ac44ca67113138408e1d9 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 8 Jul 2011 20:25:16 +0200
+Subject: [PATCH 073/271] localversion.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/n/tip-8vdw4bfcsds27cvox6rpb334@git.kernel.org
+---
+ localversion-rt |    1 +
+ 1 file changed, 1 insertion(+)
+ create mode 100644 localversion-rt
+
+diff --git a/localversion-rt b/localversion-rt
+new file mode 100644
+index 0000000..b2111a2
+--- /dev/null
++++ b/localversion-rt
+@@ -0,0 +1 @@
++-rt24
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0074-early-printk-consolidate.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0074-early-printk-consolidate.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0074-early-printk-consolidate.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0074-early-printk-consolidate.patch.patch)
@@ -0,0 +1,499 @@
+From 017ce14bf2a1a02c30759b285bb80936d29f76e9 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 23 Jul 2011 11:04:08 +0200
+Subject: [PATCH 074/271] early-printk-consolidate.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/arm/kernel/early_printk.c        |   17 +++--------------
+ arch/blackfin/kernel/early_printk.c   |    2 --
+ arch/microblaze/kernel/early_printk.c |   26 ++++----------------------
+ arch/mips/kernel/early_printk.c       |   10 ++++------
+ arch/powerpc/kernel/udbg.c            |    6 ++----
+ arch/sh/kernel/sh_bios.c              |    2 --
+ arch/sparc/kernel/setup_32.c          |    1 +
+ arch/sparc/kernel/setup_64.c          |    8 +++++++-
+ arch/tile/kernel/early_printk.c       |   26 ++++----------------------
+ arch/um/kernel/early_printk.c         |    8 +++++---
+ arch/unicore32/kernel/early_printk.c  |   12 ++++--------
+ arch/x86/kernel/early_printk.c        |   21 ++-------------------
+ include/linux/console.h               |    1 +
+ include/linux/printk.h                |    5 +++++
+ kernel/printk.c                       |   27 ++++++++++++++++++++-------
+ 15 files changed, 62 insertions(+), 110 deletions(-)
+
+diff --git a/arch/arm/kernel/early_printk.c b/arch/arm/kernel/early_printk.c
+index 85aa2b2..4307653 100644
+--- a/arch/arm/kernel/early_printk.c
++++ b/arch/arm/kernel/early_printk.c
+@@ -29,28 +29,17 @@ static void early_console_write(struct console *con, const char *s, unsigned n)
+ 	early_write(s, n);
+ }
+ 
+-static struct console early_console = {
++static struct console early_console_dev = {
+ 	.name =		"earlycon",
+ 	.write =	early_console_write,
+ 	.flags =	CON_PRINTBUFFER | CON_BOOT,
+ 	.index =	-1,
+ };
+ 
+-asmlinkage void early_printk(const char *fmt, ...)
+-{
+-	char buf[512];
+-	int n;
+-	va_list ap;
+-
+-	va_start(ap, fmt);
+-	n = vscnprintf(buf, sizeof(buf), fmt, ap);
+-	early_write(buf, n);
+-	va_end(ap);
+-}
+-
+ static int __init setup_early_printk(char *buf)
+ {
+-	register_console(&early_console);
++	early_console = &early_console_dev;
++	register_console(&early_console_dev);
+ 	return 0;
+ }
+ 
+diff --git a/arch/blackfin/kernel/early_printk.c b/arch/blackfin/kernel/early_printk.c
+index 84ed837..61fbd2d 100644
+--- a/arch/blackfin/kernel/early_printk.c
++++ b/arch/blackfin/kernel/early_printk.c
+@@ -25,8 +25,6 @@ extern struct console *bfin_earlyserial_init(unsigned int port,
+ extern struct console *bfin_jc_early_init(void);
+ #endif
+ 
+-static struct console *early_console;
+-
+ /* Default console */
+ #define DEFAULT_PORT 0
+ #define DEFAULT_CFLAG CS8|B57600
+diff --git a/arch/microblaze/kernel/early_printk.c b/arch/microblaze/kernel/early_printk.c
+index d26d92d..0420624 100644
+--- a/arch/microblaze/kernel/early_printk.c
++++ b/arch/microblaze/kernel/early_printk.c
+@@ -21,7 +21,6 @@
+ #include <asm/setup.h>
+ #include <asm/prom.h>
+ 
+-static u32 early_console_initialized;
+ static u32 base_addr;
+ 
+ #ifdef CONFIG_SERIAL_UARTLITE_CONSOLE
+@@ -109,27 +108,11 @@ static struct console early_serial_uart16550_console = {
+ };
+ #endif /* CONFIG_SERIAL_8250_CONSOLE */
+ 
+-static struct console *early_console;
+-
+-void early_printk(const char *fmt, ...)
+-{
+-	char buf[512];
+-	int n;
+-	va_list ap;
+-
+-	if (early_console_initialized) {
+-		va_start(ap, fmt);
+-		n = vscnprintf(buf, 512, fmt, ap);
+-		early_console->write(early_console, buf, n);
+-		va_end(ap);
+-	}
+-}
+-
+ int __init setup_early_printk(char *opt)
+ {
+ 	int version = 0;
+ 
+-	if (early_console_initialized)
++	if (early_console)
+ 		return 1;
+ 
+ 	base_addr = of_early_console(&version);
+@@ -159,7 +142,6 @@ int __init setup_early_printk(char *opt)
+ 		}
+ 
+ 		register_console(early_console);
+-		early_console_initialized = 1;
+ 		return 0;
+ 	}
+ 	return 1;
+@@ -169,7 +151,7 @@ int __init setup_early_printk(char *opt)
+  * only for early console because of performance degression */
+ void __init remap_early_printk(void)
+ {
+-	if (!early_console_initialized || !early_console)
++	if (!early_console)
+ 		return;
+ 	printk(KERN_INFO "early_printk_console remaping from 0x%x to ",
+ 								base_addr);
+@@ -179,9 +161,9 @@ void __init remap_early_printk(void)
+ 
+ void __init disable_early_printk(void)
+ {
+-	if (!early_console_initialized || !early_console)
++	if (!early_console)
+ 		return;
+ 	printk(KERN_WARNING "disabling early console\n");
+ 	unregister_console(early_console);
+-	early_console_initialized = 0;
++	early_console = NULL;
+ }
+diff --git a/arch/mips/kernel/early_printk.c b/arch/mips/kernel/early_printk.c
+index 9ae813e..973c995 100644
+--- a/arch/mips/kernel/early_printk.c
++++ b/arch/mips/kernel/early_printk.c
+@@ -25,20 +25,18 @@ early_console_write(struct console *con, const char *s, unsigned n)
+ 	}
+ }
+ 
+-static struct console early_console __initdata = {
++static struct console early_console_prom = {
+ 	.name	= "early",
+ 	.write	= early_console_write,
+ 	.flags	= CON_PRINTBUFFER | CON_BOOT,
+ 	.index	= -1
+ };
+ 
+-static int early_console_initialized __initdata;
+-
+ void __init setup_early_printk(void)
+ {
+-	if (early_console_initialized)
++	if (early_console)
+ 		return;
+-	early_console_initialized = 1;
++	early_console = &early_console_prom;
+ 
+-	register_console(&early_console);
++	register_console(&early_console_prom);
+ }
+diff --git a/arch/powerpc/kernel/udbg.c b/arch/powerpc/kernel/udbg.c
+index 57fa2c0..1b9174d 100644
+--- a/arch/powerpc/kernel/udbg.c
++++ b/arch/powerpc/kernel/udbg.c
+@@ -182,15 +182,13 @@ static struct console udbg_console = {
+ 	.index	= 0,
+ };
+ 
+-static int early_console_initialized;
+-
+ /*
+  * Called by setup_system after ppc_md->probe and ppc_md->early_init.
+  * Call it again after setting udbg_putc in ppc_md->setup_arch.
+  */
+ void __init register_early_udbg_console(void)
+ {
+-	if (early_console_initialized)
++	if (early_console)
+ 		return;
+ 
+ 	if (!udbg_putc)
+@@ -200,7 +198,7 @@ void __init register_early_udbg_console(void)
+ 		printk(KERN_INFO "early console immortal !\n");
+ 		udbg_console.flags &= ~CON_BOOT;
+ 	}
+-	early_console_initialized = 1;
++	early_console = &udbg_console;
+ 	register_console(&udbg_console);
+ }
+ 
+diff --git a/arch/sh/kernel/sh_bios.c b/arch/sh/kernel/sh_bios.c
+index 47475cc..a5b51b9 100644
+--- a/arch/sh/kernel/sh_bios.c
++++ b/arch/sh/kernel/sh_bios.c
+@@ -144,8 +144,6 @@ static struct console bios_console = {
+ 	.index		= -1,
+ };
+ 
+-static struct console *early_console;
+-
+ static int __init setup_early_printk(char *buf)
+ {
+ 	int keep_early = 0;
+diff --git a/arch/sparc/kernel/setup_32.c b/arch/sparc/kernel/setup_32.c
+index fe1e3fc..e6475f0 100644
+--- a/arch/sparc/kernel/setup_32.c
++++ b/arch/sparc/kernel/setup_32.c
+@@ -221,6 +221,7 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	boot_flags_init(*cmdline_p);
+ 
++	early_console = &prom_early_console;
+ 	register_console(&prom_early_console);
+ 
+ 	/* Set sparc_cpu_model */
+diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
+index a854a1c..b85d039 100644
+--- a/arch/sparc/kernel/setup_64.c
++++ b/arch/sparc/kernel/setup_64.c
+@@ -487,6 +487,12 @@ static void __init init_sparc64_elf_hwcap(void)
+ 		popc_patch();
+ }
+ 
++static inline void register_prom_console(void)
++{
++	early_console = &prom_early_console;
++	register_console(&prom_early_console);
++}
++
+ void __init setup_arch(char **cmdline_p)
+ {
+ 	/* Initialize PROM console and command line. */
+@@ -498,7 +504,7 @@ void __init setup_arch(char **cmdline_p)
+ #ifdef CONFIG_EARLYFB
+ 	if (btext_find_display())
+ #endif
+-		register_console(&prom_early_console);
++		register_prom_console();
+ 
+ 	if (tlb_type == hypervisor)
+ 		printk("ARCH: SUN4V\n");
+diff --git a/arch/tile/kernel/early_printk.c b/arch/tile/kernel/early_printk.c
+index 493a0e6..ba2ac00 100644
+--- a/arch/tile/kernel/early_printk.c
++++ b/arch/tile/kernel/early_printk.c
+@@ -32,25 +32,8 @@ static struct console early_hv_console = {
+ };
+ 
+ /* Direct interface for emergencies */
+-static struct console *early_console = &early_hv_console;
+-static int early_console_initialized;
+ static int early_console_complete;
+ 
+-static void early_vprintk(const char *fmt, va_list ap)
+-{
+-	char buf[512];
+-	int n = vscnprintf(buf, sizeof(buf), fmt, ap);
+-	early_console->write(early_console, buf, n);
+-}
+-
+-void early_printk(const char *fmt, ...)
+-{
+-	va_list ap;
+-	va_start(ap, fmt);
+-	early_vprintk(fmt, ap);
+-	va_end(ap);
+-}
+-
+ void early_panic(const char *fmt, ...)
+ {
+ 	va_list ap;
+@@ -68,14 +51,13 @@ static int __initdata keep_early;
+ 
+ static int __init setup_early_printk(char *str)
+ {
+-	if (early_console_initialized)
++	if (early_console)
+ 		return 1;
+ 
+ 	if (str != NULL && strncmp(str, "keep", 4) == 0)
+ 		keep_early = 1;
+ 
+ 	early_console = &early_hv_console;
+-	early_console_initialized = 1;
+ 	register_console(early_console);
+ 
+ 	return 0;
+@@ -84,12 +66,12 @@ static int __init setup_early_printk(char *str)
+ void __init disable_early_printk(void)
+ {
+ 	early_console_complete = 1;
+-	if (!early_console_initialized || !early_console)
++	if (!early_console)
+ 		return;
+ 	if (!keep_early) {
+ 		early_printk("disabling early console\n");
+ 		unregister_console(early_console);
+-		early_console_initialized = 0;
++		early_console = NULL;
+ 	} else {
+ 		early_printk("keeping early console\n");
+ 	}
+@@ -97,7 +79,7 @@ void __init disable_early_printk(void)
+ 
+ void warn_early_printk(void)
+ {
+-	if (early_console_complete || early_console_initialized)
++	if (early_console_complete || early_console)
+ 		return;
+ 	early_printk("\
+ Machine shutting down before console output is fully initialized.\n\
+diff --git a/arch/um/kernel/early_printk.c b/arch/um/kernel/early_printk.c
+index ec649bf..183060f 100644
+--- a/arch/um/kernel/early_printk.c
++++ b/arch/um/kernel/early_printk.c
+@@ -16,7 +16,7 @@ static void early_console_write(struct console *con, const char *s, unsigned int
+ 	um_early_printk(s, n);
+ }
+ 
+-static struct console early_console = {
++static struct console early_console_dev = {
+ 	.name = "earlycon",
+ 	.write = early_console_write,
+ 	.flags = CON_BOOT,
+@@ -25,8 +25,10 @@ static struct console early_console = {
+ 
+ static int __init setup_early_printk(char *buf)
+ {
+-	register_console(&early_console);
+-
++	if (!early_console) {
++		early_console = &early_console_dev;
++		register_console(&early_console_dev);
++	}
+ 	return 0;
+ }
+ 
+diff --git a/arch/unicore32/kernel/early_printk.c b/arch/unicore32/kernel/early_printk.c
+index 3922255..9be0d5d 100644
+--- a/arch/unicore32/kernel/early_printk.c
++++ b/arch/unicore32/kernel/early_printk.c
+@@ -33,21 +33,17 @@ static struct console early_ocd_console = {
+ 	.index =	-1,
+ };
+ 
+-/* Direct interface for emergencies */
+-static struct console *early_console = &early_ocd_console;
+-
+-static int __initdata keep_early;
+-
+ static int __init setup_early_printk(char *buf)
+ {
+-	if (!buf)
++	int keep_early;
++
++	if (!buf || early_console)
+ 		return 0;
+ 
+ 	if (strstr(buf, "keep"))
+ 		keep_early = 1;
+ 
+-	if (!strncmp(buf, "ocd", 3))
+-		early_console = &early_ocd_console;
++	early_console = &early_ocd_console;
+ 
+ 	if (keep_early)
+ 		early_console->flags &= ~CON_BOOT;
+diff --git a/arch/x86/kernel/early_printk.c b/arch/x86/kernel/early_printk.c
+index cd28a35..5f3d9c5 100644
+--- a/arch/x86/kernel/early_printk.c
++++ b/arch/x86/kernel/early_printk.c
+@@ -169,25 +169,9 @@ static struct console early_serial_console = {
+ 	.index =	-1,
+ };
+ 
+-/* Direct interface for emergencies */
+-static struct console *early_console = &early_vga_console;
+-static int __initdata early_console_initialized;
+-
+-asmlinkage void early_printk(const char *fmt, ...)
+-{
+-	char buf[512];
+-	int n;
+-	va_list ap;
+-
+-	va_start(ap, fmt);
+-	n = vscnprintf(buf, sizeof(buf), fmt, ap);
+-	early_console->write(early_console, buf, n);
+-	va_end(ap);
+-}
+-
+ static inline void early_console_register(struct console *con, int keep_early)
+ {
+-	if (early_console->index != -1) {
++	if (con->index != -1) {
+ 		printk(KERN_CRIT "ERROR: earlyprintk= %s already used\n",
+ 		       con->name);
+ 		return;
+@@ -207,9 +191,8 @@ static int __init setup_early_printk(char *buf)
+ 	if (!buf)
+ 		return 0;
+ 
+-	if (early_console_initialized)
++	if (early_console)
+ 		return 0;
+-	early_console_initialized = 1;
+ 
+ 	keep = (strstr(buf, "keep") != NULL);
+ 
+diff --git a/include/linux/console.h b/include/linux/console.h
+index 7453cfd..e5b5dc0 100644
+--- a/include/linux/console.h
++++ b/include/linux/console.h
+@@ -133,6 +133,7 @@ struct console {
+ 	for (con = console_drivers; con != NULL; con = con->next)
+ 
+ extern int console_set_on_cmdline;
++extern struct console *early_console;
+ 
+ extern int add_preferred_console(char *name, int idx, char *options);
+ extern int update_console_cmdline(char *name, int idx, char *name_new, int idx_new, char *options);
+diff --git a/include/linux/printk.h b/include/linux/printk.h
+index f0e22f7..be10455 100644
+--- a/include/linux/printk.h
++++ b/include/linux/printk.h
+@@ -88,8 +88,13 @@ int no_printk(const char *fmt, ...)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_EARLY_PRINTK
+ extern asmlinkage __printf(1, 2)
+ void early_printk(const char *fmt, ...);
++#else
++static inline __printf(1, 2) __cold
++void early_printk(const char *s, ...) { }
++#endif
+ 
+ extern int printk_needs_cpu(int cpu);
+ extern void printk_tick(void);
+diff --git a/kernel/printk.c b/kernel/printk.c
+index 7982a0a..b7aa50e 100644
+--- a/kernel/printk.c
++++ b/kernel/printk.c
+@@ -44,13 +44,6 @@
+ 
+ #include <asm/uaccess.h>
+ 
+-/*
+- * Architectures can override it:
+- */
+-void asmlinkage __attribute__((weak)) early_printk(const char *fmt, ...)
+-{
+-}
+-
+ #define __LOG_BUF_LEN	(1 << CONFIG_LOG_BUF_SHIFT)
+ 
+ /* printk's without a loglevel use this.. */
+@@ -521,6 +514,26 @@ static void __call_console_drivers(unsigned start, unsigned end)
+ 	}
+ }
+ 
++#ifdef CONFIG_EARLY_PRINTK
++struct console *early_console;
++
++static void early_vprintk(const char *fmt, va_list ap)
++{
++	char buf[512];
++	int n = vscnprintf(buf, sizeof(buf), fmt, ap);
++	if (early_console)
++		early_console->write(early_console, buf, n);
++}
++
++asmlinkage void early_printk(const char *fmt, ...)
++{
++	va_list ap;
++	va_start(ap, fmt);
++	early_vprintk(fmt, ap);
++	va_end(ap);
++}
++#endif
++
+ static int __read_mostly ignore_loglevel;
+ 
+ static int __init ignore_loglevel_setup(char *str)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0075-printk-kill.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0075-printk-kill.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0075-printk-kill.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0075-printk-kill.patch.patch)
@@ -0,0 +1,125 @@
+From 58cd55a99f02b450aebc3f90b5822f61ba3785de Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 22 Jul 2011 17:58:40 +0200
+Subject: [PATCH 075/271] printk-kill.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/printk.h |    3 ++-
+ kernel/printk.c        |   33 +++++++++++++++++++++++++++++++++
+ kernel/watchdog.c      |   15 +++++++++++++--
+ 3 files changed, 48 insertions(+), 3 deletions(-)
+
+diff --git a/include/linux/printk.h b/include/linux/printk.h
+index be10455..a53adf6 100644
+--- a/include/linux/printk.h
++++ b/include/linux/printk.h
+@@ -91,9 +91,11 @@ int no_printk(const char *fmt, ...)
+ #ifdef CONFIG_EARLY_PRINTK
+ extern asmlinkage __printf(1, 2)
+ void early_printk(const char *fmt, ...);
++extern void printk_kill(void);
+ #else
+ static inline __printf(1, 2) __cold
+ void early_printk(const char *s, ...) { }
++static inline void printk_kill(void) { }
+ #endif
+ 
+ extern int printk_needs_cpu(int cpu);
+@@ -114,7 +116,6 @@ extern int __printk_ratelimit(const char *func);
+ #define printk_ratelimit() __printk_ratelimit(__func__)
+ extern bool printk_timed_ratelimit(unsigned long *caller_jiffies,
+ 				   unsigned int interval_msec);
+-
+ extern int printk_delay_msec;
+ extern int dmesg_restrict;
+ extern int kptr_restrict;
+diff --git a/kernel/printk.c b/kernel/printk.c
+index b7aa50e..96ee3cd 100644
+--- a/kernel/printk.c
++++ b/kernel/printk.c
+@@ -532,6 +532,32 @@ asmlinkage void early_printk(const char *fmt, ...)
+ 	early_vprintk(fmt, ap);
+ 	va_end(ap);
+ }
++
++/*
++ * This is independent of any log levels - a global
++ * kill switch that turns off all of printk.
++ *
++ * Used by the NMI watchdog if early-printk is enabled.
++ */
++static int __read_mostly printk_killswitch;
++
++void printk_kill(void)
++{
++	printk_killswitch = 1;
++}
++
++static int forced_early_printk(const char *fmt, va_list ap)
++{
++	if (!printk_killswitch)
++		return 0;
++	early_vprintk(fmt, ap);
++	return 1;
++}
++#else
++static inline int forced_early_printk(const char *fmt, va_list ap)
++{
++	return 0;
++}
+ #endif
+ 
+ static int __read_mostly ignore_loglevel;
+@@ -850,6 +876,13 @@ asmlinkage int vprintk(const char *fmt, va_list args)
+ 	size_t plen;
+ 	char special;
+ 
++	/*
++	 * Fall back to early_printk if a debugging subsystem has
++	 * killed printk output
++	 */
++	if (unlikely(forced_early_printk(fmt, args)))
++		return 1;
++
+ 	boot_delay_msec();
+ 	printk_delay();
+ 
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 1d7bca7..c7e2a2f 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -201,6 +201,8 @@ static int is_softlockup(unsigned long touch_ts)
+ 
+ #ifdef CONFIG_HARDLOCKUP_DETECTOR
+ 
++static DEFINE_RAW_SPINLOCK(watchdog_output_lock);
++
+ static struct perf_event_attr wd_hw_attr = {
+ 	.type		= PERF_TYPE_HARDWARE,
+ 	.config		= PERF_COUNT_HW_CPU_CYCLES,
+@@ -235,10 +237,19 @@ static void watchdog_overflow_callback(struct perf_event *event,
+ 		if (__this_cpu_read(hard_watchdog_warn) == true)
+ 			return;
+ 
+-		if (hardlockup_panic)
++		/*
++		 * If early-printk is enabled then make sure we do not
++		 * lock up in printk() and kill console logging:
++		 */
++		printk_kill();
++
++		if (hardlockup_panic) {
+ 			panic("Watchdog detected hard LOCKUP on cpu %d", this_cpu);
+-		else
++		} else {
++			raw_spin_lock(&watchdog_output_lock);
+ 			WARN(1, "Watchdog detected hard LOCKUP on cpu %d", this_cpu);
++			raw_spin_unlock(&watchdog_output_lock);
++		}
+ 
+ 		__this_cpu_write(hard_watchdog_warn, true);
+ 		return;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0076-printk-force_early_printk-boot-param-to-help-with-de.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0076-printk-force_early_printk-boot-param-to-help-with-de.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0076-printk-force_early_printk-boot-param-to-help-with-de.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0076-printk-force_early_printk-boot-param-to-help-with-de.patch)
@@ -0,0 +1,38 @@
+From 6f105f763b9a9cb3e0536dfce235d0dbf7164a0c Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Fri, 2 Sep 2011 14:29:33 +0200
+Subject: [PATCH 076/271] printk: 'force_early_printk' boot param to help with
+ debugging
+
+Gives me an option to screw printk and actually see what the machine
+says.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/r/1314967289.1301.11.camel@twins
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Link: http://lkml.kernel.org/n/tip-ykb97nsfmobq44xketrxs977@git.kernel.org
+---
+ kernel/printk.c |    7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/kernel/printk.c b/kernel/printk.c
+index 96ee3cd..1f06626 100644
+--- a/kernel/printk.c
++++ b/kernel/printk.c
+@@ -541,6 +541,13 @@ asmlinkage void early_printk(const char *fmt, ...)
+  */
+ static int __read_mostly printk_killswitch;
+ 
++static int __init force_early_printk_setup(char *str)
++{
++	printk_killswitch = 1;
++	return 0;
++}
++early_param("force_early_printk", force_early_printk_setup);
++
+ void printk_kill(void)
+ {
+ 	printk_killswitch = 1;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0077-rt-preempt-base-config.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0077-rt-preempt-base-config.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0077-rt-preempt-base-config.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0077-rt-preempt-base-config.patch.patch)
@@ -0,0 +1,55 @@
+From 9113b53e8be415daa5ea256197be21b9211d9719 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 17 Jun 2011 12:39:57 +0200
+Subject: [PATCH 077/271] rt-preempt-base-config.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/Kconfig.preempt |   19 +++++++++++++++++--
+ 1 file changed, 17 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index 24e7cb0..35c6f20 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -1,3 +1,10 @@
++config PREEMPT
++	bool
++	select PREEMPT_COUNT
++
++config PREEMPT_RT_BASE
++	bool
++	select PREEMPT
+ 
+ choice
+ 	prompt "Preemption Model"
+@@ -33,9 +40,9 @@ config PREEMPT_VOLUNTARY
+ 
+ 	  Select this if you are building a kernel for a desktop system.
+ 
+-config PREEMPT
++config PREEMPT__LL
+ 	bool "Preemptible Kernel (Low-Latency Desktop)"
+-	select PREEMPT_COUNT
++	select PREEMPT
+ 	help
+ 	  This option reduces the latency of the kernel by making
+ 	  all kernel code (that is not executing in a critical section)
+@@ -51,6 +58,14 @@ config PREEMPT
+ 	  embedded system with latency requirements in the milliseconds
+ 	  range.
+ 
++config PREEMPT_RTB
++	bool "Preemptible Kernel (Basic RT)"
++	select PREEMPT_RT_BASE
++	help
++	  This option is basically the same as (Low-Latency Desktop) but
++	  enables changes which are preliminary for the full preemptible
++	  RT kernel.
++
+ endchoice
+ 
+ config PREEMPT_COUNT
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0078-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0078-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0078-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0078-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch)
@@ -0,0 +1,48 @@
+From 9729dd0a86e87c3fbee8795bee8c6617a428286b Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:58 -0500
+Subject: [PATCH 078/271] bug: BUG_ON/WARN_ON variants dependend on RT/!RT
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/asm-generic/bug.h |   18 ++++++++++++++++++
+ 1 file changed, 18 insertions(+)
+
+diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h
+index 84458b0..97c1eaf 100644
+--- a/include/asm-generic/bug.h
++++ b/include/asm-generic/bug.h
+@@ -3,6 +3,10 @@
+ 
+ #include <linux/compiler.h>
+ 
++#ifndef __ASSEMBLY__
++extern void __WARN_ON(const char *func, const char *file, const int line);
++#endif /* __ASSEMBLY__ */
++
+ #ifdef CONFIG_BUG
+ 
+ #ifdef CONFIG_GENERIC_BUG
+@@ -202,4 +206,18 @@ extern void warn_slowpath_null(const char *file, const int line);
+ # define WARN_ON_SMP(x)			({0;})
+ #endif
+ 
++#ifdef CONFIG_PREEMPT_RT_BASE
++# define BUG_ON_RT(c)			BUG_ON(c)
++# define BUG_ON_NONRT(c)		do { } while (0)
++# define WARN_ON_RT(condition)		WARN_ON(condition)
++# define WARN_ON_NONRT(condition)	do { } while (0)
++# define WARN_ON_ONCE_NONRT(condition)	do { } while (0)
++#else
++# define BUG_ON_RT(c)			do { } while (0)
++# define BUG_ON_NONRT(c)		BUG_ON(c)
++# define WARN_ON_RT(condition)		do { } while (0)
++# define WARN_ON_NONRT(condition)	WARN_ON(condition)
++# define WARN_ON_ONCE_NONRT(condition)	WARN_ON_ONCE(condition)
++#endif
++
+ #endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0079-rt-local_irq_-variants-depending-on-RT-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0079-rt-local_irq_-variants-depending-on-RT-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0079-rt-local_irq_-variants-depending-on-RT-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0079-rt-local_irq_-variants-depending-on-RT-RT.patch)
@@ -0,0 +1,59 @@
+From bd4c69742526cbd057c4034a101a4b1139501b21 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 21 Jul 2009 22:34:14 +0200
+Subject: [PATCH 079/271] rt: local_irq_* variants depending on RT/!RT
+
+Add local_irq_*_(no)rt variant which are mainly used to break
+interrupt disabled sections on PREEMPT_RT or to explicitly disable
+interrupts on PREEMPT_RT.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/interrupt.h |    2 +-
+ include/linux/irqflags.h  |   19 +++++++++++++++++++
+ 2 files changed, 20 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index 21b94de..ddd6b2a 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -219,7 +219,7 @@ extern void devm_free_irq(struct device *dev, unsigned int irq, void *dev_id);
+ #ifdef CONFIG_LOCKDEP
+ # define local_irq_enable_in_hardirq()	do { } while (0)
+ #else
+-# define local_irq_enable_in_hardirq()	local_irq_enable()
++# define local_irq_enable_in_hardirq()	local_irq_enable_nort()
+ #endif
+ 
+ extern void disable_irq_nosync(unsigned int irq);
+diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
+index d176d65..37b13c4 100644
+--- a/include/linux/irqflags.h
++++ b/include/linux/irqflags.h
+@@ -147,4 +147,23 @@
+ 
+ #endif /* CONFIG_TRACE_IRQFLAGS_SUPPORT */
+ 
++/*
++ * local_irq* variants depending on RT/!RT
++ */
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define local_irq_disable_nort()	do { } while (0)
++# define local_irq_enable_nort()	do { } while (0)
++# define local_irq_save_nort(flags)	do { local_save_flags(flags); } while (0)
++# define local_irq_restore_nort(flags)	do { (void)(flags); } while (0)
++# define local_irq_disable_rt()		local_irq_disable()
++# define local_irq_enable_rt()		local_irq_enable()
++#else
++# define local_irq_disable_nort()	local_irq_disable()
++# define local_irq_enable_nort()	local_irq_enable()
++# define local_irq_save_nort(flags)	local_irq_save(flags)
++# define local_irq_restore_nort(flags)	local_irq_restore(flags)
++# define local_irq_disable_rt()		do { } while (0)
++# define local_irq_enable_rt()		do { } while (0)
++#endif
++
+ #endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0080-preempt-Provide-preempt_-_-no-rt-variants.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0080-preempt-Provide-preempt_-_-no-rt-variants.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0080-preempt-Provide-preempt_-_-no-rt-variants.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0080-preempt-Provide-preempt_-_-no-rt-variants.patch)
@@ -0,0 +1,57 @@
+From 7b7525cf0311e388a50673a7f92e3a39aed2b73e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 24 Jul 2009 12:38:56 +0200
+Subject: [PATCH 080/271] preempt: Provide preempt_*_(no)rt variants
+
+RT needs a few preempt_disable/enable points which are not necessary
+otherwise. Implement variants to avoid #ifdeffery.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/preempt.h |   20 ++++++++++++++++++--
+ 1 file changed, 18 insertions(+), 2 deletions(-)
+
+diff --git a/include/linux/preempt.h b/include/linux/preempt.h
+index 227b0f5..29db25f 100644
+--- a/include/linux/preempt.h
++++ b/include/linux/preempt.h
+@@ -54,11 +54,15 @@ do { \
+ 	dec_preempt_count(); \
+ } while (0)
+ 
+-#define preempt_enable_no_resched()	__preempt_enable_no_resched()
++#ifndef CONFIG_PREEMPT_RT_BASE
++# define preempt_enable_no_resched()	__preempt_enable_no_resched()
++#else
++# define preempt_enable_no_resched()	preempt_enable()
++#endif
+ 
+ #define preempt_enable() \
+ do { \
+-	preempt_enable_no_resched(); \
++	__preempt_enable_no_resched(); \
+ 	barrier(); \
+ 	preempt_check_resched(); \
+ } while (0)
+@@ -104,6 +108,18 @@ do { \
+ 
+ #endif /* CONFIG_PREEMPT_COUNT */
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define preempt_disable_rt()		preempt_disable()
++# define preempt_enable_rt()		preempt_enable()
++# define preempt_disable_nort()		do { } while (0)
++# define preempt_enable_nort()		do { } while (0)
++#else
++# define preempt_disable_rt()		do { } while (0)
++# define preempt_enable_rt()		do { } while (0)
++# define preempt_disable_nort()		preempt_disable()
++# define preempt_enable_nort()		preempt_enable()
++#endif
++
+ #ifdef CONFIG_PREEMPT_NOTIFIERS
+ 
+ struct preempt_notifier;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0081-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0081-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0081-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0081-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch)
@@ -0,0 +1,70 @@
+From 284d23b160b640b8aa063dfd1184bb0b8a93988d Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <srostedt at redhat.com>
+Date: Fri, 3 Jul 2009 08:44:29 -0500
+Subject: [PATCH 081/271] ata: Do not disable interrupts in ide code for
+ preempt-rt
+
+Use the local_irq_*_nort variants.
+
+Signed-off-by: Steven Rostedt <srostedt at redhat.com>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/ata/libata-sff.c |   12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
+index 4cadfa2..100f85c 100644
+--- a/drivers/ata/libata-sff.c
++++ b/drivers/ata/libata-sff.c
+@@ -678,9 +678,9 @@ unsigned int ata_sff_data_xfer_noirq(struct ata_device *dev, unsigned char *buf,
+ 	unsigned long flags;
+ 	unsigned int consumed;
+ 
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	consumed = ata_sff_data_xfer32(dev, buf, buflen, rw);
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 
+ 	return consumed;
+ }
+@@ -719,7 +719,7 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ 		unsigned long flags;
+ 
+ 		/* FIXME: use a bounce buffer */
+-		local_irq_save(flags);
++		local_irq_save_nort(flags);
+ 		buf = kmap_atomic(page, KM_IRQ0);
+ 
+ 		/* do the actual data transfer */
+@@ -727,7 +727,7 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ 				       do_write);
+ 
+ 		kunmap_atomic(buf, KM_IRQ0);
+-		local_irq_restore(flags);
++		local_irq_restore_nort(flags);
+ 	} else {
+ 		buf = page_address(page);
+ 		ap->ops->sff_data_xfer(qc->dev, buf + offset, qc->sect_size,
+@@ -864,7 +864,7 @@ next_sg:
+ 		unsigned long flags;
+ 
+ 		/* FIXME: use bounce buffer */
+-		local_irq_save(flags);
++		local_irq_save_nort(flags);
+ 		buf = kmap_atomic(page, KM_IRQ0);
+ 
+ 		/* do the actual data transfer */
+@@ -872,7 +872,7 @@ next_sg:
+ 								count, rw);
+ 
+ 		kunmap_atomic(buf, KM_IRQ0);
+-		local_irq_restore(flags);
++		local_irq_restore_nort(flags);
+ 	} else {
+ 		buf = page_address(page);
+ 		consumed = ap->ops->sff_data_xfer(dev,  buf + offset,
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0082-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0082-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0082-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0082-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch)
@@ -0,0 +1,186 @@
+From 096886104439db1f9afb4a8d9f6c564fe6fe8054 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:16 -0500
+Subject: [PATCH 082/271] ide: Do not disable interrupts for PREEMPT-RT
+
+Use the local_irq_*_nort variants.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/ide/alim15x3.c     |    4 ++--
+ drivers/ide/hpt366.c       |    4 ++--
+ drivers/ide/ide-io-std.c   |    8 ++++----
+ drivers/ide/ide-io.c       |    2 +-
+ drivers/ide/ide-iops.c     |    4 ++--
+ drivers/ide/ide-probe.c    |    4 ++--
+ drivers/ide/ide-taskfile.c |    6 +++---
+ 7 files changed, 16 insertions(+), 16 deletions(-)
+
+diff --git a/drivers/ide/alim15x3.c b/drivers/ide/alim15x3.c
+index 2c8016a..6fd6037 100644
+--- a/drivers/ide/alim15x3.c
++++ b/drivers/ide/alim15x3.c
+@@ -234,7 +234,7 @@ static int init_chipset_ali15x3(struct pci_dev *dev)
+ 
+ 	isa_dev = pci_get_device(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, NULL);
+ 
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 
+ 	if (m5229_revision < 0xC2) {
+ 		/*
+@@ -325,7 +325,7 @@ out:
+ 	}
+ 	pci_dev_put(north);
+ 	pci_dev_put(isa_dev);
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/ide/hpt366.c b/drivers/ide/hpt366.c
+index 58c51cd..d2a4059 100644
+--- a/drivers/ide/hpt366.c
++++ b/drivers/ide/hpt366.c
+@@ -1241,7 +1241,7 @@ static int __devinit init_dma_hpt366(ide_hwif_t *hwif,
+ 
+ 	dma_old = inb(base + 2);
+ 
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 
+ 	dma_new = dma_old;
+ 	pci_read_config_byte(dev, hwif->channel ? 0x4b : 0x43, &masterdma);
+@@ -1252,7 +1252,7 @@ static int __devinit init_dma_hpt366(ide_hwif_t *hwif,
+ 	if (dma_new != dma_old)
+ 		outb(dma_new, base + 2);
+ 
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 
+ 	printk(KERN_INFO "    %s: BM-DMA at 0x%04lx-0x%04lx\n",
+ 			 hwif->name, base, base + 7);
+diff --git a/drivers/ide/ide-io-std.c b/drivers/ide/ide-io-std.c
+index 1976397..4169433 100644
+--- a/drivers/ide/ide-io-std.c
++++ b/drivers/ide/ide-io-std.c
+@@ -175,7 +175,7 @@ void ide_input_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
+ 		unsigned long uninitialized_var(flags);
+ 
+ 		if ((io_32bit & 2) && !mmio) {
+-			local_irq_save(flags);
++			local_irq_save_nort(flags);
+ 			ata_vlb_sync(io_ports->nsect_addr);
+ 		}
+ 
+@@ -186,7 +186,7 @@ void ide_input_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
+ 			insl(data_addr, buf, words);
+ 
+ 		if ((io_32bit & 2) && !mmio)
+-			local_irq_restore(flags);
++			local_irq_restore_nort(flags);
+ 
+ 		if (((len + 1) & 3) < 2)
+ 			return;
+@@ -219,7 +219,7 @@ void ide_output_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
+ 		unsigned long uninitialized_var(flags);
+ 
+ 		if ((io_32bit & 2) && !mmio) {
+-			local_irq_save(flags);
++			local_irq_save_nort(flags);
+ 			ata_vlb_sync(io_ports->nsect_addr);
+ 		}
+ 
+@@ -230,7 +230,7 @@ void ide_output_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
+ 			outsl(data_addr, buf, words);
+ 
+ 		if ((io_32bit & 2) && !mmio)
+-			local_irq_restore(flags);
++			local_irq_restore_nort(flags);
+ 
+ 		if (((len + 1) & 3) < 2)
+ 			return;
+diff --git a/drivers/ide/ide-io.c b/drivers/ide/ide-io.c
+index 177db6d..079ae6b 100644
+--- a/drivers/ide/ide-io.c
++++ b/drivers/ide/ide-io.c
+@@ -659,7 +659,7 @@ void ide_timer_expiry (unsigned long data)
+ 		/* disable_irq_nosync ?? */
+ 		disable_irq(hwif->irq);
+ 		/* local CPU only, as if we were handling an interrupt */
+-		local_irq_disable();
++		local_irq_disable_nort();
+ 		if (hwif->polling) {
+ 			startstop = handler(drive);
+ 		} else if (drive_is_ready(drive)) {
+diff --git a/drivers/ide/ide-iops.c b/drivers/ide/ide-iops.c
+index 376f2dc..f014dd1 100644
+--- a/drivers/ide/ide-iops.c
++++ b/drivers/ide/ide-iops.c
+@@ -129,12 +129,12 @@ int __ide_wait_stat(ide_drive_t *drive, u8 good, u8 bad,
+ 				if ((stat & ATA_BUSY) == 0)
+ 					break;
+ 
+-				local_irq_restore(flags);
++				local_irq_restore_nort(flags);
+ 				*rstat = stat;
+ 				return -EBUSY;
+ 			}
+ 		}
+-		local_irq_restore(flags);
++		local_irq_restore_nort(flags);
+ 	}
+ 	/*
+ 	 * Allow status to settle, then read it again.
+diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
+index 068cef0..38e69e1 100644
+--- a/drivers/ide/ide-probe.c
++++ b/drivers/ide/ide-probe.c
+@@ -196,10 +196,10 @@ static void do_identify(ide_drive_t *drive, u8 cmd, u16 *id)
+ 	int bswap = 1;
+ 
+ 	/* local CPU only; some systems need this */
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	/* read 512 bytes of id info */
+ 	hwif->tp_ops->input_data(drive, NULL, id, SECTOR_SIZE);
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 
+ 	drive->dev_flags |= IDE_DFLAG_ID_READ;
+ #ifdef DEBUG
+diff --git a/drivers/ide/ide-taskfile.c b/drivers/ide/ide-taskfile.c
+index 5bc2839..da861a6 100644
+--- a/drivers/ide/ide-taskfile.c
++++ b/drivers/ide/ide-taskfile.c
+@@ -251,7 +251,7 @@ void ide_pio_bytes(ide_drive_t *drive, struct ide_cmd *cmd,
+ 
+ 		page_is_high = PageHighMem(page);
+ 		if (page_is_high)
+-			local_irq_save(flags);
++			local_irq_save_nort(flags);
+ 
+ 		buf = kmap_atomic(page, KM_BIO_SRC_IRQ) + offset;
+ 
+@@ -272,7 +272,7 @@ void ide_pio_bytes(ide_drive_t *drive, struct ide_cmd *cmd,
+ 		kunmap_atomic(buf, KM_BIO_SRC_IRQ);
+ 
+ 		if (page_is_high)
+-			local_irq_restore(flags);
++			local_irq_restore_nort(flags);
+ 
+ 		len -= nr_bytes;
+ 	}
+@@ -415,7 +415,7 @@ static ide_startstop_t pre_task_out_intr(ide_drive_t *drive,
+ 	}
+ 
+ 	if ((drive->dev_flags & IDE_DFLAG_UNMASK) == 0)
+-		local_irq_disable();
++		local_irq_disable_nort();
+ 
+ 	ide_set_handler(drive, &task_pio_intr, WAIT_WORSTCASE);
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0083-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0083-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0083-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0083-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch)
@@ -0,0 +1,46 @@
+From 6f4faa5f3d64c50255aa8c80591e21cc4477bb3a Mon Sep 17 00:00:00 2001
+From: Sven-Thorsten Dietrich <sdietrich at novell.com>
+Date: Fri, 3 Jul 2009 08:30:35 -0500
+Subject: [PATCH 083/271] infiniband: Mellanox IB driver patch use _nort()
+ primitives
+
+Fixes in_atomic stack-dump, when Mellanox module is loaded into the RT
+Kernel.
+
+Michael S. Tsirkin <mst at dev.mellanox.co.il> sayeth:
+"Basically, if you just make spin_lock_irqsave (and spin_lock_irq) not disable
+interrupts for non-raw spinlocks, I think all of infiniband will be fine without
+changes."
+
+Signed-off-by: Sven-Thorsten Dietrich <sven at thebigcorporation.com>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/infiniband/ulp/ipoib/ipoib_multicast.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index e5069b4..2683192 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -799,7 +799,7 @@ void ipoib_mcast_restart_task(struct work_struct *work)
+ 
+ 	ipoib_mcast_stop_thread(dev, 0);
+ 
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	netif_addr_lock(dev);
+ 	spin_lock(&priv->lock);
+ 
+@@ -881,7 +881,7 @@ void ipoib_mcast_restart_task(struct work_struct *work)
+ 
+ 	spin_unlock(&priv->lock);
+ 	netif_addr_unlock(dev);
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 
+ 	/* We have to cancel outside of the spinlock */
+ 	list_for_each_entry_safe(mcast, tmcast, &remove_list, list) {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0084-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0084-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0084-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0084-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch)
@@ -0,0 +1,50 @@
+From b819d26eb58e5f85b12a837596a64b84f983f815 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:16 -0500
+Subject: [PATCH 084/271] input: gameport: Do not disable interrupts on
+ PREEMPT_RT
+
+Use the _nort() primitives.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/input/gameport/gameport.c |    8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/input/gameport/gameport.c b/drivers/input/gameport/gameport.c
+index c351aa4..1ecaf60 100644
+--- a/drivers/input/gameport/gameport.c
++++ b/drivers/input/gameport/gameport.c
+@@ -87,12 +87,12 @@ static int gameport_measure_speed(struct gameport *gameport)
+ 	tx = 1 << 30;
+ 
+ 	for(i = 0; i < 50; i++) {
+-		local_irq_save(flags);
++		local_irq_save_nort(flags);
+ 		GET_TIME(t1);
+ 		for (t = 0; t < 50; t++) gameport_read(gameport);
+ 		GET_TIME(t2);
+ 		GET_TIME(t3);
+-		local_irq_restore(flags);
++		local_irq_restore_nort(flags);
+ 		udelay(i * 10);
+ 		if ((t = DELTA(t2,t1) - DELTA(t3,t2)) < tx) tx = t;
+ 	}
+@@ -111,11 +111,11 @@ static int gameport_measure_speed(struct gameport *gameport)
+ 	tx = 1 << 30;
+ 
+ 	for(i = 0; i < 50; i++) {
+-		local_irq_save(flags);
++		local_irq_save_nort(flags);
+ 		rdtscl(t1);
+ 		for (t = 0; t < 50; t++) gameport_read(gameport);
+ 		rdtscl(t2);
+-		local_irq_restore(flags);
++		local_irq_restore_nort(flags);
+ 		udelay(i * 10);
+ 		if (t2 - t1 < tx) tx = t2 - t1;
+ 	}
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0085-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0085-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0085-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0085-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch)
@@ -0,0 +1,30 @@
+From e62ea5ff16b12cb3667f22fda730eff8d3b3c622 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 21 Jul 2009 22:54:51 +0200
+Subject: [PATCH 085/271] acpi: Do not disable interrupts on PREEMPT_RT
+
+Use the local_irq_*_nort() variants.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/include/asm/acpi.h |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
+index 610001d..c1c23d2 100644
+--- a/arch/x86/include/asm/acpi.h
++++ b/arch/x86/include/asm/acpi.h
+@@ -51,8 +51,8 @@
+ 
+ #define ACPI_ASM_MACROS
+ #define BREAKPOINT3
+-#define ACPI_DISABLE_IRQS() local_irq_disable()
+-#define ACPI_ENABLE_IRQS()  local_irq_enable()
++#define ACPI_DISABLE_IRQS() local_irq_disable_nort()
++#define ACPI_ENABLE_IRQS()  local_irq_enable_nort()
+ #define ACPI_FLUSH_CPU_CACHE()	wbinvd()
+ 
+ int __acpi_acquire_global_lock(unsigned int *lock);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0086-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0086-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0086-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0086-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch)
@@ -0,0 +1,35 @@
+From 273e93964bd77279c00d19dd3e7cad068ef84c18 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 21 Jul 2009 23:06:05 +0200
+Subject: [PATCH 086/271] core: Do not disable interrupts on RT in
+ kernel/users.c
+
+Use the local_irq_*_nort variants to reduce latencies in RT. The code
+is serialized by the locks. No need to disable interrupts.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/user.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/user.c b/kernel/user.c
+index 71dd236..b831e51 100644
+--- a/kernel/user.c
++++ b/kernel/user.c
+@@ -129,11 +129,11 @@ void free_uid(struct user_struct *up)
+ 	if (!up)
+ 		return;
+ 
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	if (atomic_dec_and_lock(&up->__count, &uidhash_lock))
+ 		free_user(up, flags);
+ 	else
+-		local_irq_restore(flags);
++		local_irq_restore_nort(flags);
+ }
+ 
+ struct user_struct *alloc_uid(struct user_namespace *ns, uid_t uid)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0087-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0087-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0087-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0087-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch)
@@ -0,0 +1,90 @@
+From 53360b4cb71ae3fa820b75eabde85c09714990e2 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:44:33 -0500
+Subject: [PATCH 087/271] core: Do not disable interrupts on RT in
+ res_counter.c
+
+Frederic Weisbecker reported this warning:
+
+[   45.228562] BUG: sleeping function called from invalid context at kernel/rtmutex.c:683
+[   45.228571] in_atomic(): 0, irqs_disabled(): 1, pid: 4290, name: ntpdate
+[   45.228576] INFO: lockdep is turned off.
+[   45.228580] irq event stamp: 0
+[   45.228583] hardirqs last  enabled at (0): [<(null)>] (null)
+[   45.228589] hardirqs last disabled at (0): [<ffffffff8025449d>] copy_process+0x68d/0x1500
+[   45.228602] softirqs last  enabled at (0): [<ffffffff8025449d>] copy_process+0x68d/0x1500
+[   45.228609] softirqs last disabled at (0): [<(null)>] (null)
+[   45.228617] Pid: 4290, comm: ntpdate Tainted: G        W  2.6.29-rc4-rt1-tip #1
+[   45.228622] Call Trace:
+[   45.228632]  [<ffffffff8027dfb0>] ? print_irqtrace_events+0xd0/0xe0
+[   45.228639]  [<ffffffff8024cd73>] __might_sleep+0x113/0x130
+[   45.228646]  [<ffffffff8077c811>] rt_spin_lock+0xa1/0xb0
+[   45.228653]  [<ffffffff80296a3d>] res_counter_charge+0x5d/0x130
+[   45.228660]  [<ffffffff802fb67f>] __mem_cgroup_try_charge+0x7f/0x180
+[   45.228667]  [<ffffffff802fc407>] mem_cgroup_charge_common+0x57/0x90
+[   45.228674]  [<ffffffff80212096>] ? ftrace_call+0x5/0x2b
+[   45.228680]  [<ffffffff802fc49d>] mem_cgroup_newpage_charge+0x5d/0x60
+[   45.228688]  [<ffffffff802d94ce>] __do_fault+0x29e/0x4c0
+[   45.228694]  [<ffffffff8077c843>] ? rt_spin_unlock+0x23/0x80
+[   45.228700]  [<ffffffff802db8b5>] handle_mm_fault+0x205/0x890
+[   45.228707]  [<ffffffff80212096>] ? ftrace_call+0x5/0x2b
+[   45.228714]  [<ffffffff8023495e>] do_page_fault+0x11e/0x2a0
+[   45.228720]  [<ffffffff8077e5a5>] page_fault+0x25/0x30
+[   45.228727]  [<ffffffff8043e1ed>] ? __clear_user+0x3d/0x70
+[   45.228733]  [<ffffffff8043e1d1>] ? __clear_user+0x21/0x70
+
+The reason is the raw IRQ flag use of kernel/res_counter.c.
+
+The irq flags tricks there seem a bit pointless: it cannot protect the
+c->parent linkage because local_irq_save() is only per CPU.
+
+So replace it with _nort(). This code needs a second look.
+
+Reported-by: Frederic Weisbecker <fweisbec at gmail.com>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/res_counter.c |    8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/kernel/res_counter.c b/kernel/res_counter.c
+index 34683ef..21e9ec4 100644
+--- a/kernel/res_counter.c
++++ b/kernel/res_counter.c
+@@ -43,7 +43,7 @@ int res_counter_charge(struct res_counter *counter, unsigned long val,
+ 	struct res_counter *c, *u;
+ 
+ 	*limit_fail_at = NULL;
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	for (c = counter; c != NULL; c = c->parent) {
+ 		spin_lock(&c->lock);
+ 		ret = res_counter_charge_locked(c, val);
+@@ -62,7 +62,7 @@ undo:
+ 		spin_unlock(&u->lock);
+ 	}
+ done:
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 	return ret;
+ }
+ 
+@@ -79,13 +79,13 @@ void res_counter_uncharge(struct res_counter *counter, unsigned long val)
+ 	unsigned long flags;
+ 	struct res_counter *c;
+ 
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	for (c = counter; c != NULL; c = c->parent) {
+ 		spin_lock(&c->lock);
+ 		res_counter_uncharge_locked(c, val);
+ 		spin_unlock(&c->lock);
+ 	}
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ }
+ 
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0088-usb-Use-local_irq_-_nort-variants.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0088-usb-Use-local_irq_-_nort-variants.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0088-usb-Use-local_irq_-_nort-variants.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0088-usb-Use-local_irq_-_nort-variants.patch)
@@ -0,0 +1,39 @@
+From 60cef4fea189e4489cf1b2660ee5d2ba4cc29355 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <srostedt at redhat.com>
+Date: Fri, 3 Jul 2009 08:44:26 -0500
+Subject: [PATCH 088/271] usb: Use local_irq_*_nort() variants
+
+[ tglx: Now that irqf_disabled is dead we should kill that ]
+
+Signed-off-by: Steven Rostedt <srostedt at redhat.com>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/usb/core/hcd.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 8cb9304..32dfd76 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2145,7 +2145,7 @@ irqreturn_t usb_hcd_irq (int irq, void *__hcd)
+ 	 * when the first handler doesn't use it.  So let's just
+ 	 * assume it's never used.
+ 	 */
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 
+ 	if (unlikely(HCD_DEAD(hcd) || !HCD_HW_ACCESSIBLE(hcd))) {
+ 		rc = IRQ_NONE;
+@@ -2158,7 +2158,7 @@ irqreturn_t usb_hcd_irq (int irq, void *__hcd)
+ 		rc = IRQ_HANDLED;
+ 	}
+ 
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 	return rc;
+ }
+ EXPORT_SYMBOL_GPL(usb_hcd_irq);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0089-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0089-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0089-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0089-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch)
@@ -0,0 +1,52 @@
+From aa62959af4605a57f9972797ce76afb8f11d43bf Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 17 Aug 2009 19:49:19 +0200
+Subject: [PATCH 089/271] tty: Do not disable interrupts in put_ldisc on -rt
+
+Fixes the following on PREEMPT_RT:
+
+BUG: sleeping function called from invalid context at kernel/rtmutex.c:684
+in_atomic(): 0, irqs_disabled(): 1, pid: 9116, name: sshd
+Pid: 9116, comm: sshd Not tainted 2.6.31-rc6-rt2 #6
+Call Trace:
+[<ffffffff81034a4f>] __might_sleep+0xec/0xee
+[<ffffffff812fbc6d>] rt_spin_lock+0x34/0x75
+[ffffffff81064a83>] atomic_dec_and_spin_lock+0x36/0x54
+[<ffffffff811df7c7>] put_ldisc+0x57/0xa6
+[<ffffffff811dfb87>] tty_ldisc_hangup+0xe7/0x19f
+[<ffffffff811d9224>] do_tty_hangup+0xff/0x319
+[<ffffffff811d9453>] tty_vhangup+0x15/0x17
+[<ffffffff811e1263>] pty_close+0x127/0x12b
+[<ffffffff811dac41>] tty_release_dev+0x1ad/0x4c0
+....
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/tty/tty_ldisc.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
+index 8e0924f..820f7dc 100644
+--- a/drivers/tty/tty_ldisc.c
++++ b/drivers/tty/tty_ldisc.c
+@@ -71,7 +71,7 @@ static void put_ldisc(struct tty_ldisc *ld)
+ 	 * We really want an "atomic_dec_and_lock_irqsave()",
+ 	 * but we don't have it, so this does it by hand.
+ 	 */
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	if (atomic_dec_and_lock(&ld->users, &tty_ldisc_lock)) {
+ 		struct tty_ldisc_ops *ldo = ld->ops;
+ 
+@@ -82,7 +82,7 @@ static void put_ldisc(struct tty_ldisc *ld)
+ 		kfree(ld);
+ 		return;
+ 	}
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 	wake_up(&tty_ldisc_idle);
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0090-mm-scatterlist-dont-disable-irqs-on-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0090-mm-scatterlist-dont-disable-irqs-on-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0090-mm-scatterlist-dont-disable-irqs-on-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0090-mm-scatterlist-dont-disable-irqs-on-RT.patch)
@@ -0,0 +1,44 @@
+From 4598c7b576dbb428050b3502887a6d90c94b3e40 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 3 Jul 2009 08:44:34 -0500
+Subject: [PATCH 090/271] mm: scatterlist dont disable irqs on RT
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ lib/scatterlist.c |    6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/lib/scatterlist.c b/lib/scatterlist.c
+index 4ceb05d..181f364 100644
+--- a/lib/scatterlist.c
++++ b/lib/scatterlist.c
+@@ -423,7 +423,7 @@ void sg_miter_stop(struct sg_mapping_iter *miter)
+ 			flush_kernel_dcache_page(miter->page);
+ 
+ 		if (miter->__flags & SG_MITER_ATOMIC) {
+-			WARN_ON(!irqs_disabled());
++			WARN_ON_NONRT(!irqs_disabled());
+ 			kunmap_atomic(miter->addr, KM_BIO_SRC_IRQ);
+ 		} else
+ 			kunmap(miter->page);
+@@ -463,7 +463,7 @@ static size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents,
+ 
+ 	sg_miter_start(&miter, sgl, nents, sg_flags);
+ 
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 
+ 	while (sg_miter_next(&miter) && offset < buflen) {
+ 		unsigned int len;
+@@ -480,7 +480,7 @@ static size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents,
+ 
+ 	sg_miter_stop(&miter);
+ 
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 	return offset;
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0091-signal-fix-up-rcu-wreckage.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0091-signal-fix-up-rcu-wreckage.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0091-signal-fix-up-rcu-wreckage.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0091-signal-fix-up-rcu-wreckage.patch.patch)
@@ -0,0 +1,41 @@
+From 51098fa95b231d64448d31e7301123e51b616fd2 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 22 Jul 2011 08:07:08 +0200
+Subject: [PATCH 091/271] signal-fix-up-rcu-wreckage.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/signal.c |    6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 385d137..6b744cb 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1362,12 +1362,12 @@ struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
+ 	struct sighand_struct *sighand;
+ 
+ 	for (;;) {
+-		local_irq_save(*flags);
++		local_irq_save_nort(*flags);
+ 		rcu_read_lock();
+ 		sighand = rcu_dereference(tsk->sighand);
+ 		if (unlikely(sighand == NULL)) {
+ 			rcu_read_unlock();
+-			local_irq_restore(*flags);
++			local_irq_restore_nort(*flags);
+ 			break;
+ 		}
+ 
+@@ -1378,7 +1378,7 @@ struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
+ 		}
+ 		spin_unlock(&sighand->siglock);
+ 		rcu_read_unlock();
+-		local_irq_restore(*flags);
++		local_irq_restore_nort(*flags);
+ 	}
+ 
+ 	return sighand;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0092-net-wireless-warn-nort.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0092-net-wireless-warn-nort.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0092-net-wireless-warn-nort.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0092-net-wireless-warn-nort.patch.patch)
@@ -0,0 +1,26 @@
+From afd78055655a279c7a80a7a5939e85328bb95258 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 21 Jul 2011 21:05:33 +0200
+Subject: [PATCH 092/271] net-wireless-warn-nort.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ net/mac80211/rx.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 064d20f..642d96c 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2958,7 +2958,7 @@ void ieee80211_rx(struct ieee80211_hw *hw, struct sk_buff *skb)
+ 	struct ieee80211_supported_band *sband;
+ 	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+ 
+-	WARN_ON_ONCE(softirq_count() == 0);
++	WARN_ON_ONCE_NONRT(softirq_count() == 0);
+ 
+ 	if (WARN_ON(status->band < 0 ||
+ 		    status->band >= IEEE80211_NUM_BANDS))
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0093-mm-Replace-cgroup_page-bit-spinlock.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0093-mm-Replace-cgroup_page-bit-spinlock.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0093-mm-Replace-cgroup_page-bit-spinlock.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0093-mm-Replace-cgroup_page-bit-spinlock.patch)
@@ -0,0 +1,98 @@
+From c7d566a2a74ce0de490459ea52804d85a1e09e9d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 19 Aug 2009 09:56:42 +0200
+Subject: [PATCH 093/271] mm: Replace cgroup_page bit spinlock
+
+Bit spinlocks are not working on RT. Replace them.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/page_cgroup.h |   28 ++++++++++++++++++++++++++++
+ mm/page_cgroup.c            |    1 +
+ 2 files changed, 29 insertions(+)
+
+diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
+index 961ecc7..2927c08 100644
+--- a/include/linux/page_cgroup.h
++++ b/include/linux/page_cgroup.h
+@@ -30,6 +30,10 @@ enum {
+  */
+ struct page_cgroup {
+ 	unsigned long flags;
++#ifdef CONFIG_PREEMPT_RT_BASE
++	spinlock_t pcg_lock;
++	spinlock_t pcm_lock;
++#endif
+ 	struct mem_cgroup *mem_cgroup;
+ 	struct list_head lru;		/* per cgroup LRU list */
+ };
+@@ -96,30 +100,54 @@ static inline void lock_page_cgroup(struct page_cgroup *pc)
+ 	 * Don't take this lock in IRQ context.
+ 	 * This lock is for pc->mem_cgroup, USED, CACHE, MIGRATION
+ 	 */
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	bit_spin_lock(PCG_LOCK, &pc->flags);
++#else
++	spin_lock(&pc->pcg_lock);
++#endif
+ }
+ 
+ static inline void unlock_page_cgroup(struct page_cgroup *pc)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	bit_spin_unlock(PCG_LOCK, &pc->flags);
++#else
++	spin_unlock(&pc->pcg_lock);
++#endif
+ }
+ 
+ static inline void move_lock_page_cgroup(struct page_cgroup *pc,
+ 	unsigned long *flags)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	/*
+ 	 * We know updates to pc->flags of page cache's stats are from both of
+ 	 * usual context or IRQ context. Disable IRQ to avoid deadlock.
+ 	 */
+ 	local_irq_save(*flags);
+ 	bit_spin_lock(PCG_MOVE_LOCK, &pc->flags);
++#else
++	spin_lock_irqsave(&pc->pcm_lock, *flags);
++#endif
+ }
+ 
+ static inline void move_unlock_page_cgroup(struct page_cgroup *pc,
+ 	unsigned long *flags)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	bit_spin_unlock(PCG_MOVE_LOCK, &pc->flags);
+ 	local_irq_restore(*flags);
++#else
++	spin_unlock_irqrestore(&pc->pcm_lock, *flags);
++#endif
++}
++
++static inline void page_cgroup_lock_init(struct page_cgroup *pc)
++{
++#ifdef CONFIG_PREEMPT_RT_BASE
++	spin_lock_init(&pc->pcg_lock);
++	spin_lock_init(&pc->pcm_lock);
++#endif
+ }
+ 
+ #ifdef CONFIG_SPARSEMEM
+diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
+index 2d123f9..2e0d18d 100644
+--- a/mm/page_cgroup.c
++++ b/mm/page_cgroup.c
+@@ -17,6 +17,7 @@ static void __meminit init_page_cgroup(struct page_cgroup *pc, unsigned long id)
+ 	set_page_cgroup_array_id(pc, id);
+ 	pc->mem_cgroup = NULL;
+ 	INIT_LIST_HEAD(&pc->lru);
++	page_cgroup_lock_init(pc);
+ }
+ static unsigned long total_usage;
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0094-buffer_head-Replace-bh_uptodate_lock-for-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0094-buffer_head-Replace-bh_uptodate_lock-for-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0094-buffer_head-Replace-bh_uptodate_lock-for-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0094-buffer_head-Replace-bh_uptodate_lock-for-rt.patch)
@@ -0,0 +1,171 @@
+From 50e92b8acdddd7c2ac49988c5820847f6006e60a Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 18 Mar 2011 09:18:52 +0100
+Subject: [PATCH 094/271] buffer_head: Replace bh_uptodate_lock for -rt
+
+Wrap the bit_spin_lock calls into a separate inline and add the RT
+replacements with a real spinlock.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ fs/buffer.c                 |   21 +++++++--------------
+ fs/ntfs/aops.c              |   10 +++-------
+ include/linux/buffer_head.h |   34 ++++++++++++++++++++++++++++++++++
+ 3 files changed, 44 insertions(+), 21 deletions(-)
+
+diff --git a/fs/buffer.c b/fs/buffer.c
+index c807931..10e5ea2 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -331,8 +331,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
+ 	 * decide that the page is now completely done.
+ 	 */
+ 	first = page_buffers(page);
+-	local_irq_save(flags);
+-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
++	flags = bh_uptodate_lock_irqsave(first);
+ 	clear_buffer_async_read(bh);
+ 	unlock_buffer(bh);
+ 	tmp = bh;
+@@ -345,8 +344,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
+ 		}
+ 		tmp = tmp->b_this_page;
+ 	} while (tmp != bh);
+-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+-	local_irq_restore(flags);
++	bh_uptodate_unlock_irqrestore(first, flags);
+ 
+ 	/*
+ 	 * If none of the buffers had errors and they are all
+@@ -358,9 +356,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
+ 	return;
+ 
+ still_busy:
+-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+-	local_irq_restore(flags);
+-	return;
++	bh_uptodate_unlock_irqrestore(first, flags);
+ }
+ 
+ /*
+@@ -394,8 +390,7 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate)
+ 	}
+ 
+ 	first = page_buffers(page);
+-	local_irq_save(flags);
+-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
++	flags = bh_uptodate_lock_irqsave(first);
+ 
+ 	clear_buffer_async_write(bh);
+ 	unlock_buffer(bh);
+@@ -407,15 +402,12 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate)
+ 		}
+ 		tmp = tmp->b_this_page;
+ 	}
+-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+-	local_irq_restore(flags);
++	bh_uptodate_unlock_irqrestore(first, flags);
+ 	end_page_writeback(page);
+ 	return;
+ 
+ still_busy:
+-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+-	local_irq_restore(flags);
+-	return;
++	bh_uptodate_unlock_irqrestore(first, flags);
+ }
+ EXPORT_SYMBOL(end_buffer_async_write);
+ 
+@@ -3225,6 +3217,7 @@ struct buffer_head *alloc_buffer_head(gfp_t gfp_flags)
+ 	struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags);
+ 	if (ret) {
+ 		INIT_LIST_HEAD(&ret->b_assoc_buffers);
++		buffer_head_init_locks(ret);
+ 		preempt_disable();
+ 		__this_cpu_inc(bh_accounting.nr);
+ 		recalc_bh_state();
+diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
+index 0b1e885b..7fb7f1b 100644
+--- a/fs/ntfs/aops.c
++++ b/fs/ntfs/aops.c
+@@ -108,8 +108,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
+ 				"0x%llx.", (unsigned long long)bh->b_blocknr);
+ 	}
+ 	first = page_buffers(page);
+-	local_irq_save(flags);
+-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
++	flags = bh_uptodate_lock_irqsave(first);
+ 	clear_buffer_async_read(bh);
+ 	unlock_buffer(bh);
+ 	tmp = bh;
+@@ -124,8 +123,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
+ 		}
+ 		tmp = tmp->b_this_page;
+ 	} while (tmp != bh);
+-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+-	local_irq_restore(flags);
++	bh_uptodate_unlock_irqrestore(first, flags);
+ 	/*
+ 	 * If none of the buffers had errors then we can set the page uptodate,
+ 	 * but we first have to perform the post read mst fixups, if the
+@@ -160,9 +158,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
+ 	unlock_page(page);
+ 	return;
+ still_busy:
+-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+-	local_irq_restore(flags);
+-	return;
++	bh_uptodate_unlock_irqrestore(first, flags);
+ }
+ 
+ /**
+diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
+index 458f497..5c16cf1 100644
+--- a/include/linux/buffer_head.h
++++ b/include/linux/buffer_head.h
+@@ -72,8 +72,42 @@ struct buffer_head {
+ 	struct address_space *b_assoc_map;	/* mapping this buffer is
+ 						   associated with */
+ 	atomic_t b_count;		/* users using this buffer_head */
++#ifdef CONFIG_PREEMPT_RT_BASE
++	spinlock_t b_uptodate_lock;
++#endif
+ };
+ 
++static inline unsigned long bh_uptodate_lock_irqsave(struct buffer_head *bh)
++{
++	unsigned long flags;
++
++#ifndef CONFIG_PREEMPT_RT_BASE
++	local_irq_save(flags);
++	bit_spin_lock(BH_Uptodate_Lock, &bh->b_state);
++#else
++	spin_lock_irqsave(&bh->b_uptodate_lock, flags);
++#endif
++	return flags;
++}
++
++static inline void
++bh_uptodate_unlock_irqrestore(struct buffer_head *bh, unsigned long flags)
++{
++#ifndef CONFIG_PREEMPT_RT_BASE
++	bit_spin_unlock(BH_Uptodate_Lock, &bh->b_state);
++	local_irq_restore(flags);
++#else
++	spin_unlock_irqrestore(&bh->b_uptodate_lock, flags);
++#endif
++}
++
++static inline void buffer_head_init_locks(struct buffer_head *bh)
++{
++#ifdef CONFIG_PREEMPT_RT_BASE
++	spin_lock_init(&bh->b_uptodate_lock);
++#endif
++}
++
+ /*
+  * macro tricks to expand the set_buffer_foo(), clear_buffer_foo()
+  * and buffer_foo() functions.
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0095-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0095-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0095-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0095-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch)
@@ -0,0 +1,113 @@
+From 43b438bb16c45a9258a5e016f7033bc7e1e2303b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 18 Mar 2011 10:11:25 +0100
+Subject: [PATCH 095/271] fs: jbd/jbd2: Make state lock and journal head lock
+ rt safe
+
+bit_spin_locks break under RT.
+
+Based on a previous patch from Steven Rostedt <rostedt at goodmis.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+
+--
+
+ include/linux/buffer_head.h |   10 ++++++++++
+ include/linux/jbd_common.h  |   24 ++++++++++++++++++++++++
+ 2 files changed, 34 insertions(+)
+---
+ include/linux/buffer_head.h |   10 ++++++++++
+ include/linux/jbd_common.h  |   24 ++++++++++++++++++++++++
+ 2 files changed, 34 insertions(+)
+
+diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
+index 5c16cf1..3f8e27b 100644
+--- a/include/linux/buffer_head.h
++++ b/include/linux/buffer_head.h
+@@ -74,6 +74,11 @@ struct buffer_head {
+ 	atomic_t b_count;		/* users using this buffer_head */
+ #ifdef CONFIG_PREEMPT_RT_BASE
+ 	spinlock_t b_uptodate_lock;
++#if defined(CONFIG_JBD) || defined(CONFIG_JBD_MODULE) || \
++    defined(CONFIG_JBD2) || defined(CONFIG_JBD2_MODULE)
++	spinlock_t b_state_lock;
++	spinlock_t b_journal_head_lock;
++#endif
+ #endif
+ };
+ 
+@@ -105,6 +110,11 @@ static inline void buffer_head_init_locks(struct buffer_head *bh)
+ {
+ #ifdef CONFIG_PREEMPT_RT_BASE
+ 	spin_lock_init(&bh->b_uptodate_lock);
++#if defined(CONFIG_JBD) || defined(CONFIG_JBD_MODULE) || \
++    defined(CONFIG_JBD2) || defined(CONFIG_JBD2_MODULE)
++	spin_lock_init(&bh->b_state_lock);
++	spin_lock_init(&bh->b_journal_head_lock);
++#endif
+ #endif
+ }
+ 
+diff --git a/include/linux/jbd_common.h b/include/linux/jbd_common.h
+index 6230f85..11c313e 100644
+--- a/include/linux/jbd_common.h
++++ b/include/linux/jbd_common.h
+@@ -37,32 +37,56 @@ static inline struct journal_head *bh2jh(struct buffer_head *bh)
+ 
+ static inline void jbd_lock_bh_state(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	bit_spin_lock(BH_State, &bh->b_state);
++#else
++	spin_lock(&bh->b_state_lock);
++#endif
+ }
+ 
+ static inline int jbd_trylock_bh_state(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	return bit_spin_trylock(BH_State, &bh->b_state);
++#else
++	return spin_trylock(&bh->b_state_lock);
++#endif
+ }
+ 
+ static inline int jbd_is_locked_bh_state(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	return bit_spin_is_locked(BH_State, &bh->b_state);
++#else
++	return spin_is_locked(&bh->b_state_lock);
++#endif
+ }
+ 
+ static inline void jbd_unlock_bh_state(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	bit_spin_unlock(BH_State, &bh->b_state);
++#else
++	spin_unlock(&bh->b_state_lock);
++#endif
+ }
+ 
+ static inline void jbd_lock_bh_journal_head(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	bit_spin_lock(BH_JournalHead, &bh->b_state);
++#else
++	spin_lock(&bh->b_journal_head_lock);
++#endif
+ }
+ 
+ static inline void jbd_unlock_bh_journal_head(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	bit_spin_unlock(BH_JournalHead, &bh->b_state);
++#else
++	spin_unlock(&bh->b_journal_head_lock);
++#endif
+ }
+ 
+ #endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0096-genirq-Disable-DEBUG_SHIRQ-for-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0096-genirq-Disable-DEBUG_SHIRQ-for-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0096-genirq-Disable-DEBUG_SHIRQ-for-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0096-genirq-Disable-DEBUG_SHIRQ-for-rt.patch)
@@ -0,0 +1,26 @@
+From 237b1f26fdb29f4bafb5bb38caaf2f789014bfb7 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 18 Mar 2011 10:22:04 +0100
+Subject: [PATCH 096/271] genirq: Disable DEBUG_SHIRQ for rt
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ lib/Kconfig.debug |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 82928f5..c347db3 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -151,7 +151,7 @@ config DEBUG_KERNEL
+ 
+ config DEBUG_SHIRQ
+ 	bool "Debug shared IRQ handlers"
+-	depends on DEBUG_KERNEL && GENERIC_HARDIRQS
++	depends on DEBUG_KERNEL && GENERIC_HARDIRQS && !PREEMPT_RT_BASE
+ 	help
+ 	  Enable this to generate a spurious interrupt as soon as a shared
+ 	  interrupt handler is registered, and just before one is deregistered.
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0097-genirq-Disable-random-call-on-preempt-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0097-genirq-Disable-random-call-on-preempt-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0097-genirq-Disable-random-call-on-preempt-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0097-genirq-Disable-random-call-on-preempt-rt.patch)
@@ -0,0 +1,32 @@
+From 4149711fe08f0340a4e090c29902bed82d01d708 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 21 Jul 2009 16:07:37 +0200
+Subject: [PATCH 097/271] genirq: Disable random call on preempt-rt
+
+The random call introduces high latencies and is almost
+unused. Disable it for -rt.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/irq/handle.c |    3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
+index 470d08c..634620c 100644
+--- a/kernel/irq/handle.c
++++ b/kernel/irq/handle.c
+@@ -156,8 +156,11 @@ handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
+ 		action = action->next;
+ 	} while (action);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
++	/* FIXME: Can we unbreak that ? */
+ 	if (random & IRQF_SAMPLE_RANDOM)
+ 		add_interrupt_randomness(irq);
++#endif
+ 
+ 	if (!noirqdebug)
+ 		note_interrupt(irq, desc, retval);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0098-genirq-disable-irqpoll-on-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0098-genirq-disable-irqpoll-on-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0098-genirq-disable-irqpoll-on-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0098-genirq-disable-irqpoll-on-rt.patch)
@@ -0,0 +1,44 @@
+From 495b2823649a551c9267609edb1e71aefbcfdbc9 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:57 -0500
+Subject: [PATCH 098/271] genirq: disable irqpoll on -rt
+
+Creates long latencies for no value
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/irq/spurious.c |   10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
+index dc813a9..d09e0f5 100644
+--- a/kernel/irq/spurious.c
++++ b/kernel/irq/spurious.c
+@@ -341,6 +341,11 @@ MODULE_PARM_DESC(noirqdebug, "Disable irq lockup detection when true");
+ 
+ static int __init irqfixup_setup(char *str)
+ {
++#ifdef CONFIG_PREEMPT_RT_BASE
++	printk(KERN_WARNING "irqfixup boot option not supported "
++		"w/ CONFIG_PREEMPT_RT\n");
++	return 1;
++#endif
+ 	irqfixup = 1;
+ 	printk(KERN_WARNING "Misrouted IRQ fixup support enabled.\n");
+ 	printk(KERN_WARNING "This may impact system performance.\n");
+@@ -353,6 +358,11 @@ module_param(irqfixup, int, 0644);
+ 
+ static int __init irqpoll_setup(char *str)
+ {
++#ifdef CONFIG_PREEMPT_RT_BASE
++	printk(KERN_WARNING "irqpoll boot option not supported "
++		"w/ CONFIG_PREEMPT_RT\n");
++	return 1;
++#endif
+ 	irqfixup = 2;
+ 	printk(KERN_WARNING "Misrouted IRQ fixup and polling support "
+ 				"enabled\n");
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0099-genirq-force-threading.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0099-genirq-force-threading.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0099-genirq-force-threading.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0099-genirq-force-threading.patch.patch)
@@ -0,0 +1,54 @@
+From b502e77a5117fe7ee42aab041fa2411e93c6a542 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 3 Apr 2011 11:57:29 +0200
+Subject: [PATCH 099/271] genirq-force-threading.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/interrupt.h |    8 ++++++--
+ kernel/irq/manage.c       |    2 ++
+ 2 files changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index ddd6b2a..b9162dc 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -396,9 +396,13 @@ static inline int disable_irq_wake(unsigned int irq)
+ 
+ 
+ #ifdef CONFIG_IRQ_FORCED_THREADING
+-extern bool force_irqthreads;
++# ifndef CONFIG_PREEMPT_RT_BASE
++   extern bool force_irqthreads;
++# else
++#  define force_irqthreads	(true)
++# endif
+ #else
+-#define force_irqthreads	(0)
++#define force_irqthreads	(false)
+ #endif
+ 
+ #ifndef __ARCH_SET_SOFTIRQ_PENDING
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 7600092..b3e6228 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -18,6 +18,7 @@
+ #include "internals.h"
+ 
+ #ifdef CONFIG_IRQ_FORCED_THREADING
++# ifndef CONFIG_PREEMPT_RT_BASE
+ __read_mostly bool force_irqthreads;
+ 
+ static int __init setup_forced_irqthreads(char *arg)
+@@ -26,6 +27,7 @@ static int __init setup_forced_irqthreads(char *arg)
+ 	return 0;
+ }
+ early_param("threadirqs", setup_forced_irqthreads);
++# endif
+ #endif
+ 
+ /**
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0100-drivers-net-fix-livelock-issues.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0100-drivers-net-fix-livelock-issues.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0100-drivers-net-fix-livelock-issues.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0100-drivers-net-fix-livelock-issues.patch)
@@ -0,0 +1,144 @@
+From fb73e20b1d7e7333a7c5de9f85749b210792c13d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 20 Jun 2009 11:36:54 +0200
+Subject: [PATCH 100/271] drivers/net: fix livelock issues
+
+Preempt-RT runs into a live lock issue with the NETDEV_TX_LOCKED micro
+optimization. The reason is that the softirq thread is rescheduling
+itself on that return value. Depending on priorities it starts to
+monopolize the CPU and livelock on UP systems.
+
+Remove it.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/net/ethernet/atheros/atl1c/atl1c_main.c      |    6 +-----
+ drivers/net/ethernet/atheros/atl1e/atl1e_main.c      |    3 +--
+ drivers/net/ethernet/chelsio/cxgb/sge.c              |    3 +--
+ drivers/net/ethernet/neterion/s2io.c                 |    7 +------
+ drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c |    7 +++----
+ drivers/net/ethernet/tehuti/tehuti.c                 |    9 ++-------
+ drivers/net/rionet.c                                 |    6 +-----
+ 7 files changed, 10 insertions(+), 31 deletions(-)
+
+diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+index eccdcff..ee8d8a2 100644
+--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
++++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+@@ -2236,11 +2236,7 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
+ 	}
+ 
+ 	tpd_req = atl1c_cal_tpd_req(skb);
+-	if (!spin_trylock_irqsave(&adapter->tx_lock, flags)) {
+-		if (netif_msg_pktdata(adapter))
+-			dev_info(&adapter->pdev->dev, "tx locked\n");
+-		return NETDEV_TX_LOCKED;
+-	}
++	spin_lock_irqsave(&adapter->tx_lock, flags);
+ 
+ 	if (atl1c_tpd_avail(adapter, type) < tpd_req) {
+ 		/* no enough descriptor, just stop queue */
+diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
+index 95483bc..eaf84e9 100644
+--- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
++++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
+@@ -1819,8 +1819,7 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
+ 		return NETDEV_TX_OK;
+ 	}
+ 	tpd_req = atl1e_cal_tdp_req(skb);
+-	if (!spin_trylock_irqsave(&adapter->tx_lock, flags))
+-		return NETDEV_TX_LOCKED;
++	spin_lock_irqsave(&adapter->tx_lock, flags);
+ 
+ 	if (atl1e_tpd_avail(adapter) < tpd_req) {
+ 		/* no enough descriptor, just stop queue */
+diff --git a/drivers/net/ethernet/chelsio/cxgb/sge.c b/drivers/net/ethernet/chelsio/cxgb/sge.c
+index f9b6023..6d7412a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb/sge.c
+@@ -1678,8 +1678,7 @@ static int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
+ 	struct cmdQ *q = &sge->cmdQ[qid];
+ 	unsigned int credits, pidx, genbit, count, use_sched_skb = 0;
+ 
+-	if (!spin_trylock(&q->lock))
+-		return NETDEV_TX_LOCKED;
++	spin_lock(&q->lock);
+ 
+ 	reclaim_completed_tx(sge, q);
+ 
+diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c
+index c27fb3d..4624278 100644
+--- a/drivers/net/ethernet/neterion/s2io.c
++++ b/drivers/net/ethernet/neterion/s2io.c
+@@ -4090,12 +4090,7 @@ static netdev_tx_t s2io_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			[skb->priority & (MAX_TX_FIFOS - 1)];
+ 	fifo = &mac_control->fifos[queue];
+ 
+-	if (do_spin_lock)
+-		spin_lock_irqsave(&fifo->tx_lock, flags);
+-	else {
+-		if (unlikely(!spin_trylock_irqsave(&fifo->tx_lock, flags)))
+-			return NETDEV_TX_LOCKED;
+-	}
++	spin_lock_irqsave(&fifo->tx_lock, flags);
+ 
+ 	if (sp->config.multiq) {
+ 		if (__netif_subqueue_stopped(dev, fifo->fifo_no)) {
+diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+index 43c7b25..c084bea 100644
+--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
++++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+@@ -1937,10 +1937,9 @@ static int pch_gbe_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
+ 		adapter->stats.tx_length_errors++;
+ 		return NETDEV_TX_OK;
+ 	}
+-	if (!spin_trylock_irqsave(&tx_ring->tx_lock, flags)) {
+-		/* Collision - tell upper layer to requeue */
+-		return NETDEV_TX_LOCKED;
+-	}
++
++	spin_lock_irqsave(&tx_ring->tx_lock, flags);
++
+ 	if (unlikely(!PCH_GBE_DESC_UNUSED(tx_ring))) {
+ 		netif_stop_queue(netdev);
+ 		spin_unlock_irqrestore(&tx_ring->tx_lock, flags);
+diff --git a/drivers/net/ethernet/tehuti/tehuti.c b/drivers/net/ethernet/tehuti/tehuti.c
+index 3a90af6..e2e930e 100644
+--- a/drivers/net/ethernet/tehuti/tehuti.c
++++ b/drivers/net/ethernet/tehuti/tehuti.c
+@@ -1605,13 +1605,8 @@ static netdev_tx_t bdx_tx_transmit(struct sk_buff *skb,
+ 	unsigned long flags;
+ 
+ 	ENTER;
+-	local_irq_save(flags);
+-	if (!spin_trylock(&priv->tx_lock)) {
+-		local_irq_restore(flags);
+-		DBG("%s[%s]: TX locked, returning NETDEV_TX_LOCKED\n",
+-		    BDX_DRV_NAME, ndev->name);
+-		return NETDEV_TX_LOCKED;
+-	}
++
++	spin_lock_irqsave(&priv->tx_lock, flags);
+ 
+ 	/* build tx descriptor */
+ 	BDX_ASSERT(f->m.wptr >= f->m.memsz);	/* started with valid wptr */
+diff --git a/drivers/net/rionet.c b/drivers/net/rionet.c
+index 7145714..2a1ed18 100644
+--- a/drivers/net/rionet.c
++++ b/drivers/net/rionet.c
+@@ -176,11 +176,7 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	u16 destid;
+ 	unsigned long flags;
+ 
+-	local_irq_save(flags);
+-	if (!spin_trylock(&rnet->tx_lock)) {
+-		local_irq_restore(flags);
+-		return NETDEV_TX_LOCKED;
+-	}
++	spin_lock_irqsave(&rnet->tx_lock, flags);
+ 
+ 	if ((rnet->tx_cnt + 1) > RIONET_TX_RING_SIZE) {
+ 		netif_stop_queue(ndev);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0101-drivers-net-vortex-fix-locking-issues.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0101-drivers-net-vortex-fix-locking-issues.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0101-drivers-net-vortex-fix-locking-issues.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0101-drivers-net-vortex-fix-locking-issues.patch)
@@ -0,0 +1,57 @@
+From 3431e5abf7b21e512a4e881fd62de81171cd0345 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Fri, 3 Jul 2009 08:30:00 -0500
+Subject: [PATCH 101/271] drivers/net: vortex fix locking issues
+
+Argh, cut and paste wasn't enough...
+
+Use this patch instead.  It needs an irq disable.  But, believe it or not,
+on SMP this is actually better.  If the irq is shared (as it is in Mark's
+case), we don't stop the irq of other devices from being handled on
+another CPU (unfortunately for Mark, he pinned all interrupts to one CPU).
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+
+ drivers/net/ethernet/3com/3c59x.c |    8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+---
+ drivers/net/ethernet/3com/3c59x.c |    8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/net/ethernet/3com/3c59x.c b/drivers/net/ethernet/3com/3c59x.c
+index e0c5529..af31580 100644
+--- a/drivers/net/ethernet/3com/3c59x.c
++++ b/drivers/net/ethernet/3com/3c59x.c
+@@ -843,9 +843,9 @@ static void poll_vortex(struct net_device *dev)
+ {
+ 	struct vortex_private *vp = netdev_priv(dev);
+ 	unsigned long flags;
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	(vp->full_bus_master_rx ? boomerang_interrupt:vortex_interrupt)(dev->irq,dev);
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ }
+ #endif
+ 
+@@ -1921,12 +1921,12 @@ static void vortex_tx_timeout(struct net_device *dev)
+ 			 * Block interrupts because vortex_interrupt does a bare spin_lock()
+ 			 */
+ 			unsigned long flags;
+-			local_irq_save(flags);
++			local_irq_save_nort(flags);
+ 			if (vp->full_bus_master_tx)
+ 				boomerang_interrupt(dev->irq, dev);
+ 			else
+ 				vortex_interrupt(dev->irq, dev);
+-			local_irq_restore(flags);
++			local_irq_restore_nort(flags);
+ 		}
+ 	}
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0102-drivers-net-gianfar-Make-RT-aware.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0102-drivers-net-gianfar-Make-RT-aware.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0102-drivers-net-gianfar-Make-RT-aware.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0102-drivers-net-gianfar-Make-RT-aware.patch)
@@ -0,0 +1,60 @@
+From f2143bd89a36fc19294bf0888f69e055db19150e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 1 Apr 2010 20:20:57 +0200
+Subject: [PATCH 102/271] drivers: net: gianfar: Make RT aware
+
+adjust_link() disables interrupts before taking the queue
+locks. On RT those locks are converted to "sleeping" locks and
+therefore the local_irq_save/restore calls must be converted to
+local_irq_save/restore_nort.
+
+Reported-by: Xianghua Xiao <xiaoxianghua at gmail.com>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Tested-by: Xianghua Xiao <xiaoxianghua at gmail.com>
+---
+ drivers/net/ethernet/freescale/gianfar.c |    8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
+index 83199fd..97d238c 100644
+--- a/drivers/net/ethernet/freescale/gianfar.c
++++ b/drivers/net/ethernet/freescale/gianfar.c
+@@ -1671,7 +1671,7 @@ void stop_gfar(struct net_device *dev)
+ 
+ 
+ 	/* Lock it down */
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	lock_tx_qs(priv);
+ 	lock_rx_qs(priv);
+ 
+@@ -1679,7 +1679,7 @@ void stop_gfar(struct net_device *dev)
+ 
+ 	unlock_rx_qs(priv);
+ 	unlock_tx_qs(priv);
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ 
+ 	/* Free the IRQs */
+ 	if (priv->device_flags & FSL_GIANFAR_DEV_HAS_MULTI_INTR) {
+@@ -2949,7 +2949,7 @@ static void adjust_link(struct net_device *dev)
+ 	struct phy_device *phydev = priv->phydev;
+ 	int new_state = 0;
+ 
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	lock_tx_qs(priv);
+ 
+ 	if (phydev->link) {
+@@ -3016,7 +3016,7 @@ static void adjust_link(struct net_device *dev)
+ 	if (new_state && netif_msg_link(priv))
+ 		phy_print_status(phydev);
+ 	unlock_tx_qs(priv);
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ }
+ 
+ /* Update the hash table based on the current list of multicast
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0103-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0103-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0103-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0103-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch)
@@ -0,0 +1,42 @@
+From b4cadcb970f09b29bc22b63733153dd998a7c8a4 Mon Sep 17 00:00:00 2001
+From: Wu Zhangjin <wuzj at lemote.com>
+Date: Mon, 4 Jan 2010 11:33:02 +0800
+Subject: [PATCH 103/271] USB: Fix the mouse problem when copying large
+ amounts of data
+
+When copying large amounts of data between USB storage devices and
+the hard disk, the USB mouse stops working; this patch fixes that.
+
+[NOTE: This problem has been observed on Loongson family machines; it
+is not clear whether it is reproducible on other platforms]
+
+Signed-off-by: Hu Hongbing <huhb at lemote.com>
+Signed-off-by: Wu Zhangjin <wuzhangjin at gmail.com>
+---
+ drivers/usb/host/ohci-hcd.c |   10 +++++++---
+ 1 file changed, 7 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c
+index b263919..d4b05d1 100644
+--- a/drivers/usb/host/ohci-hcd.c
++++ b/drivers/usb/host/ohci-hcd.c
+@@ -830,9 +830,13 @@ static irqreturn_t ohci_irq (struct usb_hcd *hcd)
+ 	}
+ 
+ 	if (ints & OHCI_INTR_WDH) {
+-		spin_lock (&ohci->lock);
+-		dl_done_list (ohci);
+-		spin_unlock (&ohci->lock);
++		if (ohci->hcca->done_head == 0) {
++			ints &= ~OHCI_INTR_WDH;
++		} else {
++			spin_lock (&ohci->lock);
++			dl_done_list (ohci);
++			spin_unlock (&ohci->lock);
++		}
+ 	}
+ 
+ 	if (quirk_zfmicro(ohci) && (ints & OHCI_INTR_SF)) {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0104-local-var.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0104-local-var.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0104-local-var.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0104-local-var.patch.patch)
@@ -0,0 +1,29 @@
+From 4503940a082ae0386fe109726d2eec9197669dd7 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 24 Jun 2011 18:40:37 +0200
+Subject: [PATCH 104/271] local-var.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/percpu.h |    5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/include/linux/percpu.h b/include/linux/percpu.h
+index 9ca008f..3941ea4 100644
+--- a/include/linux/percpu.h
++++ b/include/linux/percpu.h
+@@ -48,6 +48,11 @@
+ 	preempt_enable();				\
+ } while (0)
+ 
++#define get_local_var(var)	get_cpu_var(var)
++#define put_local_var(var)	put_cpu_var(var)
++#define get_local_ptr(var)	get_cpu_ptr(var)
++#define put_local_ptr(var)	put_cpu_ptr(var)
++
+ /* minimum unit size, also is the maximum supported allocation size */
+ #define PCPU_MIN_UNIT_SIZE		PFN_ALIGN(32 << 10)
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0105-rt-local-irq-lock.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0105-rt-local-irq-lock.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0105-rt-local-irq-lock.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0105-rt-local-irq-lock.patch.patch)
@@ -0,0 +1,250 @@
+From 621ee289179945850cb2878d7c0bc65169d1f275 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 20 Jun 2011 09:03:47 +0200
+Subject: [PATCH 105/271] rt-local-irq-lock.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/locallock.h |  230 +++++++++++++++++++++++++++++++++++++++++++++
+ 1 file changed, 230 insertions(+)
+ create mode 100644 include/linux/locallock.h
+
+diff --git a/include/linux/locallock.h b/include/linux/locallock.h
+new file mode 100644
+index 0000000..8fbc393
+--- /dev/null
++++ b/include/linux/locallock.h
+@@ -0,0 +1,230 @@
++#ifndef _LINUX_LOCALLOCK_H
++#define _LINUX_LOCALLOCK_H
++
++#include <linux/spinlock.h>
++
++#ifdef CONFIG_PREEMPT_RT_BASE
++
++#ifdef CONFIG_DEBUG_SPINLOCK
++# define LL_WARN(cond)	WARN_ON(cond)
++#else
++# define LL_WARN(cond)	do { } while (0)
++#endif
++
++/*
++ * per cpu lock based substitute for local_irq_*()
++ */
++struct local_irq_lock {
++	spinlock_t		lock;
++	struct task_struct	*owner;
++	int			nestcnt;
++	unsigned long		flags;
++};
++
++#define DEFINE_LOCAL_IRQ_LOCK(lvar)					\
++	DEFINE_PER_CPU(struct local_irq_lock, lvar) = {			\
++		.lock = __SPIN_LOCK_UNLOCKED((lvar).lock) }
++
++#define local_irq_lock_init(lvar)					\
++	do {								\
++		int __cpu;						\
++		for_each_possible_cpu(__cpu)				\
++			spin_lock_init(&per_cpu(lvar, __cpu).lock);	\
++	} while (0)
++
++static inline void __local_lock(struct local_irq_lock *lv)
++{
++	if (lv->owner != current) {
++		spin_lock(&lv->lock);
++		LL_WARN(lv->owner);
++		LL_WARN(lv->nestcnt);
++		lv->owner = current;
++	}
++	lv->nestcnt++;
++}
++
++#define local_lock(lvar)					\
++	do { __local_lock(&get_local_var(lvar)); } while (0)
++
++static inline int __local_trylock(struct local_irq_lock *lv)
++{
++	if (lv->owner != current && spin_trylock(&lv->lock)) {
++		LL_WARN(lv->owner);
++		LL_WARN(lv->nestcnt);
++		lv->owner = current;
++		lv->nestcnt = 1;
++		return 1;
++	}
++	return 0;
++}
++
++#define local_trylock(lvar)						\
++	({								\
++		int __locked;						\
++		__locked = __local_trylock(&get_local_var(lvar));	\
++		if (!__locked)						\
++			put_local_var(lvar);				\
++		__locked;						\
++	})
++
++static inline void __local_unlock(struct local_irq_lock *lv)
++{
++	LL_WARN(lv->nestcnt == 0);
++	LL_WARN(lv->owner != current);
++	if (--lv->nestcnt)
++		return;
++
++	lv->owner = NULL;
++	spin_unlock(&lv->lock);
++}
++
++#define local_unlock(lvar)					\
++	do {							\
++		__local_unlock(&__get_cpu_var(lvar));		\
++		put_local_var(lvar);				\
++	} while (0)
++
++static inline void __local_lock_irq(struct local_irq_lock *lv)
++{
++	spin_lock_irqsave(&lv->lock, lv->flags);
++	LL_WARN(lv->owner);
++	LL_WARN(lv->nestcnt);
++	lv->owner = current;
++	lv->nestcnt = 1;
++}
++
++#define local_lock_irq(lvar)						\
++	do { __local_lock_irq(&get_local_var(lvar)); } while (0)
++
++static inline void __local_unlock_irq(struct local_irq_lock *lv)
++{
++	LL_WARN(!lv->nestcnt);
++	LL_WARN(lv->owner != current);
++	lv->owner = NULL;
++	lv->nestcnt = 0;
++	spin_unlock_irq(&lv->lock);
++}
++
++#define local_unlock_irq(lvar)						\
++	do {								\
++		__local_unlock_irq(&__get_cpu_var(lvar));		\
++		put_local_var(lvar);					\
++	} while (0)
++
++static inline int __local_lock_irqsave(struct local_irq_lock *lv)
++{
++	if (lv->owner != current) {
++		__local_lock_irq(lv);
++		return 0;
++	} else {
++		lv->nestcnt++;
++		return 1;
++	}
++}
++
++#define local_lock_irqsave(lvar, _flags)				\
++	do {								\
++		if (__local_lock_irqsave(&get_local_var(lvar)))		\
++			put_local_var(lvar);				\
++		_flags = __get_cpu_var(lvar).flags;			\
++	} while (0)
++
++static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
++					    unsigned long flags)
++{
++	LL_WARN(!lv->nestcnt);
++	LL_WARN(lv->owner != current);
++	if (--lv->nestcnt)
++		return 0;
++
++	lv->owner = NULL;
++	spin_unlock_irqrestore(&lv->lock, lv->flags);
++	return 1;
++}
++
++#define local_unlock_irqrestore(lvar, flags)				\
++	do {								\
++		if (__local_unlock_irqrestore(&__get_cpu_var(lvar), flags)) \
++			put_local_var(lvar);				\
++	} while (0)
++
++#define local_spin_trylock_irq(lvar, lock)				\
++	({								\
++		int __locked;						\
++		local_lock_irq(lvar);					\
++		__locked = spin_trylock(lock);				\
++		if (!__locked)						\
++			local_unlock_irq(lvar);				\
++		__locked;						\
++	})
++
++#define local_spin_lock_irq(lvar, lock)					\
++	do {								\
++		local_lock_irq(lvar);					\
++		spin_lock(lock);					\
++	} while (0)
++
++#define local_spin_unlock_irq(lvar, lock)				\
++	do {								\
++		spin_unlock(lock);					\
++		local_unlock_irq(lvar);					\
++	} while (0)
++
++#define local_spin_lock_irqsave(lvar, lock, flags)			\
++	do {								\
++		local_lock_irqsave(lvar, flags);			\
++		spin_lock(lock);					\
++	} while (0)
++
++#define local_spin_unlock_irqrestore(lvar, lock, flags)			\
++	do {								\
++		spin_unlock(lock);					\
++		local_unlock_irqrestore(lvar, flags);			\
++	} while (0)
++
++#define get_locked_var(lvar, var)					\
++	(*({								\
++		local_lock(lvar);					\
++		&__get_cpu_var(var);					\
++	}))
++
++#define put_locked_var(lvar, var)		local_unlock(lvar)
++
++#define local_lock_cpu(lvar)						\
++	({								\
++		local_lock(lvar);					\
++		smp_processor_id();					\
++	})
++
++#define local_unlock_cpu(lvar)			local_unlock(lvar)
++
++#else /* PREEMPT_RT_BASE */
++
++#define DEFINE_LOCAL_IRQ_LOCK(lvar)		__typeof__(const int) lvar
++
++static inline void local_irq_lock_init(int lvar) { }
++
++#define local_lock(lvar)			preempt_disable()
++#define local_unlock(lvar)			preempt_enable()
++#define local_lock_irq(lvar)			local_irq_disable()
++#define local_unlock_irq(lvar)			local_irq_enable()
++#define local_lock_irqsave(lvar, flags)		local_irq_save(flags)
++#define local_unlock_irqrestore(lvar, flags)	local_irq_restore(flags)
++
++#define local_spin_trylock_irq(lvar, lock)	spin_trylock_irq(lock)
++#define local_spin_lock_irq(lvar, lock)		spin_lock_irq(lock)
++#define local_spin_unlock_irq(lvar, lock)	spin_unlock_irq(lock)
++#define local_spin_lock_irqsave(lvar, lock, flags)	\
++	spin_lock_irqsave(lock, flags)
++#define local_spin_unlock_irqrestore(lvar, lock, flags)	\
++	spin_unlock_irqrestore(lock, flags)
++
++#define get_locked_var(lvar, var)		get_cpu_var(var)
++#define put_locked_var(lvar, var)		put_cpu_var(var)
++
++#define local_lock_cpu(lvar)			get_cpu()
++#define local_unlock_cpu(lvar)			put_cpu()
++
++#endif
++
++#endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0106-cpu-rt-variants.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0106-cpu-rt-variants.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0106-cpu-rt-variants.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0106-cpu-rt-variants.patch.patch)
@@ -0,0 +1,32 @@
+From 24dfb357f232fbda6d144093f8eac4038287b01b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 17 Jun 2011 15:42:38 +0200
+Subject: [PATCH 106/271] cpu-rt-variants.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/smp.h |    8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/include/linux/smp.h b/include/linux/smp.h
+index 8cc38d3..e6c58d8 100644
+--- a/include/linux/smp.h
++++ b/include/linux/smp.h
+@@ -173,6 +173,14 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
+ #define get_cpu()		({ preempt_disable(); smp_processor_id(); })
+ #define put_cpu()		preempt_enable()
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
++# define get_cpu_light()	get_cpu()
++# define put_cpu_light()	put_cpu()
++#else
++# define get_cpu_light()	({ migrate_disable(); smp_processor_id(); })
++# define put_cpu_light()	migrate_enable()
++#endif
++
+ /*
+  * Callback to arch code if there's nosmp or maxcpus=0 on the
+  * boot command line:
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0107-mm-slab-wrap-functions.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0107-mm-slab-wrap-functions.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0107-mm-slab-wrap-functions.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0107-mm-slab-wrap-functions.patch.patch)
@@ -0,0 +1,454 @@
+From e501e07e51837018820a9b02957ce7e43d902171 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 18 Jun 2011 19:44:43 +0200
+Subject: [PATCH 107/271] mm-slab-wrap-functions.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/slab.c |  152 ++++++++++++++++++++++++++++++++++++++++++-------------------
+ 1 file changed, 104 insertions(+), 48 deletions(-)
+
+diff --git a/mm/slab.c b/mm/slab.c
+index 1fd9983..38575a8 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -116,6 +116,7 @@
+ #include	<linux/kmemcheck.h>
+ #include	<linux/memory.h>
+ #include	<linux/prefetch.h>
++#include	<linux/locallock.h>
+ 
+ #include	<asm/cacheflush.h>
+ #include	<asm/tlbflush.h>
+@@ -722,12 +723,40 @@ static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
+ #endif
+ 
+ static DEFINE_PER_CPU(struct delayed_work, slab_reap_work);
++static DEFINE_LOCAL_IRQ_LOCK(slab_lock);
++
++#ifndef CONFIG_PREEMPT_RT_BASE
++# define slab_on_each_cpu(func, cp)	on_each_cpu(func, cp, 1)
++#else
++/*
++ * execute func() for all CPUs. On PREEMPT_RT we dont actually have
++ * to run on the remote CPUs - we only have to take their CPU-locks.
++ * (This is a rare operation, so cacheline bouncing is not an issue.)
++ */
++static void
++slab_on_each_cpu(void (*func)(void *arg, int this_cpu), void *arg)
++{
++	unsigned int i;
++
++	for_each_online_cpu(i) {
++		spin_lock_irq(&per_cpu(slab_lock, i).lock);
++		func(arg, i);
++		spin_unlock_irq(&per_cpu(slab_lock, i).lock);
++	}
++}
++#endif
+ 
+ static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
+ {
+ 	return cachep->array[smp_processor_id()];
+ }
+ 
++static inline struct array_cache *cpu_cache_get_on_cpu(struct kmem_cache *cachep,
++						       int cpu)
++{
++	return cachep->array[cpu];
++}
++
+ static inline struct kmem_cache *__find_general_cachep(size_t size,
+ 							gfp_t gfpflags)
+ {
+@@ -1065,9 +1094,10 @@ static void reap_alien(struct kmem_cache *cachep, struct kmem_list3 *l3)
+ 	if (l3->alien) {
+ 		struct array_cache *ac = l3->alien[node];
+ 
+-		if (ac && ac->avail && spin_trylock_irq(&ac->lock)) {
++		if (ac && ac->avail &&
++		    local_spin_trylock_irq(slab_lock, &ac->lock)) {
+ 			__drain_alien_cache(cachep, ac, node);
+-			spin_unlock_irq(&ac->lock);
++			local_spin_unlock_irq(slab_lock, &ac->lock);
+ 		}
+ 	}
+ }
+@@ -1082,9 +1112,9 @@ static void drain_alien_cache(struct kmem_cache *cachep,
+ 	for_each_online_node(i) {
+ 		ac = alien[i];
+ 		if (ac) {
+-			spin_lock_irqsave(&ac->lock, flags);
++			local_spin_lock_irqsave(slab_lock, &ac->lock, flags);
+ 			__drain_alien_cache(cachep, ac, i);
+-			spin_unlock_irqrestore(&ac->lock, flags);
++			local_spin_unlock_irqrestore(slab_lock, &ac->lock, flags);
+ 		}
+ 	}
+ }
+@@ -1163,11 +1193,11 @@ static int init_cache_nodelists_node(int node)
+ 			cachep->nodelists[node] = l3;
+ 		}
+ 
+-		spin_lock_irq(&cachep->nodelists[node]->list_lock);
++		local_spin_lock_irq(slab_lock, &cachep->nodelists[node]->list_lock);
+ 		cachep->nodelists[node]->free_limit =
+ 			(1 + nr_cpus_node(node)) *
+ 			cachep->batchcount + cachep->num;
+-		spin_unlock_irq(&cachep->nodelists[node]->list_lock);
++		local_spin_unlock_irq(slab_lock, &cachep->nodelists[node]->list_lock);
+ 	}
+ 	return 0;
+ }
+@@ -1192,7 +1222,7 @@ static void __cpuinit cpuup_canceled(long cpu)
+ 		if (!l3)
+ 			goto free_array_cache;
+ 
+-		spin_lock_irq(&l3->list_lock);
++		local_spin_lock_irq(slab_lock, &l3->list_lock);
+ 
+ 		/* Free limit for this kmem_list3 */
+ 		l3->free_limit -= cachep->batchcount;
+@@ -1200,7 +1230,7 @@ static void __cpuinit cpuup_canceled(long cpu)
+ 			free_block(cachep, nc->entry, nc->avail, node);
+ 
+ 		if (!cpumask_empty(mask)) {
+-			spin_unlock_irq(&l3->list_lock);
++			local_spin_unlock_irq(slab_lock, &l3->list_lock);
+ 			goto free_array_cache;
+ 		}
+ 
+@@ -1214,7 +1244,7 @@ static void __cpuinit cpuup_canceled(long cpu)
+ 		alien = l3->alien;
+ 		l3->alien = NULL;
+ 
+-		spin_unlock_irq(&l3->list_lock);
++		local_spin_unlock_irq(slab_lock, &l3->list_lock);
+ 
+ 		kfree(shared);
+ 		if (alien) {
+@@ -1288,7 +1318,7 @@ static int __cpuinit cpuup_prepare(long cpu)
+ 		l3 = cachep->nodelists[node];
+ 		BUG_ON(!l3);
+ 
+-		spin_lock_irq(&l3->list_lock);
++		local_spin_lock_irq(slab_lock, &l3->list_lock);
+ 		if (!l3->shared) {
+ 			/*
+ 			 * We are serialised from CPU_DEAD or
+@@ -1303,7 +1333,7 @@ static int __cpuinit cpuup_prepare(long cpu)
+ 			alien = NULL;
+ 		}
+ #endif
+-		spin_unlock_irq(&l3->list_lock);
++		local_spin_unlock_irq(slab_lock, &l3->list_lock);
+ 		kfree(shared);
+ 		free_alien_cache(alien);
+ 		if (cachep->flags & SLAB_DEBUG_OBJECTS)
+@@ -1494,6 +1524,8 @@ void __init kmem_cache_init(void)
+ 	if (num_possible_nodes() == 1)
+ 		use_alien_caches = 0;
+ 
++	local_irq_lock_init(slab_lock);
++
+ 	for (i = 0; i < NUM_INIT_LISTS; i++) {
+ 		kmem_list3_init(&initkmem_list3[i]);
+ 		if (i < MAX_NUMNODES)
+@@ -2500,7 +2532,7 @@ EXPORT_SYMBOL(kmem_cache_create);
+ #if DEBUG
+ static void check_irq_off(void)
+ {
+-	BUG_ON(!irqs_disabled());
++	BUG_ON_NONRT(!irqs_disabled());
+ }
+ 
+ static void check_irq_on(void)
+@@ -2535,13 +2567,12 @@ static void drain_array(struct kmem_cache *cachep, struct kmem_list3 *l3,
+ 			struct array_cache *ac,
+ 			int force, int node);
+ 
+-static void do_drain(void *arg)
++static void __do_drain(void *arg, unsigned int cpu)
+ {
+ 	struct kmem_cache *cachep = arg;
+ 	struct array_cache *ac;
+-	int node = numa_mem_id();
++	int node = cpu_to_mem(cpu);
+ 
+-	check_irq_off();
+ 	ac = cpu_cache_get(cachep);
+ 	spin_lock(&cachep->nodelists[node]->list_lock);
+ 	free_block(cachep, ac->entry, ac->avail, node);
+@@ -2549,12 +2580,24 @@ static void do_drain(void *arg)
+ 	ac->avail = 0;
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_BASE
++static void do_drain(void *arg)
++{
++	__do_drain(arg, smp_processor_id());
++}
++#else
++static void do_drain(void *arg, int this_cpu)
++{
++	__do_drain(arg, this_cpu);
++}
++#endif
++
+ static void drain_cpu_caches(struct kmem_cache *cachep)
+ {
+ 	struct kmem_list3 *l3;
+ 	int node;
+ 
+-	on_each_cpu(do_drain, cachep, 1);
++	slab_on_each_cpu(do_drain, cachep);
+ 	check_irq_on();
+ 	for_each_online_node(node) {
+ 		l3 = cachep->nodelists[node];
+@@ -2585,10 +2628,10 @@ static int drain_freelist(struct kmem_cache *cache,
+ 	nr_freed = 0;
+ 	while (nr_freed < tofree && !list_empty(&l3->slabs_free)) {
+ 
+-		spin_lock_irq(&l3->list_lock);
++		local_spin_lock_irq(slab_lock, &l3->list_lock);
+ 		p = l3->slabs_free.prev;
+ 		if (p == &l3->slabs_free) {
+-			spin_unlock_irq(&l3->list_lock);
++			local_spin_unlock_irq(slab_lock, &l3->list_lock);
+ 			goto out;
+ 		}
+ 
+@@ -2602,7 +2645,7 @@ static int drain_freelist(struct kmem_cache *cache,
+ 		 * to the cache.
+ 		 */
+ 		l3->free_objects -= cache->num;
+-		spin_unlock_irq(&l3->list_lock);
++		local_spin_unlock_irq(slab_lock, &l3->list_lock);
+ 		slab_destroy(cache, slabp);
+ 		nr_freed++;
+ 	}
+@@ -2897,7 +2940,7 @@ static int cache_grow(struct kmem_cache *cachep,
+ 	offset *= cachep->colour_off;
+ 
+ 	if (local_flags & __GFP_WAIT)
+-		local_irq_enable();
++		local_unlock_irq(slab_lock);
+ 
+ 	/*
+ 	 * The test for missing atomic flag is performed here, rather than
+@@ -2927,7 +2970,7 @@ static int cache_grow(struct kmem_cache *cachep,
+ 	cache_init_objs(cachep, slabp);
+ 
+ 	if (local_flags & __GFP_WAIT)
+-		local_irq_disable();
++		local_lock_irq(slab_lock);
+ 	check_irq_off();
+ 	spin_lock(&l3->list_lock);
+ 
+@@ -2941,7 +2984,7 @@ opps1:
+ 	kmem_freepages(cachep, objp);
+ failed:
+ 	if (local_flags & __GFP_WAIT)
+-		local_irq_disable();
++		local_lock_irq(slab_lock);
+ 	return 0;
+ }
+ 
+@@ -3333,11 +3376,11 @@ retry:
+ 		 * set and go into memory reserves if necessary.
+ 		 */
+ 		if (local_flags & __GFP_WAIT)
+-			local_irq_enable();
++			local_unlock_irq(slab_lock);
+ 		kmem_flagcheck(cache, flags);
+ 		obj = kmem_getpages(cache, local_flags, numa_mem_id());
+ 		if (local_flags & __GFP_WAIT)
+-			local_irq_disable();
++			local_lock_irq(slab_lock);
+ 		if (obj) {
+ 			/*
+ 			 * Insert into the appropriate per node queues
+@@ -3453,7 +3496,7 @@ __cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
+ 		return NULL;
+ 
+ 	cache_alloc_debugcheck_before(cachep, flags);
+-	local_irq_save(save_flags);
++	local_lock_irqsave(slab_lock, save_flags);
+ 
+ 	if (nodeid == NUMA_NO_NODE)
+ 		nodeid = slab_node;
+@@ -3478,7 +3521,7 @@ __cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
+ 	/* ___cache_alloc_node can fall back to other nodes */
+ 	ptr = ____cache_alloc_node(cachep, flags, nodeid);
+   out:
+-	local_irq_restore(save_flags);
++	local_unlock_irqrestore(slab_lock, save_flags);
+ 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
+ 	kmemleak_alloc_recursive(ptr, obj_size(cachep), 1, cachep->flags,
+ 				 flags);
+@@ -3538,9 +3581,9 @@ __cache_alloc(struct kmem_cache *cachep, gfp_t flags, void *caller)
+ 		return NULL;
+ 
+ 	cache_alloc_debugcheck_before(cachep, flags);
+-	local_irq_save(save_flags);
++	local_lock_irqsave(slab_lock, save_flags);
+ 	objp = __do_cache_alloc(cachep, flags);
+-	local_irq_restore(save_flags);
++	local_unlock_irqrestore(slab_lock, save_flags);
+ 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
+ 	kmemleak_alloc_recursive(objp, obj_size(cachep), 1, cachep->flags,
+ 				 flags);
+@@ -3854,9 +3897,9 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
+ 	debug_check_no_locks_freed(objp, obj_size(cachep));
+ 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
+ 		debug_check_no_obj_freed(objp, obj_size(cachep));
+-	local_irq_save(flags);
++	local_lock_irqsave(slab_lock, flags);
+ 	__cache_free(cachep, objp, __builtin_return_address(0));
+-	local_irq_restore(flags);
++	local_unlock_irqrestore(slab_lock, flags);
+ 
+ 	trace_kmem_cache_free(_RET_IP_, objp);
+ }
+@@ -3884,9 +3927,9 @@ void kfree(const void *objp)
+ 	c = virt_to_cache(objp);
+ 	debug_check_no_locks_freed(objp, obj_size(c));
+ 	debug_check_no_obj_freed(objp, obj_size(c));
+-	local_irq_save(flags);
++	local_lock_irqsave(slab_lock, flags);
+ 	__cache_free(c, (void *)objp, __builtin_return_address(0));
+-	local_irq_restore(flags);
++	local_unlock_irqrestore(slab_lock, flags);
+ }
+ EXPORT_SYMBOL(kfree);
+ 
+@@ -3929,7 +3972,7 @@ static int alloc_kmemlist(struct kmem_cache *cachep, gfp_t gfp)
+ 		if (l3) {
+ 			struct array_cache *shared = l3->shared;
+ 
+-			spin_lock_irq(&l3->list_lock);
++			local_spin_lock_irq(slab_lock, &l3->list_lock);
+ 
+ 			if (shared)
+ 				free_block(cachep, shared->entry,
+@@ -3942,7 +3985,7 @@ static int alloc_kmemlist(struct kmem_cache *cachep, gfp_t gfp)
+ 			}
+ 			l3->free_limit = (1 + nr_cpus_node(node)) *
+ 					cachep->batchcount + cachep->num;
+-			spin_unlock_irq(&l3->list_lock);
++			local_spin_unlock_irq(slab_lock, &l3->list_lock);
+ 			kfree(shared);
+ 			free_alien_cache(new_alien);
+ 			continue;
+@@ -3989,17 +4032,28 @@ struct ccupdate_struct {
+ 	struct array_cache *new[0];
+ };
+ 
+-static void do_ccupdate_local(void *info)
++static void __do_ccupdate_local(void *info, int cpu)
+ {
+ 	struct ccupdate_struct *new = info;
+ 	struct array_cache *old;
+ 
+-	check_irq_off();
+-	old = cpu_cache_get(new->cachep);
++	old = cpu_cache_get_on_cpu(new->cachep, cpu);
++
++	new->cachep->array[cpu] = new->new[cpu];
++	new->new[cpu] = old;
++}
+ 
+-	new->cachep->array[smp_processor_id()] = new->new[smp_processor_id()];
+-	new->new[smp_processor_id()] = old;
++#ifndef CONFIG_PREEMPT_RT_BASE
++static void do_ccupdate_local(void *info)
++{
++	__do_ccupdate_local(info, smp_processor_id());
+ }
++#else
++static void do_ccupdate_local(void *info, int cpu)
++{
++	__do_ccupdate_local(info, cpu);
++}
++#endif
+ 
+ /* Always called with the cache_chain_mutex held */
+ static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
+@@ -4025,7 +4079,7 @@ static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
+ 	}
+ 	new->cachep = cachep;
+ 
+-	on_each_cpu(do_ccupdate_local, (void *)new, 1);
++	slab_on_each_cpu(do_ccupdate_local, (void *)new);
+ 
+ 	check_irq_on();
+ 	cachep->batchcount = batchcount;
+@@ -4036,9 +4090,11 @@ static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
+ 		struct array_cache *ccold = new->new[i];
+ 		if (!ccold)
+ 			continue;
+-		spin_lock_irq(&cachep->nodelists[cpu_to_mem(i)]->list_lock);
++		local_spin_lock_irq(slab_lock,
++				    &cachep->nodelists[cpu_to_mem(i)]->list_lock);
+ 		free_block(cachep, ccold->entry, ccold->avail, cpu_to_mem(i));
+-		spin_unlock_irq(&cachep->nodelists[cpu_to_mem(i)]->list_lock);
++		local_spin_unlock_irq(slab_lock,
++				      &cachep->nodelists[cpu_to_mem(i)]->list_lock);
+ 		kfree(ccold);
+ 	}
+ 	kfree(new);
+@@ -4114,7 +4170,7 @@ static void drain_array(struct kmem_cache *cachep, struct kmem_list3 *l3,
+ 	if (ac->touched && !force) {
+ 		ac->touched = 0;
+ 	} else {
+-		spin_lock_irq(&l3->list_lock);
++		local_spin_lock_irq(slab_lock, &l3->list_lock);
+ 		if (ac->avail) {
+ 			tofree = force ? ac->avail : (ac->limit + 4) / 5;
+ 			if (tofree > ac->avail)
+@@ -4124,7 +4180,7 @@ static void drain_array(struct kmem_cache *cachep, struct kmem_list3 *l3,
+ 			memmove(ac->entry, &(ac->entry[tofree]),
+ 				sizeof(void *) * ac->avail);
+ 		}
+-		spin_unlock_irq(&l3->list_lock);
++		local_spin_unlock_irq(slab_lock, &l3->list_lock);
+ 	}
+ }
+ 
+@@ -4263,7 +4319,7 @@ static int s_show(struct seq_file *m, void *p)
+ 			continue;
+ 
+ 		check_irq_on();
+-		spin_lock_irq(&l3->list_lock);
++		local_spin_lock_irq(slab_lock, &l3->list_lock);
+ 
+ 		list_for_each_entry(slabp, &l3->slabs_full, list) {
+ 			if (slabp->inuse != cachep->num && !error)
+@@ -4288,7 +4344,7 @@ static int s_show(struct seq_file *m, void *p)
+ 		if (l3->shared)
+ 			shared_avail += l3->shared->avail;
+ 
+-		spin_unlock_irq(&l3->list_lock);
++		local_spin_unlock_irq(slab_lock, &l3->list_lock);
+ 	}
+ 	num_slabs += active_slabs;
+ 	num_objs = num_slabs * cachep->num;
+@@ -4517,13 +4573,13 @@ static int leaks_show(struct seq_file *m, void *p)
+ 			continue;
+ 
+ 		check_irq_on();
+-		spin_lock_irq(&l3->list_lock);
++		local_spin_lock_irq(slab_lock, &l3->list_lock);
+ 
+ 		list_for_each_entry(slabp, &l3->slabs_full, list)
+ 			handle_slab(n, cachep, slabp);
+ 		list_for_each_entry(slabp, &l3->slabs_partial, list)
+ 			handle_slab(n, cachep, slabp);
+-		spin_unlock_irq(&l3->list_lock);
++		local_spin_unlock_irq(slab_lock, &l3->list_lock);
+ 	}
+ 	name = cachep->name;
+ 	if (n[0] == n[1]) {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0108-slab-Fix-__do_drain-to-use-the-right-array-cache.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0108-slab-Fix-__do_drain-to-use-the-right-array-cache.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0108-slab-Fix-__do_drain-to-use-the-right-array-cache.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0108-slab-Fix-__do_drain-to-use-the-right-array-cache.patch)
@@ -0,0 +1,56 @@
+From 34e1a29c704b0749b6394a20470c668be32be491 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Tue, 11 Oct 2011 23:56:23 -0400
+Subject: [PATCH 108/271] slab: Fix __do_drain to use the right array cache
+
+The array cache in __do_drain() was using the cpu_cache_get() function
+which uses smp_processor_id() to get the proper array. On mainline, this
+is fine as __do_drain() is called by for_each_cpu() which runs
+__do_drain() on the CPU it is processing. In RT locks are used instead
+and __do_drain() is only called from a single CPU. This can cause the
+accounting to be off and trigger the following bug:
+
+slab error in kmem_cache_destroy(): cache `nfs_write_data': Can't free all objects
+Pid: 2905, comm: rmmod Not tainted 3.0.6-test-rt17+ #78
+Call Trace:
+ [<ffffffff810fb623>] kmem_cache_destroy+0xa0/0xdf
+ [<ffffffffa03aaffb>] nfs_destroy_writepagecache+0x49/0x4e [nfs]
+ [<ffffffffa03c0fe0>] exit_nfs_fs+0xe/0x46 [nfs]
+ [<ffffffff8107af09>] sys_delete_module+0x1ba/0x22c
+ [<ffffffff8109429d>] ? audit_syscall_entry+0x11c/0x148
+ [<ffffffff814b6442>] system_call_fastpath+0x16/0x1b
+
+This can be easily triggered by a simple while loop:
+
+# while :; do modprobe nfs; rmmod nfs; done
+
+The proper function to use is cpu_cache_get_on_cpu(). It works for both
+RT and non-RT as the non-RT passes in smp_processor_id() into
+__do_drain().
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Cc: Luis Claudio R. Goncalves <lgoncalv at redhat.com>
+Cc: Clark Williams <clark at redhat.com>
+Cc: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/r/1318391783.13262.11.camel@gandalf.stny.rr.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/slab.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/mm/slab.c b/mm/slab.c
+index 38575a8..5b63148 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -2573,7 +2573,7 @@ static void __do_drain(void *arg, unsigned int cpu)
+ 	struct array_cache *ac;
+ 	int node = cpu_to_mem(cpu);
+ 
+-	ac = cpu_cache_get(cachep);
++	ac = cpu_cache_get_on_cpu(cachep, cpu);
+ 	spin_lock(&cachep->nodelists[node]->list_lock);
+ 	free_block(cachep, ac->entry, ac->avail, node);
+ 	spin_unlock(&cachep->nodelists[node]->list_lock);
+-- 
+1.7.10
+

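The one-line fix above hinges on the difference between indexing the per-CPU array cache by the caller's CPU versus by an explicitly passed CPU id. A minimal userspace sketch of that distinction (the struct layout and `current_cpu` stand-in are simplified illustrations, not the real kernel definitions):

```c
#define NR_CPUS 4

struct array_cache { int avail; };

struct kmem_cache {
	struct array_cache *array[NR_CPUS];	/* one array cache per CPU */
};

/* Stand-in for smp_processor_id(): the CPU the caller happens to be
 * running on, which under RT need not be the CPU being drained. */
static int current_cpu = 0;

static struct array_cache caches[NR_CPUS];
static struct kmem_cache demo_cache = {
	.array = { &caches[0], &caches[1], &caches[2], &caches[3] }
};

/* Mirrors cpu_cache_get(): always indexes by the *calling* CPU. */
static struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
{
	return cachep->array[current_cpu];
}

/* Mirrors cpu_cache_get_on_cpu(): indexes by the CPU passed in, so a
 * drain initiated on CPU 0 still reaches CPU 2's array cache. */
static struct array_cache *cpu_cache_get_on_cpu(struct kmem_cache *cachep,
						int cpu)
{
	return cachep->array[cpu];
}
```

On mainline the two are equivalent inside `__do_drain()` because the function runs on the CPU it drains; under RT only the explicit-cpu variant touches the right cache.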
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0109-mm-More-lock-breaks-in-slab.c.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0109-mm-More-lock-breaks-in-slab.c.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0109-mm-More-lock-breaks-in-slab.c.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0109-mm-More-lock-breaks-in-slab.c.patch)
@@ -0,0 +1,254 @@
+From 78d37c46494f97844cfa0067f56f2f8c7a2a3bb9 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Fri, 3 Jul 2009 08:44:43 -0500
+Subject: [PATCH 109/271] mm: More lock breaks in slab.c
+
+Handle __free_pages outside of the locked regions. This reduces the
+lock contention on the percpu slab locks in -rt significantly.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/slab.c |   90 ++++++++++++++++++++++++++++++++++++++++++++++---------------
+ 1 file changed, 68 insertions(+), 22 deletions(-)
+
+diff --git a/mm/slab.c b/mm/slab.c
+index 5b63148..5f0c5ef 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -723,6 +723,7 @@ static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
+ #endif
+ 
+ static DEFINE_PER_CPU(struct delayed_work, slab_reap_work);
++static DEFINE_PER_CPU(struct list_head, slab_free_list);
+ static DEFINE_LOCAL_IRQ_LOCK(slab_lock);
+ 
+ #ifndef CONFIG_PREEMPT_RT_BASE
+@@ -738,14 +739,39 @@ slab_on_each_cpu(void (*func)(void *arg, int this_cpu), void *arg)
+ {
+ 	unsigned int i;
+ 
+-	for_each_online_cpu(i) {
+-		spin_lock_irq(&per_cpu(slab_lock, i).lock);
++	for_each_online_cpu(i)
+ 		func(arg, i);
+-		spin_unlock_irq(&per_cpu(slab_lock, i).lock);
+-	}
+ }
+ #endif
+ 
++static void free_delayed(struct list_head *h)
++{
++	while(!list_empty(h)) {
++		struct page *page = list_first_entry(h, struct page, lru);
++
++		list_del(&page->lru);
++		__free_pages(page, page->index);
++	}
++}
++
++static void unlock_l3_and_free_delayed(spinlock_t *list_lock)
++{
++	LIST_HEAD(tmp);
++
++	list_splice_init(&__get_cpu_var(slab_free_list), &tmp);
++	local_spin_unlock_irq(slab_lock, list_lock);
++	free_delayed(&tmp);
++}
++
++static void unlock_slab_and_free_delayed(unsigned long flags)
++{
++	LIST_HEAD(tmp);
++
++	list_splice_init(&__get_cpu_var(slab_free_list), &tmp);
++	local_unlock_irqrestore(slab_lock, flags);
++	free_delayed(&tmp);
++}
++
+ static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
+ {
+ 	return cachep->array[smp_processor_id()];
+@@ -1230,7 +1256,7 @@ static void __cpuinit cpuup_canceled(long cpu)
+ 			free_block(cachep, nc->entry, nc->avail, node);
+ 
+ 		if (!cpumask_empty(mask)) {
+-			local_spin_unlock_irq(slab_lock, &l3->list_lock);
++			unlock_l3_and_free_delayed(&l3->list_lock);
+ 			goto free_array_cache;
+ 		}
+ 
+@@ -1244,7 +1270,7 @@ static void __cpuinit cpuup_canceled(long cpu)
+ 		alien = l3->alien;
+ 		l3->alien = NULL;
+ 
+-		local_spin_unlock_irq(slab_lock, &l3->list_lock);
++		unlock_l3_and_free_delayed(&l3->list_lock);
+ 
+ 		kfree(shared);
+ 		if (alien) {
+@@ -1525,6 +1551,8 @@ void __init kmem_cache_init(void)
+ 		use_alien_caches = 0;
+ 
+ 	local_irq_lock_init(slab_lock);
++	for_each_possible_cpu(i)
++		INIT_LIST_HEAD(&per_cpu(slab_free_list, i));
+ 
+ 	for (i = 0; i < NUM_INIT_LISTS; i++) {
+ 		kmem_list3_init(&initkmem_list3[i]);
+@@ -1803,12 +1831,14 @@ static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid)
+ /*
+  * Interface to system's page release.
+  */
+-static void kmem_freepages(struct kmem_cache *cachep, void *addr)
++static void kmem_freepages(struct kmem_cache *cachep, void *addr, bool delayed)
+ {
+ 	unsigned long i = (1 << cachep->gfporder);
+-	struct page *page = virt_to_page(addr);
++	struct page *page, *basepage = virt_to_page(addr);
+ 	const unsigned long nr_freed = i;
+ 
++	page = basepage;
++
+ 	kmemcheck_free_shadow(page, cachep->gfporder);
+ 
+ 	if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
+@@ -1824,7 +1854,13 @@ static void kmem_freepages(struct kmem_cache *cachep, void *addr)
+ 	}
+ 	if (current->reclaim_state)
+ 		current->reclaim_state->reclaimed_slab += nr_freed;
+-	free_pages((unsigned long)addr, cachep->gfporder);
++
++	if (!delayed) {
++		free_pages((unsigned long)addr, cachep->gfporder);
++	} else {
++		basepage->index = cachep->gfporder;
++		list_add(&basepage->lru, &__get_cpu_var(slab_free_list));
++	}
+ }
+ 
+ static void kmem_rcu_free(struct rcu_head *head)
+@@ -1832,7 +1868,7 @@ static void kmem_rcu_free(struct rcu_head *head)
+ 	struct slab_rcu *slab_rcu = (struct slab_rcu *)head;
+ 	struct kmem_cache *cachep = slab_rcu->cachep;
+ 
+-	kmem_freepages(cachep, slab_rcu->addr);
++	kmem_freepages(cachep, slab_rcu->addr, false);
+ 	if (OFF_SLAB(cachep))
+ 		kmem_cache_free(cachep->slabp_cache, slab_rcu);
+ }
+@@ -2051,7 +2087,8 @@ static void slab_destroy_debugcheck(struct kmem_cache *cachep, struct slab *slab
+  * Before calling the slab must have been unlinked from the cache.  The
+  * cache-lock is not held/needed.
+  */
+-static void slab_destroy(struct kmem_cache *cachep, struct slab *slabp)
++static void slab_destroy(struct kmem_cache *cachep, struct slab *slabp,
++			 bool delayed)
+ {
+ 	void *addr = slabp->s_mem - slabp->colouroff;
+ 
+@@ -2064,7 +2101,7 @@ static void slab_destroy(struct kmem_cache *cachep, struct slab *slabp)
+ 		slab_rcu->addr = addr;
+ 		call_rcu(&slab_rcu->head, kmem_rcu_free);
+ 	} else {
+-		kmem_freepages(cachep, addr);
++		kmem_freepages(cachep, addr, delayed);
+ 		if (OFF_SLAB(cachep))
+ 			kmem_cache_free(cachep->slabp_cache, slabp);
+ 	}
+@@ -2586,9 +2623,15 @@ static void do_drain(void *arg)
+ 	__do_drain(arg, smp_processor_id());
+ }
+ #else
+-static void do_drain(void *arg, int this_cpu)
++static void do_drain(void *arg, int cpu)
+ {
+-	__do_drain(arg, this_cpu);
++	LIST_HEAD(tmp);
++
++	spin_lock_irq(&per_cpu(slab_lock, cpu).lock);
++	__do_drain(arg, cpu);
++	list_splice_init(&per_cpu(slab_free_list, cpu), &tmp);
++	spin_unlock_irq(&per_cpu(slab_lock, cpu).lock);
++	free_delayed(&tmp);
+ }
+ #endif
+ 
+@@ -2646,7 +2689,7 @@ static int drain_freelist(struct kmem_cache *cache,
+ 		 */
+ 		l3->free_objects -= cache->num;
+ 		local_spin_unlock_irq(slab_lock, &l3->list_lock);
+-		slab_destroy(cache, slabp);
++		slab_destroy(cache, slabp, false);
+ 		nr_freed++;
+ 	}
+ out:
+@@ -2981,7 +3024,7 @@ static int cache_grow(struct kmem_cache *cachep,
+ 	spin_unlock(&l3->list_lock);
+ 	return 1;
+ opps1:
+-	kmem_freepages(cachep, objp);
++	kmem_freepages(cachep, objp, false);
+ failed:
+ 	if (local_flags & __GFP_WAIT)
+ 		local_lock_irq(slab_lock);
+@@ -3631,7 +3674,7 @@ static void free_block(struct kmem_cache *cachep, void **objpp, int nr_objects,
+ 				 * a different cache, refer to comments before
+ 				 * alloc_slabmgmt.
+ 				 */
+-				slab_destroy(cachep, slabp);
++				slab_destroy(cachep, slabp, true);
+ 			} else {
+ 				list_add(&slabp->list, &l3->slabs_free);
+ 			}
+@@ -3899,7 +3942,7 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
+ 		debug_check_no_obj_freed(objp, obj_size(cachep));
+ 	local_lock_irqsave(slab_lock, flags);
+ 	__cache_free(cachep, objp, __builtin_return_address(0));
+-	local_unlock_irqrestore(slab_lock, flags);
++	unlock_slab_and_free_delayed(flags);
+ 
+ 	trace_kmem_cache_free(_RET_IP_, objp);
+ }
+@@ -3929,7 +3972,7 @@ void kfree(const void *objp)
+ 	debug_check_no_obj_freed(objp, obj_size(c));
+ 	local_lock_irqsave(slab_lock, flags);
+ 	__cache_free(c, (void *)objp, __builtin_return_address(0));
+-	local_unlock_irqrestore(slab_lock, flags);
++	unlock_slab_and_free_delayed(flags);
+ }
+ EXPORT_SYMBOL(kfree);
+ 
+@@ -3985,7 +4028,8 @@ static int alloc_kmemlist(struct kmem_cache *cachep, gfp_t gfp)
+ 			}
+ 			l3->free_limit = (1 + nr_cpus_node(node)) *
+ 					cachep->batchcount + cachep->num;
+-			local_spin_unlock_irq(slab_lock, &l3->list_lock);
++			unlock_l3_and_free_delayed(&l3->list_lock);
++
+ 			kfree(shared);
+ 			free_alien_cache(new_alien);
+ 			continue;
+@@ -4051,7 +4095,9 @@ static void do_ccupdate_local(void *info)
+ #else
+ static void do_ccupdate_local(void *info, int cpu)
+ {
++	spin_lock_irq(&per_cpu(slab_lock, cpu).lock);
+ 	__do_ccupdate_local(info, cpu);
++	spin_unlock_irq(&per_cpu(slab_lock, cpu).lock);
+ }
+ #endif
+ 
+@@ -4093,8 +4139,8 @@ static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
+ 		local_spin_lock_irq(slab_lock,
+ 				    &cachep->nodelists[cpu_to_mem(i)]->list_lock);
+ 		free_block(cachep, ccold->entry, ccold->avail, cpu_to_mem(i));
+-		local_spin_unlock_irq(slab_lock,
+-				      &cachep->nodelists[cpu_to_mem(i)]->list_lock);
++
++		unlock_l3_and_free_delayed(&cachep->nodelists[cpu_to_mem(i)]->list_lock);
+ 		kfree(ccold);
+ 	}
+ 	kfree(new);
+-- 
+1.7.10
+

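The delayed-free machinery this patch adds (`slab_free_list`, `free_delayed()`, `unlock_l3_and_free_delayed()`) boils down to one pattern: queue pages on a per-CPU list while the slab lock is held, splice that list onto a private one at unlock time, and do the actual freeing after the lock is dropped. A userspace sketch of that pattern (the list helpers are simplified stand-ins for `<linux/list.h>`; the names mirror the patch, but this is not kernel code):

```c
struct list_head { struct list_head *next, *prev; };

static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *n, struct list_head *h)
{
	n->next = h->next; n->prev = h;
	h->next->prev = n; h->next = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

/* Simplified list_splice_init(): move everything on src to dst. */
static void list_splice_init(struct list_head *src, struct list_head *dst)
{
	while (!list_empty(src)) {
		struct list_head *n = src->next;
		list_del(n);
		list_add(n, dst);
	}
}

struct fake_page { struct list_head lru; };

/* Per-CPU deferred-free list (slab_free_list in the patch). */
static struct list_head slab_free_list = { &slab_free_list, &slab_free_list };

/* Called with the slab lock held: queue the page instead of freeing. */
static void kmem_freepages_delayed(struct fake_page *page)
{
	list_add(&page->lru, &slab_free_list);
}

/* Called after the lock is dropped: __free_pages() would run here. */
static int free_delayed(struct list_head *h)
{
	int n = 0;

	while (!list_empty(h)) {
		list_del(h->next);
		n++;
	}
	return n;
}

/* Shape of unlock_slab_and_free_delayed(): splice under the lock,
 * free outside it. */
static int unlock_and_free_delayed(void)
{
	struct list_head tmp = { &tmp, &tmp };
	/* ... slab lock held here ... */
	list_splice_init(&slab_free_list, &tmp);
	/* ... slab lock dropped here ... */
	return free_delayed(&tmp);
}

static int demo(void)
{
	static struct fake_page p1, p2;

	kmem_freepages_delayed(&p1);
	kmem_freepages_delayed(&p2);
	return unlock_and_free_delayed();
}
```

Moving `__free_pages()` out of the locked region is what shrinks the hold time of the percpu slab locks that the changelog refers to.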
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0110-mm-page_alloc-rt-friendly-per-cpu-pages.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0110-mm-page_alloc-rt-friendly-per-cpu-pages.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0110-mm-page_alloc-rt-friendly-per-cpu-pages.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0110-mm-page_alloc-rt-friendly-per-cpu-pages.patch)
@@ -0,0 +1,199 @@
+From 9f4c4aafe685d907b24ae3c33018ce4268040e1a Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:37 -0500
+Subject: [PATCH 110/271] mm: page_alloc: rt-friendly per-cpu pages
+
+rt-friendly per-cpu pages: convert the irqs-off per-cpu locking
+method into a preemptible, explicit-per-cpu-locks method.
+
+Contains fixes from:
+	 Peter Zijlstra <a.p.zijlstra at chello.nl>
+	 Thomas Gleixner <tglx at linutronix.de>
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/page_alloc.c |   55 +++++++++++++++++++++++++++++++++++++++----------------
+ 1 file changed, 39 insertions(+), 16 deletions(-)
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 3344154..27865c9 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -57,6 +57,7 @@
+ #include <linux/ftrace_event.h>
+ #include <linux/memcontrol.h>
+ #include <linux/prefetch.h>
++#include <linux/locallock.h>
+ 
+ #include <asm/tlbflush.h>
+ #include <asm/div64.h>
+@@ -222,6 +223,18 @@ EXPORT_SYMBOL(nr_node_ids);
+ EXPORT_SYMBOL(nr_online_nodes);
+ #endif
+ 
++static DEFINE_LOCAL_IRQ_LOCK(pa_lock);
++
++#ifdef CONFIG_PREEMPT_RT_BASE
++# define cpu_lock_irqsave(cpu, flags)		\
++	spin_lock_irqsave(&per_cpu(pa_lock, cpu).lock, flags)
++# define cpu_unlock_irqrestore(cpu, flags)		\
++	spin_unlock_irqrestore(&per_cpu(pa_lock, cpu).lock, flags)
++#else
++# define cpu_lock_irqsave(cpu, flags)		local_irq_save(flags)
++# define cpu_unlock_irqrestore(cpu, flags)	local_irq_restore(flags)
++#endif
++
+ int page_group_by_mobility_disabled __read_mostly;
+ 
+ static void set_pageblock_migratetype(struct page *page, int migratetype)
+@@ -683,13 +696,13 @@ static void __free_pages_ok(struct page *page, unsigned int order)
+ 	if (!free_pages_prepare(page, order))
+ 		return;
+ 
+-	local_irq_save(flags);
++	local_lock_irqsave(pa_lock, flags);
+ 	if (unlikely(wasMlocked))
+ 		free_page_mlock(page);
+ 	__count_vm_events(PGFREE, 1 << order);
+ 	free_one_page(page_zone(page), page, order,
+ 					get_pageblock_migratetype(page));
+-	local_irq_restore(flags);
++	local_unlock_irqrestore(pa_lock, flags);
+ }
+ 
+ /*
+@@ -1067,14 +1080,14 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
+ 	unsigned long flags;
+ 	int to_drain;
+ 
+-	local_irq_save(flags);
++	local_lock_irqsave(pa_lock, flags);
+ 	if (pcp->count >= pcp->batch)
+ 		to_drain = pcp->batch;
+ 	else
+ 		to_drain = pcp->count;
+ 	free_pcppages_bulk(zone, to_drain, pcp);
+ 	pcp->count -= to_drain;
+-	local_irq_restore(flags);
++	local_unlock_irqrestore(pa_lock, flags);
+ }
+ #endif
+ 
+@@ -1094,7 +1107,7 @@ static void drain_pages(unsigned int cpu)
+ 		struct per_cpu_pageset *pset;
+ 		struct per_cpu_pages *pcp;
+ 
+-		local_irq_save(flags);
++		cpu_lock_irqsave(cpu, flags);
+ 		pset = per_cpu_ptr(zone->pageset, cpu);
+ 
+ 		pcp = &pset->pcp;
+@@ -1102,7 +1115,7 @@ static void drain_pages(unsigned int cpu)
+ 			free_pcppages_bulk(zone, pcp->count, pcp);
+ 			pcp->count = 0;
+ 		}
+-		local_irq_restore(flags);
++		cpu_unlock_irqrestore(cpu, flags);
+ 	}
+ }
+ 
+@@ -1119,7 +1132,14 @@ void drain_local_pages(void *arg)
+  */
+ void drain_all_pages(void)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	on_each_cpu(drain_local_pages, NULL, 1);
++#else
++	int i;
++
++	for_each_online_cpu(i)
++		drain_pages(i);
++#endif
+ }
+ 
+ #ifdef CONFIG_HIBERNATION
+@@ -1175,7 +1195,7 @@ void free_hot_cold_page(struct page *page, int cold)
+ 
+ 	migratetype = get_pageblock_migratetype(page);
+ 	set_page_private(page, migratetype);
+-	local_irq_save(flags);
++	local_lock_irqsave(pa_lock, flags);
+ 	if (unlikely(wasMlocked))
+ 		free_page_mlock(page);
+ 	__count_vm_event(PGFREE);
+@@ -1207,7 +1227,7 @@ void free_hot_cold_page(struct page *page, int cold)
+ 	}
+ 
+ out:
+-	local_irq_restore(flags);
++	local_unlock_irqrestore(pa_lock, flags);
+ }
+ 
+ /*
+@@ -1302,7 +1322,7 @@ again:
+ 		struct per_cpu_pages *pcp;
+ 		struct list_head *list;
+ 
+-		local_irq_save(flags);
++		local_lock_irqsave(pa_lock, flags);
+ 		pcp = &this_cpu_ptr(zone->pageset)->pcp;
+ 		list = &pcp->lists[migratetype];
+ 		if (list_empty(list)) {
+@@ -1334,17 +1354,19 @@ again:
+ 			 */
+ 			WARN_ON_ONCE(order > 1);
+ 		}
+-		spin_lock_irqsave(&zone->lock, flags);
++		local_spin_lock_irqsave(pa_lock, &zone->lock, flags);
+ 		page = __rmqueue(zone, order, migratetype);
+-		spin_unlock(&zone->lock);
+-		if (!page)
++		if (!page) {
++			spin_unlock(&zone->lock);
+ 			goto failed;
++		}
+ 		__mod_zone_page_state(zone, NR_FREE_PAGES, -(1 << order));
++		spin_unlock(&zone->lock);
+ 	}
+ 
+ 	__count_zone_vm_events(PGALLOC, zone, 1 << order);
+ 	zone_statistics(preferred_zone, zone, gfp_flags);
+-	local_irq_restore(flags);
++	local_unlock_irqrestore(pa_lock, flags);
+ 
+ 	VM_BUG_ON(bad_range(zone, page));
+ 	if (prep_new_page(page, order, gfp_flags))
+@@ -1352,7 +1374,7 @@ again:
+ 	return page;
+ 
+ failed:
+-	local_irq_restore(flags);
++	local_unlock_irqrestore(pa_lock, flags);
+ 	return NULL;
+ }
+ 
+@@ -3684,10 +3706,10 @@ static int __zone_pcp_update(void *data)
+ 		pset = per_cpu_ptr(zone->pageset, cpu);
+ 		pcp = &pset->pcp;
+ 
+-		local_irq_save(flags);
++		cpu_lock_irqsave(cpu, flags);
+ 		free_pcppages_bulk(zone, pcp->count, pcp);
+ 		setup_pageset(pset, batch);
+-		local_irq_restore(flags);
++		cpu_unlock_irqrestore(cpu, flags);
+ 	}
+ 	return 0;
+ }
+@@ -5053,6 +5075,7 @@ static int page_alloc_cpu_notify(struct notifier_block *self,
+ void __init page_alloc_init(void)
+ {
+ 	hotcpu_notifier(page_alloc_cpu_notify, 0);
++	local_irq_lock_init(pa_lock);
+ }
+ 
+ /*
+-- 
+1.7.10
+

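The `cpu_lock_irqsave()` macros introduced above make the exclusion explicit: on RT each CPU's per-cpu page set is guarded by its own lock, so `drain_all_pages()` can drain remote CPUs from a plain loop instead of firing `on_each_cpu()` IPIs. A hedged userspace analogue, with pthread mutexes standing in for the local IRQ locks (the data layout is invented for illustration):

```c
#include <pthread.h>

#define NR_CPUS 4

/* One lock per CPU, analogous to DEFINE_LOCAL_IRQ_LOCK(pa_lock). */
static pthread_mutex_t pa_lock[NR_CPUS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

/* cpu_lock_irqsave(cpu, flags) on RT: take that CPU's lock by index,
 * where non-RT merely does local_irq_save() on the current CPU. */
static void cpu_lock(int cpu)   { pthread_mutex_lock(&pa_lock[cpu]); }
static void cpu_unlock(int cpu) { pthread_mutex_unlock(&pa_lock[cpu]); }

static int pcp_count[NR_CPUS];	/* pages on each CPU's pcp lists */

/* drain_pages(cpu): any CPU may drain any other CPU's pages, because
 * the per-CPU lock, not disabled interrupts, provides the exclusion. */
static int drain_pages(int cpu)
{
	int drained;

	cpu_lock(cpu);
	drained = pcp_count[cpu];
	pcp_count[cpu] = 0;
	cpu_unlock(cpu);
	return drained;
}

/* drain_all_pages() under PREEMPT_RT_BASE: a plain loop over CPUs. */
static int drain_all_pages(void)
{
	int cpu, total = 0;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		total += drain_pages(cpu);
	return total;
}

static int demo(void)
{
	pcp_count[0] = 3;
	pcp_count[2] = 5;
	return drain_all_pages();
}
```

This is also why the patch reorders `__mod_zone_page_state()` before the zone unlock in the allocation path: with preemptible locks, the statistics update must stay inside the protected section.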
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0111-mm-page_alloc-reduce-lock-sections-further.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0111-mm-page_alloc-reduce-lock-sections-further.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0111-mm-page_alloc-reduce-lock-sections-further.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0111-mm-page_alloc-reduce-lock-sections-further.patch)
@@ -0,0 +1,196 @@
+From 8f8938be20b95052f5dfe2372f8afd8e621cf0af Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Fri, 3 Jul 2009 08:44:37 -0500
+Subject: [PATCH 111/271] mm: page_alloc reduce lock sections further
+
+Split out the pages which are to be freed into a separate list and
+call free_pages_bulk() outside of the percpu page allocator locks.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/page_alloc.c |   77 +++++++++++++++++++++++++++++++++++++++++--------------
+ 1 file changed, 58 insertions(+), 19 deletions(-)
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 27865c9..5124fb0 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -594,7 +594,7 @@ static inline int free_pages_check(struct page *page)
+ }
+ 
+ /*
+- * Frees a number of pages from the PCP lists
++ * Frees a number of pages which have been collected from the pcp lists.
+  * Assumes all pages on list are in same zone, and of same order.
+  * count is the number of pages to free.
+  *
+@@ -605,16 +605,42 @@ static inline int free_pages_check(struct page *page)
+  * pinned" detection logic.
+  */
+ static void free_pcppages_bulk(struct zone *zone, int count,
+-					struct per_cpu_pages *pcp)
++			       struct list_head *list)
+ {
+-	int migratetype = 0;
+-	int batch_free = 0;
+ 	int to_free = count;
++	unsigned long flags;
+ 
+-	spin_lock(&zone->lock);
++	spin_lock_irqsave(&zone->lock, flags);
+ 	zone->all_unreclaimable = 0;
+ 	zone->pages_scanned = 0;
+ 
++	while (!list_empty(list)) {
++		struct page *page = list_first_entry(list, struct page, lru);
++
++		/* must delete as __free_one_page list manipulates */
++		list_del(&page->lru);
++		/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
++		__free_one_page(page, zone, 0, page_private(page));
++		trace_mm_page_pcpu_drain(page, 0, page_private(page));
++		to_free--;
++	}
++	WARN_ON(to_free != 0);
++	__mod_zone_page_state(zone, NR_FREE_PAGES, count);
++	spin_unlock_irqrestore(&zone->lock, flags);
++}
++
++/*
++ * Moves a number of pages from the PCP lists to free list which
++ * is freed outside of the locked region.
++ *
++ * Assumes all pages on list are in same zone, and of same order.
++ * count is the number of pages to free.
++ */
++static void isolate_pcp_pages(int to_free, struct per_cpu_pages *src,
++			      struct list_head *dst)
++{
++	int migratetype = 0, batch_free = 0;
++
+ 	while (to_free) {
+ 		struct page *page;
+ 		struct list_head *list;
+@@ -630,7 +656,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
+ 			batch_free++;
+ 			if (++migratetype == MIGRATE_PCPTYPES)
+ 				migratetype = 0;
+-			list = &pcp->lists[migratetype];
++			list = &src->lists[migratetype];
+ 		} while (list_empty(list));
+ 
+ 		/* This is the only non-empty list. Free them all. */
+@@ -639,27 +665,24 @@ static void free_pcppages_bulk(struct zone *zone, int count,
+ 
+ 		do {
+ 			page = list_last_entry(list, struct page, lru);
+-			/* must delete as __free_one_page list manipulates */
+ 			list_del(&page->lru);
+-			/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
+-			__free_one_page(page, zone, 0, page_private(page));
+-			trace_mm_page_pcpu_drain(page, 0, page_private(page));
++			list_add(&page->lru, dst);
+ 		} while (--to_free && --batch_free && !list_empty(list));
+ 	}
+-	__mod_zone_page_state(zone, NR_FREE_PAGES, count);
+-	spin_unlock(&zone->lock);
+ }
+ 
+ static void free_one_page(struct zone *zone, struct page *page, int order,
+ 				int migratetype)
+ {
+-	spin_lock(&zone->lock);
++	unsigned long flags;
++
++	spin_lock_irqsave(&zone->lock, flags);
+ 	zone->all_unreclaimable = 0;
+ 	zone->pages_scanned = 0;
+ 
+ 	__free_one_page(page, zone, order, migratetype);
+ 	__mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
+-	spin_unlock(&zone->lock);
++	spin_unlock_irqrestore(&zone->lock, flags);
+ }
+ 
+ static bool free_pages_prepare(struct page *page, unsigned int order)
+@@ -1078,6 +1101,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
+ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
+ {
+ 	unsigned long flags;
++	LIST_HEAD(dst);
+ 	int to_drain;
+ 
+ 	local_lock_irqsave(pa_lock, flags);
+@@ -1085,9 +1109,10 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
+ 		to_drain = pcp->batch;
+ 	else
+ 		to_drain = pcp->count;
+-	free_pcppages_bulk(zone, to_drain, pcp);
++	isolate_pcp_pages(to_drain, pcp, &dst);
+ 	pcp->count -= to_drain;
+ 	local_unlock_irqrestore(pa_lock, flags);
++	free_pcppages_bulk(zone, to_drain, &dst);
+ }
+ #endif
+ 
+@@ -1106,16 +1131,21 @@ static void drain_pages(unsigned int cpu)
+ 	for_each_populated_zone(zone) {
+ 		struct per_cpu_pageset *pset;
+ 		struct per_cpu_pages *pcp;
++		LIST_HEAD(dst);
++		int count;
+ 
+ 		cpu_lock_irqsave(cpu, flags);
+ 		pset = per_cpu_ptr(zone->pageset, cpu);
+ 
+ 		pcp = &pset->pcp;
+-		if (pcp->count) {
+-			free_pcppages_bulk(zone, pcp->count, pcp);
++		count = pcp->count;
++		if (count) {
++			isolate_pcp_pages(count, pcp, &dst);
+ 			pcp->count = 0;
+ 		}
+ 		cpu_unlock_irqrestore(cpu, flags);
++		if (count)
++			free_pcppages_bulk(zone, count, &dst);
+ 	}
+ }
+ 
+@@ -1222,8 +1252,15 @@ void free_hot_cold_page(struct page *page, int cold)
+ 		list_add(&page->lru, &pcp->lists[migratetype]);
+ 	pcp->count++;
+ 	if (pcp->count >= pcp->high) {
+-		free_pcppages_bulk(zone, pcp->batch, pcp);
++		LIST_HEAD(dst);
++		int count;
++
++		isolate_pcp_pages(pcp->batch, pcp, &dst);
+ 		pcp->count -= pcp->batch;
++		count = pcp->batch;
++		local_unlock_irqrestore(pa_lock, flags);
++		free_pcppages_bulk(zone, count, &dst);
++		return;
+ 	}
+ 
+ out:
+@@ -3702,12 +3739,14 @@ static int __zone_pcp_update(void *data)
+ 	for_each_possible_cpu(cpu) {
+ 		struct per_cpu_pageset *pset;
+ 		struct per_cpu_pages *pcp;
++		LIST_HEAD(dst);
+ 
+ 		pset = per_cpu_ptr(zone->pageset, cpu);
+ 		pcp = &pset->pcp;
+ 
+ 		cpu_lock_irqsave(cpu, flags);
+-		free_pcppages_bulk(zone, pcp->count, pcp);
++		isolate_pcp_pages(pcp->count, pcp, &dst);
++		free_pcppages_bulk(zone, pcp->count, &dst);
+ 		setup_pageset(pset, batch);
+ 		cpu_unlock_irqrestore(cpu, flags);
+ 	}
+-- 
+1.7.10
+

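`isolate_pcp_pages()` keeps `free_pcppages_bulk()`'s round-robin over the per-migratetype lists but only moves pages to a private list; the freeing then happens outside the zone lock. The round-robin is easy to miss in the diff: each empty list skipped grows the batch taken from the next non-empty one. A simplified sketch of just that selection loop (counts stand in for real page lists, the "only non-empty list, free them all" shortcut from the original is omitted, and the caller is assumed not to request more items than exist):

```c
#define MIGRATE_PCPTYPES 3

/* Element counts stand in for the per-migratetype pcp lists. */
static int pcp_list[MIGRATE_PCPTYPES];

/* Round-robin removal as in isolate_pcp_pages(): batch_free grows by
 * one for every empty list skipped, so unevenly filled lists still
 * yield exactly to_free items in total. */
static int isolate_pcp_pages(int to_free)
{
	int migratetype = 0, batch_free = 0, moved = 0;

	while (to_free) {
		/* Find the next non-empty list, growing the batch. */
		do {
			batch_free++;
			if (++migratetype == MIGRATE_PCPTYPES)
				migratetype = 0;
		} while (pcp_list[migratetype] == 0);

		/* Take up to batch_free items (list_add() to dst in the
		 * real code; here the move is just counted). */
		do {
			pcp_list[migratetype]--;
			moved++;
		} while (--to_free && --batch_free && pcp_list[migratetype]);
	}
	return moved;
}

static int demo(void)
{
	pcp_list[0] = 5;
	pcp_list[1] = 0;
	pcp_list[2] = 2;
	return isolate_pcp_pages(4);
}
```

With `{5, 0, 2}` on the lists, isolating four pages drains list 2 completely and takes the remainder from list 0, which is the fairness property the comment about "pinned pages" depends on.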
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0112-mm-page-alloc-fix.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0112-mm-page-alloc-fix.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0112-mm-page-alloc-fix.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0112-mm-page-alloc-fix.patch.patch)
@@ -0,0 +1,28 @@
+From 23d1d976e548d01beb6c78d3486bd10c5c5e398d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 21 Jul 2011 16:47:49 +0200
+Subject: [PATCH 112/271] mm-page-alloc-fix.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/page_alloc.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 5124fb0..49675c7 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1960,8 +1960,8 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
+ 	if (*did_some_progress != COMPACT_SKIPPED) {
+ 
+ 		/* Page migration frees to the PCP lists but we want merging */
+-		drain_pages(get_cpu());
+-		put_cpu();
++		drain_pages(get_cpu_light());
++		put_cpu_light();
+ 
+ 		page = get_page_from_freelist(gfp_mask, nodemask,
+ 				order, zonelist, high_zoneidx,
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0113-mm-convert-swap-to-percpu-locked.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0113-mm-convert-swap-to-percpu-locked.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0113-mm-convert-swap-to-percpu-locked.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0113-mm-convert-swap-to-percpu-locked.patch)
@@ -0,0 +1,127 @@
+From 153ada8ae21b6e8ba68ea78f989a884660b6a648 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:51 -0500
+Subject: [PATCH 113/271] mm: convert swap to percpu locked
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/swap.c |   33 +++++++++++++++++++++------------
+ 1 file changed, 21 insertions(+), 12 deletions(-)
+
+diff --git a/mm/swap.c b/mm/swap.c
+index 55b266d..e3f7d6f 100644
+--- a/mm/swap.c
++++ b/mm/swap.c
+@@ -31,6 +31,7 @@
+ #include <linux/backing-dev.h>
+ #include <linux/memcontrol.h>
+ #include <linux/gfp.h>
++#include <linux/locallock.h>
+ 
+ #include "internal.h"
+ 
+@@ -41,6 +42,9 @@ static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
+ static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
+ static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
+ 
++static DEFINE_LOCAL_IRQ_LOCK(rotate_lock);
++static DEFINE_LOCAL_IRQ_LOCK(swap_lock);
++
+ /*
+  * This path almost never happens for VM activity - pages are normally
+  * freed via pagevecs.  But it gets used by networking.
+@@ -267,11 +271,11 @@ void rotate_reclaimable_page(struct page *page)
+ 		unsigned long flags;
+ 
+ 		page_cache_get(page);
+-		local_irq_save(flags);
++		local_lock_irqsave(rotate_lock, flags);
+ 		pvec = &__get_cpu_var(lru_rotate_pvecs);
+ 		if (!pagevec_add(pvec, page))
+ 			pagevec_move_tail(pvec);
+-		local_irq_restore(flags);
++		local_unlock_irqrestore(rotate_lock, flags);
+ 	}
+ }
+ 
+@@ -327,12 +331,13 @@ static void activate_page_drain(int cpu)
+ void activate_page(struct page *page)
+ {
+ 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
+-		struct pagevec *pvec = &get_cpu_var(activate_page_pvecs);
++		struct pagevec *pvec = &get_locked_var(swap_lock,
++						       activate_page_pvecs);
+ 
+ 		page_cache_get(page);
+ 		if (!pagevec_add(pvec, page))
+ 			pagevec_lru_move_fn(pvec, __activate_page, NULL);
+-		put_cpu_var(activate_page_pvecs);
++		put_locked_var(swap_lock, activate_page_pvecs);
+ 	}
+ }
+ 
+@@ -373,12 +378,12 @@ EXPORT_SYMBOL(mark_page_accessed);
+ 
+ void __lru_cache_add(struct page *page, enum lru_list lru)
+ {
+-	struct pagevec *pvec = &get_cpu_var(lru_add_pvecs)[lru];
++	struct pagevec *pvec = &get_locked_var(swap_lock, lru_add_pvecs)[lru];
+ 
+ 	page_cache_get(page);
+ 	if (!pagevec_add(pvec, page))
+ 		____pagevec_lru_add(pvec, lru);
+-	put_cpu_var(lru_add_pvecs);
++	put_locked_var(swap_lock, lru_add_pvecs);
+ }
+ EXPORT_SYMBOL(__lru_cache_add);
+ 
+@@ -512,9 +517,9 @@ static void drain_cpu_pagevecs(int cpu)
+ 		unsigned long flags;
+ 
+ 		/* No harm done if a racing interrupt already did this */
+-		local_irq_save(flags);
++		local_lock_irqsave(rotate_lock, flags);
+ 		pagevec_move_tail(pvec);
+-		local_irq_restore(flags);
++		local_unlock_irqrestore(rotate_lock, flags);
+ 	}
+ 
+ 	pvec = &per_cpu(lru_deactivate_pvecs, cpu);
+@@ -542,18 +547,19 @@ void deactivate_page(struct page *page)
+ 		return;
+ 
+ 	if (likely(get_page_unless_zero(page))) {
+-		struct pagevec *pvec = &get_cpu_var(lru_deactivate_pvecs);
++		struct pagevec *pvec = &get_locked_var(swap_lock,
++						       lru_deactivate_pvecs);
+ 
+ 		if (!pagevec_add(pvec, page))
+ 			pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
+-		put_cpu_var(lru_deactivate_pvecs);
++		put_locked_var(swap_lock, lru_deactivate_pvecs);
+ 	}
+ }
+ 
+ void lru_add_drain(void)
+ {
+-	drain_cpu_pagevecs(get_cpu());
+-	put_cpu();
++	drain_cpu_pagevecs(local_lock_cpu(swap_lock));
++	local_unlock_cpu(swap_lock);
+ }
+ 
+ static void lru_add_drain_per_cpu(struct work_struct *dummy)
+@@ -783,6 +789,9 @@ void __init swap_setup(void)
+ {
+ 	unsigned long megs = totalram_pages >> (20 - PAGE_SHIFT);
+ 
++	local_irq_lock_init(rotate_lock);
++	local_irq_lock_init(swap_lock);
++
+ #ifdef CONFIG_SWAP
+ 	bdi_init(swapper_space.backing_dev_info);
+ #endif
+-- 
+1.7.10
+

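`get_locked_var()`/`put_locked_var()` replace `get_cpu_var()`/`put_cpu_var()` here: instead of disabling preemption to pin the per-CPU pagevec, the pagevec is guarded by a sleepable local lock. A userspace sketch of that shape, with a pthread mutex standing in for `swap_lock` (pagevec semantics are simplified; in the real API `pagevec_add()` returns the space remaining, and there is one pagevec per CPU rather than the single one used here):

```c
#include <pthread.h>

#define PAGEVEC_SIZE 8

struct pagevec {
	int nr;
	const void *pages[PAGEVEC_SIZE];
};

/* Analogue of DEFINE_LOCAL_IRQ_LOCK(swap_lock). */
static pthread_mutex_t swap_lock = PTHREAD_MUTEX_INITIALIZER;

static struct pagevec lru_add_pvec;	/* one CPU's lru_add pagevec */
static int flushed;			/* pages pushed out to the LRU */

/* get_locked_var(swap_lock, var): lock, then hand back the variable. */
static struct pagevec *get_locked_var(void)
{
	pthread_mutex_lock(&swap_lock);
	return &lru_add_pvec;
}

static void put_locked_var(void)
{
	pthread_mutex_unlock(&swap_lock);
}

/* Shape of __lru_cache_add() from the patch: batch pages in the
 * pagevec and drain it to the LRU when it fills. */
static void lru_cache_add(const void *page)
{
	struct pagevec *pvec = get_locked_var();

	if (pvec->nr == PAGEVEC_SIZE) {		/* full: drain to LRU */
		flushed += pvec->nr;
		pvec->nr = 0;
	}
	pvec->pages[pvec->nr++] = page;
	put_locked_var();
}

static int demo(void)
{
	int i;

	for (i = 0; i < 9; i++)
		lru_cache_add((const void *)0);
	return flushed;
}
```

Because the lock can be taken from any context that the RT kernel allows to sleep, `rotate_reclaimable_page()`'s interrupt-context path gets its own `rotate_lock` rather than sharing `swap_lock`.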
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0114-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0114-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0114-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0114-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch)
@@ -0,0 +1,56 @@
+From 8dfb251f4ed2962e0d676366c93478d0029bf326 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 22 Jun 2011 20:47:08 +0200
+Subject: [PATCH 114/271] mm-vmstat-fix-the-irq-lock-asymetry.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/vmscan.c |   18 +++++++++---------
+ 1 file changed, 9 insertions(+), 9 deletions(-)
+
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index fbe2d2c..aa50ccf4 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1344,8 +1344,8 @@ static int too_many_isolated(struct zone *zone, int file,
+  */
+ static noinline_for_stack void
+ putback_lru_pages(struct zone *zone, struct scan_control *sc,
+-				unsigned long nr_anon, unsigned long nr_file,
+-				struct list_head *page_list)
++		  unsigned long nr_anon, unsigned long nr_file,
++		  struct list_head *page_list, unsigned long nr_reclaimed)
+ {
+ 	struct page *page;
+ 	struct pagevec pvec;
+@@ -1356,7 +1356,12 @@ putback_lru_pages(struct zone *zone, struct scan_control *sc,
+ 	/*
+ 	 * Put back any unfreeable pages.
+ 	 */
+-	spin_lock(&zone->lru_lock);
++	spin_lock_irq(&zone->lru_lock);
++
++	if (current_is_kswapd())
++		__count_vm_events(KSWAPD_STEAL, nr_reclaimed);
++	__count_zone_vm_events(PGSTEAL, zone, nr_reclaimed);
++
+ 	while (!list_empty(page_list)) {
+ 		int lru;
+ 		page = lru_to_page(page_list);
+@@ -1539,12 +1544,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct zone *zone,
+ 					priority, &nr_dirty, &nr_writeback);
+ 	}
+ 
+-	local_irq_disable();
+-	if (current_is_kswapd())
+-		__count_vm_events(KSWAPD_STEAL, nr_reclaimed);
+-	__count_zone_vm_events(PGSTEAL, zone, nr_reclaimed);
+-
+-	putback_lru_pages(zone, sc, nr_anon, nr_file, &page_list);
++	putback_lru_pages(zone, sc, nr_anon, nr_file, &page_list, nr_reclaimed);
+ 
+ 	/*
+ 	 * If reclaim is isolating dirty pages under writeback, it implies
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0115-mm-make-vmstat-rt-aware.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0115-mm-make-vmstat-rt-aware.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0115-mm-make-vmstat-rt-aware.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0115-mm-make-vmstat-rt-aware.patch)
@@ -0,0 +1,91 @@
+From 612e799fbf66488f07d5c7aff16024f7935803f4 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:13 -0500
+Subject: [PATCH 115/271] mm: make vmstat -rt aware
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/vmstat.h |    4 ++++
+ mm/vmstat.c            |    6 ++++++
+ 2 files changed, 10 insertions(+)
+
+diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
+index 65efb92..1b3f2ef 100644
+--- a/include/linux/vmstat.h
++++ b/include/linux/vmstat.h
+@@ -29,7 +29,9 @@ DECLARE_PER_CPU(struct vm_event_state, vm_event_states);
+ 
+ static inline void __count_vm_event(enum vm_event_item item)
+ {
++	preempt_disable_rt();
+ 	__this_cpu_inc(vm_event_states.event[item]);
++	preempt_enable_rt();
+ }
+ 
+ static inline void count_vm_event(enum vm_event_item item)
+@@ -39,7 +41,9 @@ static inline void count_vm_event(enum vm_event_item item)
+ 
+ static inline void __count_vm_events(enum vm_event_item item, long delta)
+ {
++	preempt_disable_rt();
+ 	__this_cpu_add(vm_event_states.event[item], delta);
++	preempt_enable_rt();
+ }
+ 
+ static inline void count_vm_events(enum vm_event_item item, long delta)
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 8fd603b..726f0b6 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -216,6 +216,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+ 	long x;
+ 	long t;
+ 
++	preempt_disable_rt();
+ 	x = delta + __this_cpu_read(*p);
+ 
+ 	t = __this_cpu_read(pcp->stat_threshold);
+@@ -225,6 +226,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+ 		x = 0;
+ 	}
+ 	__this_cpu_write(*p, x);
++	preempt_enable_rt();
+ }
+ EXPORT_SYMBOL(__mod_zone_page_state);
+ 
+@@ -257,6 +259,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
+ 	s8 __percpu *p = pcp->vm_stat_diff + item;
+ 	s8 v, t;
+ 
++	preempt_disable_rt();
+ 	v = __this_cpu_inc_return(*p);
+ 	t = __this_cpu_read(pcp->stat_threshold);
+ 	if (unlikely(v > t)) {
+@@ -265,6 +268,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
+ 		zone_page_state_add(v + overstep, zone, item);
+ 		__this_cpu_write(*p, -overstep);
+ 	}
++	preempt_enable_rt();
+ }
+ 
+ void __inc_zone_page_state(struct page *page, enum zone_stat_item item)
+@@ -279,6 +283,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
+ 	s8 __percpu *p = pcp->vm_stat_diff + item;
+ 	s8 v, t;
+ 
++	preempt_disable_rt();
+ 	v = __this_cpu_dec_return(*p);
+ 	t = __this_cpu_read(pcp->stat_threshold);
+ 	if (unlikely(v < - t)) {
+@@ -287,6 +292,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
+ 		zone_page_state_add(v - overstep, zone, item);
+ 		__this_cpu_write(*p, overstep);
+ 	}
++	preempt_enable_rt();
+ }
+ 
+ void __dec_zone_page_state(struct page *page, enum zone_stat_item item)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0116-mm-shrink-the-page-frame-to-rt-size.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0116-mm-shrink-the-page-frame-to-rt-size.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0116-mm-shrink-the-page-frame-to-rt-size.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0116-mm-shrink-the-page-frame-to-rt-size.patch)
@@ -0,0 +1,150 @@
+From 8ddb8785a59f0c88b8fe3c7c0e9230d631036c44 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Fri, 3 Jul 2009 08:44:54 -0500
+Subject: [PATCH 116/271] mm: shrink the page frame to !-rt size
+
+The below is a boot-tested hack to shrink the page frame size back to
+normal.
+
+Should be a net win since there should be many fewer PTE-pages than
+page-frames.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/mm.h       |   46 +++++++++++++++++++++++++++++++++++++++-------
+ include/linux/mm_types.h |    6 +++++-
+ mm/memory.c              |   32 ++++++++++++++++++++++++++++++++
+ 3 files changed, 76 insertions(+), 8 deletions(-)
+
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 4baadd1..c9e64e5 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -1195,27 +1195,59 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
+  * overflow into the next struct page (as it might with DEBUG_SPINLOCK).
+  * When freeing, reset page->mapping so free_pages_check won't complain.
+  */
++#ifndef CONFIG_PREEMPT_RT_FULL
++
+ #define __pte_lockptr(page)	&((page)->ptl)
+-#define pte_lock_init(_page)	do {					\
+-	spin_lock_init(__pte_lockptr(_page));				\
+-} while (0)
++
++static inline struct page *pte_lock_init(struct page *page)
++{
++	spin_lock_init(__pte_lockptr(page));
++	return page;
++}
++
+ #define pte_lock_deinit(page)	((page)->mapping = NULL)
++
++#else /* !PREEMPT_RT_FULL */
++
++/*
++ * On PREEMPT_RT_FULL the spinlock_t's are too large to embed in the
++ * page frame, hence it only has a pointer and we need to dynamically
++ * allocate the lock when we allocate PTE-pages.
++ *
++ * This is an overall win, since only a small fraction of the pages
++ * will be PTE pages under normal circumstances.
++ */
++
++#define __pte_lockptr(page)	((page)->ptl)
++
++extern struct page *pte_lock_init(struct page *page);
++extern void pte_lock_deinit(struct page *page);
++
++#endif /* PREEMPT_RT_FULL */
++
+ #define pte_lockptr(mm, pmd)	({(void)(mm); __pte_lockptr(pmd_page(*(pmd)));})
+ #else	/* !USE_SPLIT_PTLOCKS */
+ /*
+  * We use mm->page_table_lock to guard all pagetable pages of the mm.
+  */
+-#define pte_lock_init(page)	do {} while (0)
++static inline struct page *pte_lock_init(struct page *page) { return page; }
+ #define pte_lock_deinit(page)	do {} while (0)
+ #define pte_lockptr(mm, pmd)	({(void)(pmd); &(mm)->page_table_lock;})
+ #endif /* USE_SPLIT_PTLOCKS */
+ 
+-static inline void pgtable_page_ctor(struct page *page)
++static inline struct page *__pgtable_page_ctor(struct page *page)
+ {
+-	pte_lock_init(page);
+-	inc_zone_page_state(page, NR_PAGETABLE);
++	page = pte_lock_init(page);
++	if (page)
++		inc_zone_page_state(page, NR_PAGETABLE);
++	return page;
+ }
+ 
++#define pgtable_page_ctor(page)				\
++do {							\
++	page = __pgtable_page_ctor(page);		\
++} while (0)
++
+ static inline void pgtable_page_dtor(struct page *page)
+ {
+ 	pte_lock_deinit(page);
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 5b42f1b..1ec126f 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -118,7 +118,11 @@ struct page {
+ 						 * system if PG_buddy is set.
+ 						 */
+ #if USE_SPLIT_PTLOCKS
+-		spinlock_t ptl;
++# ifndef CONFIG_PREEMPT_RT_FULL
++	    spinlock_t ptl;
++# else
++	    spinlock_t *ptl;
++# endif
+ #endif
+ 		struct kmem_cache *slab;	/* SLUB: Pointer to slab */
+ 		struct page *first_page;	/* Compound tail pages */
+diff --git a/mm/memory.c b/mm/memory.c
+index 7fa62d9..af0df1a 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -4016,3 +4016,35 @@ void copy_user_huge_page(struct page *dst, struct page *src,
+ 	}
+ }
+ #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
++
++#if defined(CONFIG_PREEMPT_RT_FULL) && (USE_SPLIT_PTLOCKS > 0)
++/*
++ * Heinous hack, relies on the caller doing something like:
++ *
++ *   pte = alloc_pages(PGALLOC_GFP, 0);
++ *   if (pte)
++ *     pgtable_page_ctor(pte);
++ *   return pte;
++ *
++ * This ensures we release the page and return NULL when the
++ * lock allocation fails.
++ */
++struct page *pte_lock_init(struct page *page)
++{
++	page->ptl = kmalloc(sizeof(spinlock_t), GFP_KERNEL);
++	if (page->ptl) {
++		spin_lock_init(__pte_lockptr(page));
++	} else {
++		__free_page(page);
++		page = NULL;
++	}
++	return page;
++}
++
++void pte_lock_deinit(struct page *page)
++{
++	kfree(page->ptl);
++	page->mapping = NULL;
++}
++
++#endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0117-ARM-Initialize-ptl-lock-for-vector-page.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0117-ARM-Initialize-ptl-lock-for-vector-page.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0117-ARM-Initialize-ptl-lock-for-vector-page.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0117-ARM-Initialize-ptl-lock-for-vector-page.patch)
@@ -0,0 +1,75 @@
+From c49409135686639923e2afc7ea66f3d215e41b67 Mon Sep 17 00:00:00 2001
+From: Frank Rowand <frank.rowand at am.sony.com>
+Date: Sat, 1 Oct 2011 18:58:13 -0700
+Subject: [PATCH 117/271] ARM: Initialize ptl->lock for vector page
+
+Without this patch, ARM can not use SPLIT_PTLOCK_CPUS if
+PREEMPT_RT_FULL=y because vectors_user_mapping() creates a
+VM_ALWAYSDUMP mapping of the vector page (address 0xffff0000), but no
+ptl->lock has been allocated for the page.  An attempt to coredump
+that page will result in a kernel NULL pointer dereference when
+follow_page() attempts to lock the page.
+
+The call tree to the NULL pointer dereference is:
+
+   do_notify_resume()
+      get_signal_to_deliver()
+         do_coredump()
+            elf_core_dump()
+               get_dump_page()
+                  __get_user_pages()
+                     follow_page()
+                        pte_offset_map_lock() <----- a #define
+                           ...
+                              rt_spin_lock()
+
+The underlying problem is exposed by mm-shrink-the-page-frame-to-rt-size.patch.
+
+Signed-off-by: Frank Rowand <frank.rowand at am.sony.com>
+Cc: Frank <Frank_Rowand at sonyusa.com>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Link: http://lkml.kernel.org/r/4E87C535.2030907@am.sony.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/arm/kernel/process.c |   25 +++++++++++++++++++++++++
+ 1 file changed, 25 insertions(+)
+
+diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
+index 54833ff..75227b3 100644
+--- a/arch/arm/kernel/process.c
++++ b/arch/arm/kernel/process.c
+@@ -489,6 +489,31 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
+ }
+ 
+ #ifdef CONFIG_MMU
++
++/*
++ * CONFIG_SPLIT_PTLOCK_CPUS results in a page->ptl lock.  If the lock is not
++ * initialized by pgtable_page_ctor() then a coredump of the vector page will
++ * fail.
++ */
++static int __init vectors_user_mapping_init_page(void)
++{
++	struct page *page;
++	unsigned long addr = 0xffff0000;
++	pgd_t *pgd;
++	pud_t *pud;
++	pmd_t *pmd;
++
++	pgd = pgd_offset_k(addr);
++	pud = pud_offset(pgd, addr);
++	pmd = pmd_offset(pud, addr);
++	page = pmd_page(*(pmd));
++
++	pgtable_page_ctor(page);
++
++	return 0;
++}
++late_initcall(vectors_user_mapping_init_page);
++
+ /*
+  * The vectors page is always readable from user space for the
+  * atomic helpers and the signal restart code.  Let's declare a mapping
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0118-mm-Allow-only-slab-on-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0118-mm-Allow-only-slab-on-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0118-mm-Allow-only-slab-on-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0118-mm-Allow-only-slab-on-RT.patch)
@@ -0,0 +1,34 @@
+From 87cf957abdb4fefbd52c540913269a1dbdaa2c1b Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:44:03 -0500
+Subject: [PATCH 118/271] mm: Allow only slab on RT
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ init/Kconfig |    2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/init/Kconfig b/init/Kconfig
+index 43298f9..dbc82d0 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -1239,6 +1239,7 @@ config SLAB
+ 
+ config SLUB
+ 	bool "SLUB (Unqueued Allocator)"
++	depends on !PREEMPT_RT_FULL
+ 	help
+ 	   SLUB is a slab allocator that minimizes cache line usage
+ 	   instead of managing queues of cached objects (SLAB approach).
+@@ -1250,6 +1251,7 @@ config SLUB
+ config SLOB
+ 	depends on EXPERT
+ 	bool "SLOB (Simple Allocator)"
++	depends on !PREEMPT_RT_FULL
+ 	help
+ 	   SLOB replaces the stock allocator with a drastically simpler
+ 	   allocator. SLOB is generally more space efficient but
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0119-radix-tree-rt-aware.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0119-radix-tree-rt-aware.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0119-radix-tree-rt-aware.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0119-radix-tree-rt-aware.patch.patch)
@@ -0,0 +1,76 @@
+From 55748804d6b228d28686d7b3a08e41de752e76b5 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 21:33:18 +0200
+Subject: [PATCH 119/271] radix-tree-rt-aware.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/radix-tree.h |    8 +++++++-
+ lib/radix-tree.c           |    5 ++++-
+ 2 files changed, 11 insertions(+), 2 deletions(-)
+
+diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
+index 9d4539c..6b4bdf2 100644
+--- a/include/linux/radix-tree.h
++++ b/include/linux/radix-tree.h
+@@ -231,7 +231,13 @@ unsigned long radix_tree_next_hole(struct radix_tree_root *root,
+ 				unsigned long index, unsigned long max_scan);
+ unsigned long radix_tree_prev_hole(struct radix_tree_root *root,
+ 				unsigned long index, unsigned long max_scan);
++
++#ifndef CONFIG_PREEMPT_RT_FULL
+ int radix_tree_preload(gfp_t gfp_mask);
++#else
++static inline int radix_tree_preload(gfp_t gm) { return 0; }
++#endif
++
+ void radix_tree_init(void);
+ void *radix_tree_tag_set(struct radix_tree_root *root,
+ 			unsigned long index, unsigned int tag);
+@@ -256,7 +262,7 @@ unsigned long radix_tree_locate_item(struct radix_tree_root *root, void *item);
+ 
+ static inline void radix_tree_preload_end(void)
+ {
+-	preempt_enable();
++	preempt_enable_nort();
+ }
+ 
+ #endif /* _LINUX_RADIX_TREE_H */
+diff --git a/lib/radix-tree.c b/lib/radix-tree.c
+index d9df745..770453a 100644
+--- a/lib/radix-tree.c
++++ b/lib/radix-tree.c
+@@ -166,12 +166,13 @@ radix_tree_node_alloc(struct radix_tree_root *root)
+ 		 * succeed in getting a node here (and never reach
+ 		 * kmem_cache_alloc)
+ 		 */
+-		rtp = &__get_cpu_var(radix_tree_preloads);
++		rtp = &get_cpu_var(radix_tree_preloads);
+ 		if (rtp->nr) {
+ 			ret = rtp->nodes[rtp->nr - 1];
+ 			rtp->nodes[rtp->nr - 1] = NULL;
+ 			rtp->nr--;
+ 		}
++		put_cpu_var(radix_tree_preloads);
+ 	}
+ 	if (ret == NULL)
+ 		ret = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
+@@ -206,6 +207,7 @@ radix_tree_node_free(struct radix_tree_node *node)
+ 	call_rcu(&node->rcu_head, radix_tree_node_rcu_free);
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+  * Load up this CPU's radix_tree_node buffer with sufficient objects to
+  * ensure that the addition of a single element in the tree cannot fail.  On
+@@ -240,6 +242,7 @@ out:
+ 	return ret;
+ }
+ EXPORT_SYMBOL(radix_tree_preload);
++#endif
+ 
+ /*
+  *	Return the maximum key which can be store into a
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0120-panic-disable-random-on-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0120-panic-disable-random-on-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0120-panic-disable-random-on-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0120-panic-disable-random-on-rt.patch)
@@ -0,0 +1,28 @@
+From a5b4fcfd4fe682e2d4928496ed787d8ab21db0f1 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 10 Apr 2012 14:34:04 -0400
+Subject: [PATCH 120/271] panic-disable-random-on-rt
+
+---
+ kernel/panic.c |    2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/kernel/panic.c b/kernel/panic.c
+index 3458469..50f4de5 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -343,9 +343,11 @@ static u64 oops_id;
+ 
+ static int init_oops_id(void)
+ {
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 	if (!oops_id)
+ 		get_random_bytes(&oops_id, sizeof(oops_id));
+ 	else
++#endif
+ 		oops_id++;
+ 
+ 	return 0;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0121-ipc-Make-the-ipc-code-rt-aware.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0121-ipc-Make-the-ipc-code-rt-aware.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0121-ipc-Make-the-ipc-code-rt-aware.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0121-ipc-Make-the-ipc-code-rt-aware.patch)
@@ -0,0 +1,92 @@
+From e0f829120e943c5f2ff5f93ed53fa5594d7816b4 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:12 -0500
+Subject: [PATCH 121/271] ipc: Make the ipc code -rt aware
+
+RT serializes the code with the (rt)spinlock but keeps preemption
+enabled. Some parts of the code need to be atomic nevertheless.
+
+Protect it with preempt_disable/enable_rt pairs.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ ipc/mqueue.c |    5 +++++
+ ipc/msg.c    |   16 ++++++++++++++++
+ 2 files changed, 21 insertions(+)
+
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index 5b4293d..2d06b54 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -820,12 +820,17 @@ static inline void pipelined_send(struct mqueue_inode_info *info,
+ 				  struct msg_msg *message,
+ 				  struct ext_wait_queue *receiver)
+ {
++	/*
++	 * Keep them in one critical section for PREEMPT_RT:
++	 */
++	preempt_disable_rt();
+ 	receiver->msg = message;
+ 	list_del(&receiver->list);
+ 	receiver->state = STATE_PENDING;
+ 	wake_up_process(receiver->task);
+ 	smp_wmb();
+ 	receiver->state = STATE_READY;
++	preempt_enable_rt();
+ }
+ 
+ /* pipelined_receive() - if there is task waiting in sys_mq_timedsend()
+diff --git a/ipc/msg.c b/ipc/msg.c
+index 7385de2..06642ac 100644
+--- a/ipc/msg.c
++++ b/ipc/msg.c
+@@ -259,12 +259,20 @@ static void expunge_all(struct msg_queue *msq, int res)
+ 	while (tmp != &msq->q_receivers) {
+ 		struct msg_receiver *msr;
+ 
++		/*
++		 * Make sure that the wakeup doesn't preempt
++		 * this CPU prematurely. (on PREEMPT_RT)
++		 */
++		preempt_disable_rt();
++
+ 		msr = list_entry(tmp, struct msg_receiver, r_list);
+ 		tmp = tmp->next;
+ 		msr->r_msg = NULL;
+ 		wake_up_process(msr->r_tsk);
+ 		smp_mb();
+ 		msr->r_msg = ERR_PTR(res);
++
++		preempt_enable_rt();
+ 	}
+ }
+ 
+@@ -611,6 +619,12 @@ static inline int pipelined_send(struct msg_queue *msq, struct msg_msg *msg)
+ 		    !security_msg_queue_msgrcv(msq, msg, msr->r_tsk,
+ 					       msr->r_msgtype, msr->r_mode)) {
+ 
++			/*
++			 * Make sure that the wakeup doesn't preempt
++			 * this CPU prematurely. (on PREEMPT_RT)
++			 */
++			preempt_disable_rt();
++
+ 			list_del(&msr->r_list);
+ 			if (msr->r_maxsize < msg->m_ts) {
+ 				msr->r_msg = NULL;
+@@ -624,9 +638,11 @@ static inline int pipelined_send(struct msg_queue *msq, struct msg_msg *msg)
+ 				wake_up_process(msr->r_tsk);
+ 				smp_mb();
+ 				msr->r_msg = msg;
++				preempt_enable_rt();
+ 
+ 				return 1;
+ 			}
++			preempt_enable_rt();
+ 		}
+ 	}
+ 	return 0;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0122-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0122-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0122-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0122-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch)
@@ -0,0 +1,66 @@
+From 18127106130f5189bf9b3a2f13837a57becc417a Mon Sep 17 00:00:00 2001
+From: KOBAYASHI Yoshitake <yoshitake.kobayashi at toshiba.co.jp>
+Date: Sat, 23 Jul 2011 11:57:36 +0900
+Subject: [PATCH 122/271] ipc/mqueue: Add a critical section to avoid a
+ deadlock
+
+(Repost for v3.0-rt1 and changed the destination addresses)
+I have tested the following patch on v3.0-rt1 with PREEMPT_RT_FULL.
+In POSIX message queue, if a sender process uses SCHED_FIFO and
+has a higher priority than a receiver process, the sender will
+be stuck at ipc/mqueue.c:452
+
+  452                 while (ewp->state == STATE_PENDING)
+  453                         cpu_relax();
+
+Description of the problem
+ (receiver process)
+   1. receiver changes sender's state to STATE_PENDING (mqueue.c:846)
+   2. wake up sender process and "switch to sender" (mqueue.c:847)
+      Note: This context switch only happens in PREEMPT_RT_FULL kernel.
+ (sender process)
+   3. sender checks its own state in the above loop (mqueue.c:452-453)
+   *. receiver will never wake up and cannot change sender's state to
+      STATE_READY because sender has higher priority
+
+Signed-off-by: Yoshitake Kobayashi <yoshitake.kobayashi at toshiba.co.jp>
+Cc: viro at zeniv.linux.org.uk
+Cc: dchinner at redhat.com
+Cc: npiggin at kernel.dk
+Cc: hch at lst.de
+Cc: arnd at arndb.de
+Link: http://lkml.kernel.org/r/4E2A38A0.1090601@toshiba.co.jp
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ ipc/mqueue.c |    8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index 2d06b54..eec1d99 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -844,15 +844,19 @@ static inline void pipelined_receive(struct mqueue_inode_info *info)
+ 		wake_up_interruptible(&info->wait_q);
+ 		return;
+ 	}
++	/*
++	 * Keep them in one critical section for PREEMPT_RT:
++	 */
++	preempt_disable_rt();
+ 	msg_insert(sender->msg, info);
+ 	list_del(&sender->list);
+ 	sender->state = STATE_PENDING;
+ 	wake_up_process(sender->task);
+ 	smp_wmb();
+ 	sender->state = STATE_READY;
++	preempt_enable_rt();
+ }
+-
+-SYSCALL_DEFINE5(mq_timedsend, mqd_t, mqdes, const char __user *, u_msg_ptr,
++ SYSCALL_DEFINE5(mq_timedsend, mqd_t, mqdes, const char __user *, u_msg_ptr,
+ 		size_t, msg_len, unsigned int, msg_prio,
+ 		const struct timespec __user *, u_abs_timeout)
+ {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0123-relay-fix-timer-madness.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0123-relay-fix-timer-madness.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0123-relay-fix-timer-madness.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0123-relay-fix-timer-madness.patch)
@@ -0,0 +1,57 @@
+From 8d28ea78f1347985d1d48d26e964197ad99e58a6 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:44:07 -0500
+Subject: [PATCH 123/271] relay: fix timer madness
+
+remove timer calls (!!!) from deep within the tracing infrastructure.
+This was totally bogus code that can cause lockups and worse.  Poll
+the buffer every 2 jiffies for now.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/relay.c |   14 +++++---------
+ 1 file changed, 5 insertions(+), 9 deletions(-)
+
+diff --git a/kernel/relay.c b/kernel/relay.c
+index b6f803a..eae992d 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -340,6 +340,10 @@ static void wakeup_readers(unsigned long data)
+ {
+ 	struct rchan_buf *buf = (struct rchan_buf *)data;
+ 	wake_up_interruptible(&buf->read_wait);
++	/*
++	 * Stupid polling for now:
++	 */
++	mod_timer(&buf->timer, jiffies + 1);
+ }
+ 
+ /**
+@@ -357,6 +361,7 @@ static void __relay_reset(struct rchan_buf *buf, unsigned int init)
+ 		init_waitqueue_head(&buf->read_wait);
+ 		kref_init(&buf->kref);
+ 		setup_timer(&buf->timer, wakeup_readers, (unsigned long)buf);
++		mod_timer(&buf->timer, jiffies + 1);
+ 	} else
+ 		del_timer_sync(&buf->timer);
+ 
+@@ -739,15 +744,6 @@ size_t relay_switch_subbuf(struct rchan_buf *buf, size_t length)
+ 		else
+ 			buf->early_bytes += buf->chan->subbuf_size -
+ 					    buf->padding[old_subbuf];
+-		smp_mb();
+-		if (waitqueue_active(&buf->read_wait))
+-			/*
+-			 * Calling wake_up_interruptible() from here
+-			 * will deadlock if we happen to be logging
+-			 * from the scheduler (trying to re-grab
+-			 * rq->lock), so defer it.
+-			 */
+-			mod_timer(&buf->timer, jiffies + 1);
+ 	}
+ 
+ 	old = buf->data;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0124-net-ipv4-route-use-locks-on-up-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0124-net-ipv4-route-use-locks-on-up-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0124-net-ipv4-route-use-locks-on-up-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0124-net-ipv4-route-use-locks-on-up-rt.patch.patch)
@@ -0,0 +1,26 @@
+From f90f1c626b0016d54656b5f160b61e357f49fca1 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 15 Jul 2011 16:24:45 +0200
+Subject: [PATCH 124/271] net-ipv4-route-use-locks-on-up-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ net/ipv4/route.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 94cdbc5..5cb9301 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -251,7 +251,7 @@ struct rt_hash_bucket {
+ };
+ 
+ #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) || \
+-	defined(CONFIG_PROVE_LOCKING)
++	defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_PREEMPT_RT_FULL)
+ /*
+  * Instead of using one spinlock for each rt_hash_bucket, we use a table of spinlocks
+  * The size of this table is a power of two and depends on the number of CPUS.
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0125-workqueue-avoid-the-lock-in-cpu-dying.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0125-workqueue-avoid-the-lock-in-cpu-dying.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0125-workqueue-avoid-the-lock-in-cpu-dying.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0125-workqueue-avoid-the-lock-in-cpu-dying.patch.patch)
@@ -0,0 +1,68 @@
+From 6a01b2cff5d8ba0889a7f96e4d20bc081e9e2bb4 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 24 Jun 2011 20:39:24 +0200
+Subject: [PATCH 125/271] workqueue-avoid-the-lock-in-cpu-dying.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/workqueue.c |   30 ++++++++++++++++++++----------
+ 1 file changed, 20 insertions(+), 10 deletions(-)
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 4b4421d..8bdc220 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3509,6 +3509,25 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
+ 				kthread_stop(new_trustee);
+ 			return NOTIFY_BAD;
+ 		}
++		break;
++	case CPU_POST_DEAD:
++	case CPU_UP_CANCELED:
++	case CPU_DOWN_FAILED:
++	case CPU_ONLINE:
++		break;
++	case CPU_DYING:
++		/*
++		 * We access this lockless. We are on the dying CPU
++		 * and called from stomp machine.
++		 *
++		 * Before this, the trustee and all workers except for
++		 * the ones which are still executing works from
++		 * before the last CPU down must be on the cpu.  After
++		 * this, they'll all be diasporas.
++		 */
++		gcwq->flags |= GCWQ_DISASSOCIATED;
++	default:
++		goto out;
+ 	}
+ 
+ 	/* some are called w/ irq disabled, don't disturb irq status */
+@@ -3528,16 +3547,6 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
+ 		gcwq->first_idle = new_worker;
+ 		break;
+ 
+-	case CPU_DYING:
+-		/*
+-		 * Before this, the trustee and all workers except for
+-		 * the ones which are still executing works from
+-		 * before the last CPU down must be on the cpu.  After
+-		 * this, they'll all be diasporas.
+-		 */
+-		gcwq->flags |= GCWQ_DISASSOCIATED;
+-		break;
+-
+ 	case CPU_POST_DEAD:
+ 		gcwq->trustee_state = TRUSTEE_BUTCHER;
+ 		/* fall through */
+@@ -3571,6 +3580,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
+ 
+ 	spin_unlock_irqrestore(&gcwq->lock, flags);
+ 
++out:
+ 	return notifier_from_errno(0);
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0126-timers-prepare-for-full-preemption.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0126-timers-prepare-for-full-preemption.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0126-timers-prepare-for-full-preemption.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0126-timers-prepare-for-full-preemption.patch)
@@ -0,0 +1,129 @@
+From ed2896fd3eb4dbe63f89b15e2a66a4feb68f6d57 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:34 -0500
+Subject: [PATCH 126/271] timers: prepare for full preemption
+
+When softirqs can be preempted we need to make sure that cancelling
+the timer from the active thread can not deadlock vs. a running timer
+callback. Add a waitqueue to resolve that.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/timer.h |    2 +-
+ kernel/timer.c        |   35 ++++++++++++++++++++++++++++++++---
+ 2 files changed, 33 insertions(+), 4 deletions(-)
+
+diff --git a/include/linux/timer.h b/include/linux/timer.h
+index 6abd913..b703477 100644
+--- a/include/linux/timer.h
++++ b/include/linux/timer.h
+@@ -276,7 +276,7 @@ extern void add_timer(struct timer_list *timer);
+ 
+ extern int try_to_del_timer_sync(struct timer_list *timer);
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
+   extern int del_timer_sync(struct timer_list *timer);
+ #else
+ # define del_timer_sync(t)		del_timer(t)
+diff --git a/kernel/timer.c b/kernel/timer.c
+index 9c3c62b..e4b2373 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -75,6 +75,7 @@ struct tvec_root {
+ struct tvec_base {
+ 	spinlock_t lock;
+ 	struct timer_list *running_timer;
++	wait_queue_head_t wait_for_running_timer;
+ 	unsigned long timer_jiffies;
+ 	unsigned long next_timer;
+ 	struct tvec_root tv1;
+@@ -679,12 +680,15 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
+ 
+ 	debug_activate(timer, expires);
+ 
++	preempt_disable_rt();
+ 	cpu = smp_processor_id();
+ 
+ #if defined(CONFIG_NO_HZ) && defined(CONFIG_SMP)
+ 	if (!pinned && get_sysctl_timer_migration() && idle_cpu(cpu))
+ 		cpu = get_nohz_timer_target();
+ #endif
++	preempt_enable_rt();
++
+ 	new_base = per_cpu(tvec_bases, cpu);
+ 
+ 	if (base != new_base) {
+@@ -885,6 +889,29 @@ void add_timer_on(struct timer_list *timer, int cpu)
+ }
+ EXPORT_SYMBOL_GPL(add_timer_on);
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++/*
++ * Wait for a running timer
++ */
++static void wait_for_running_timer(struct timer_list *timer)
++{
++	struct tvec_base *base = timer->base;
++
++	if (base->running_timer == timer)
++		wait_event(base->wait_for_running_timer,
++			   base->running_timer != timer);
++}
++
++# define wakeup_timer_waiters(b)	wake_up(&(b)->wait_for_running_timer)
++#else
++static inline void wait_for_running_timer(struct timer_list *timer)
++{
++	cpu_relax();
++}
++
++# define wakeup_timer_waiters(b)	do { } while (0)
++#endif
++
+ /**
+  * del_timer - deactive a timer.
+  * @timer: the timer to be deactivated
+@@ -953,7 +980,7 @@ out:
+ }
+ EXPORT_SYMBOL(try_to_del_timer_sync);
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
+ /**
+  * del_timer_sync - deactivate a timer and wait for the handler to finish.
+  * @timer: the timer to be deactivated
+@@ -1013,7 +1040,7 @@ int del_timer_sync(struct timer_list *timer)
+ 		int ret = try_to_del_timer_sync(timer);
+ 		if (ret >= 0)
+ 			return ret;
+-		cpu_relax();
++		wait_for_running_timer(timer);
+ 	}
+ }
+ EXPORT_SYMBOL(del_timer_sync);
+@@ -1124,10 +1151,11 @@ static inline void __run_timers(struct tvec_base *base)
+ 
+ 			spin_unlock_irq(&base->lock);
+ 			call_timer_fn(timer, fn, data);
++			base->running_timer = NULL;
+ 			spin_lock_irq(&base->lock);
+ 		}
+ 	}
+-	base->running_timer = NULL;
++	wake_up(&base->wait_for_running_timer);
+ 	spin_unlock_irq(&base->lock);
+ }
+ 
+@@ -1634,6 +1662,7 @@ static int __cpuinit init_timers_cpu(int cpu)
+ 	}
+ 
+ 	spin_lock_init(&base->lock);
++	init_waitqueue_head(&base->wait_for_running_timer);
+ 
+ 	for (j = 0; j < TVN_SIZE; j++) {
+ 		INIT_LIST_HEAD(base->tv5.vec + j);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0127-timers-preempt-rt-support.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0127-timers-preempt-rt-support.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0127-timers-preempt-rt-support.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0127-timers-preempt-rt-support.patch)
@@ -0,0 +1,50 @@
+From 835ba1d80b1859ecbe904d11b4403d069669bf38 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:20 -0500
+Subject: [PATCH 127/271] timers: preempt-rt support
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/timer.c |   18 +++++++++++++++++-
+ 1 file changed, 17 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/timer.c b/kernel/timer.c
+index e4b2373..2aa1215 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -1294,6 +1294,22 @@ unsigned long get_next_timer_interrupt(unsigned long now)
+ 	 */
+ 	if (cpu_is_offline(smp_processor_id()))
+ 		return now + NEXT_TIMER_MAX_DELTA;
++
++#ifdef CONFIG_PREEMPT_RT_FULL
++	/*
++	 * On PREEMPT_RT we cannot sleep here. If the trylock does not
++	 * succeed then we return the worst-case 'expires in 1 tick'
++	 * value:
++	 */
++	if (spin_trylock(&base->lock)) {
++		if (time_before_eq(base->next_timer, base->timer_jiffies))
++			base->next_timer = __next_timer_interrupt(base);
++		expires = base->next_timer;
++		spin_unlock(&base->lock);
++	} else {
++		expires = now + 1;
++	}
++#else
+ 	spin_lock(&base->lock);
+ 	if (time_before_eq(base->next_timer, base->timer_jiffies))
+ 		base->next_timer = __next_timer_interrupt(base);
+@@ -1302,7 +1318,7 @@ unsigned long get_next_timer_interrupt(unsigned long now)
+ 
+ 	if (time_before_eq(expires, now))
+ 		return now;
+-
++#endif
+ 	return cmp_next_hrtimer_event(now, expires);
+ }
+ #endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0128-timers-fix-timer-hotplug-on-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0128-timers-fix-timer-hotplug-on-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0128-timers-fix-timer-hotplug-on-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0128-timers-fix-timer-hotplug-on-rt.patch)
@@ -0,0 +1,54 @@
+From dba94b2326edf8fb9aa41552178ab1e6de0c556b Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:32 -0500
+Subject: [PATCH 128/271] timers: fix timer hotplug on -rt
+
+Here we are in the CPU_DEAD notifier, and we must not sleep nor
+enable interrupts.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/timer.c |   12 +++++++++---
+ 1 file changed, 9 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/timer.c b/kernel/timer.c
+index 2aa1215..e36b343 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -1714,6 +1714,7 @@ static void __cpuinit migrate_timers(int cpu)
+ {
+ 	struct tvec_base *old_base;
+ 	struct tvec_base *new_base;
++	unsigned long flags;
+ 	int i;
+ 
+ 	BUG_ON(cpu_online(cpu));
+@@ -1723,8 +1724,11 @@ static void __cpuinit migrate_timers(int cpu)
+ 	 * The caller is globally serialized and nobody else
+ 	 * takes two locks at once, deadlock is not possible.
+ 	 */
+-	spin_lock_irq(&new_base->lock);
+-	spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
++	local_irq_save(flags);
++	while (!spin_trylock(&new_base->lock))
++		cpu_relax();
++	while (!spin_trylock(&old_base->lock))
++		cpu_relax();
+ 
+ 	BUG_ON(old_base->running_timer);
+ 
+@@ -1738,7 +1742,9 @@ static void __cpuinit migrate_timers(int cpu)
+ 	}
+ 
+ 	spin_unlock(&old_base->lock);
+-	spin_unlock_irq(&new_base->lock);
++	spin_unlock(&new_base->lock);
++	local_irq_restore(flags);
++
+ 	put_cpu_var(tvec_bases);
+ }
+ #endif /* CONFIG_HOTPLUG_CPU */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0129-timers-mov-printk_tick-to-soft-interrupt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0129-timers-mov-printk_tick-to-soft-interrupt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0129-timers-mov-printk_tick-to-soft-interrupt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0129-timers-mov-printk_tick-to-soft-interrupt.patch)
@@ -0,0 +1,34 @@
+From d26d5ee4b6d3f1e54ad7377bf9d6c92280463503 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 3 Jul 2009 08:44:30 -0500
+Subject: [PATCH 129/271] timers: mov printk_tick to soft interrupt
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+---
+ kernel/timer.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/timer.c b/kernel/timer.c
+index e36b343..7954334 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -1336,7 +1336,6 @@ void update_process_times(int user_tick)
+ 	account_process_tick(p, user_tick);
+ 	run_local_timers();
+ 	rcu_check_callbacks(cpu, user_tick);
+-	printk_tick();
+ #ifdef CONFIG_IRQ_WORK
+ 	if (in_irq())
+ 		irq_work_run();
+@@ -1352,6 +1351,7 @@ static void run_timer_softirq(struct softirq_action *h)
+ {
+ 	struct tvec_base *base = __this_cpu_read(tvec_bases);
+ 
++	printk_tick();
+ 	hrtimer_run_pending();
+ 
+ 	if (time_after_eq(jiffies, base->timer_jiffies))
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0130-timer-delay-waking-softirqs-from-the-jiffy-tick.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0130-timer-delay-waking-softirqs-from-the-jiffy-tick.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0130-timer-delay-waking-softirqs-from-the-jiffy-tick.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0130-timer-delay-waking-softirqs-from-the-jiffy-tick.patch)
@@ -0,0 +1,80 @@
+From ff871349d460ecd8b294846333ffd6ae176b415c Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Fri, 21 Aug 2009 11:56:45 +0200
+Subject: [PATCH 130/271] timer: delay waking softirqs from the jiffy tick
+
+People were complaining about broken balancing with the recent -rt
+series.
+
+A look at /proc/sched_debug yielded:
+
+cpu#0, 2393.874 MHz
+  .nr_running                    : 0
+  .load                          : 0
+  .cpu_load[0]                   : 177522
+  .cpu_load[1]                   : 177522
+  .cpu_load[2]                   : 177522
+  .cpu_load[3]                   : 177522
+  .cpu_load[4]                   : 177522
+cpu#1, 2393.874 MHz
+  .nr_running                    : 4
+  .load                          : 4096
+  .cpu_load[0]                   : 181618
+  .cpu_load[1]                   : 180850
+  .cpu_load[2]                   : 180274
+  .cpu_load[3]                   : 179938
+  .cpu_load[4]                   : 179758
+
+Which indicated the cpu_load computation was hosed, the 177522 value
+indicates that there is one RT task runnable. Initially I thought the
+old problem of calculating the cpu_load from a softirq had re-surfaced,
+however looking at the code shows it's being done from scheduler_tick().
+
+[ we really should fix this RT/cfs interaction some day... ]
+
+A few trace_printk()s later:
+
+    sirq-timer/1-19    [001]   174.289744:     19: 50:S ==> [001]     0:140:R <idle>
+          <idle>-0     [001]   174.290724: enqueue_task_rt: adding task: 19/sirq-timer/1 with load: 177522
+          <idle>-0     [001]   174.290725:      0:140:R   + [001]    19: 50:S sirq-timer/1
+          <idle>-0     [001]   174.290730: scheduler_tick: current load: 177522
+          <idle>-0     [001]   174.290732: scheduler_tick: current: 0/swapper
+          <idle>-0     [001]   174.290736:      0:140:R ==> [001]    19: 50:R sirq-timer/1
+    sirq-timer/1-19    [001]   174.290741: dequeue_task_rt: removing task: 19/sirq-timer/1 with load: 177522
+    sirq-timer/1-19    [001]   174.290743:     19: 50:S ==> [001]     0:140:R <idle>
+
+We see that we always raise the timer softirq before doing the load
+calculation. Avoid this by re-ordering the scheduler_tick() call in
+update_process_times() to occur before we deal with timers.
+
+This lowers the load back to sanity and restores regular load-balancing
+behaviour.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/timer.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/timer.c b/kernel/timer.c
+index 7954334..d1bc5a9 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -1334,13 +1334,13 @@ void update_process_times(int user_tick)
+ 
+ 	/* Note: this timer irq context must be accounted for as well. */
+ 	account_process_tick(p, user_tick);
++	scheduler_tick();
+ 	run_local_timers();
+ 	rcu_check_callbacks(cpu, user_tick);
+ #ifdef CONFIG_IRQ_WORK
+ 	if (in_irq())
+ 		irq_work_run();
+ #endif
+-	scheduler_tick();
+ 	run_posix_cpu_timers(p);
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0131-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0131-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0131-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0131-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch)
@@ -0,0 +1,75 @@
+From 46052eed5a76f1588062da4f2e0a3099ff0221c2 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 21 Jul 2011 15:23:39 +0200
+Subject: [PATCH 131/271] timers: Avoid the switch timers base set to NULL
+ trick on RT
+
+On RT that code is preemptible, so we cannot assign NULL to timers
+base as a preempter would spin forever in lock_timer_base().
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/timer.c |   40 ++++++++++++++++++++++++++++++++--------
+ 1 file changed, 32 insertions(+), 8 deletions(-)
+
+diff --git a/kernel/timer.c b/kernel/timer.c
+index d1bc5a9..8a9ca7d 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -654,6 +654,36 @@ static struct tvec_base *lock_timer_base(struct timer_list *timer,
+ 	}
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
++static inline struct tvec_base *switch_timer_base(struct timer_list *timer,
++						  struct tvec_base *old,
++						  struct tvec_base *new)
++{
++	/* See the comment in lock_timer_base() */
++	timer_set_base(timer, NULL);
++	spin_unlock(&old->lock);
++	spin_lock(&new->lock);
++	timer_set_base(timer, new);
++	return new;
++}
++#else
++static inline struct tvec_base *switch_timer_base(struct timer_list *timer,
++						  struct tvec_base *old,
++						  struct tvec_base *new)
++{
++	/*
++	 * We cannot do the above because we might be preempted and
++	 * then the preempter would see NULL and loop forever.
++	 */
++	if (spin_trylock(&new->lock)) {
++		timer_set_base(timer, new);
++		spin_unlock(&old->lock);
++		return new;
++	}
++	return old;
++}
++#endif
++
+ static inline int
+ __mod_timer(struct timer_list *timer, unsigned long expires,
+ 						bool pending_only, int pinned)
+@@ -699,14 +729,8 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
+ 		 * handler yet has not finished. This also guarantees that
+ 		 * the timer is serialized wrt itself.
+ 		 */
+-		if (likely(base->running_timer != timer)) {
+-			/* See the comment in lock_timer_base() */
+-			timer_set_base(timer, NULL);
+-			spin_unlock(&base->lock);
+-			base = new_base;
+-			spin_lock(&base->lock);
+-			timer_set_base(timer, base);
+-		}
++		if (likely(base->running_timer != timer))
++			base = switch_timer_base(timer, base, new_base);
+ 	}
+ 
+ 	timer->expires = expires;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0132-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0132-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0132-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0132-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch)
@@ -0,0 +1,52 @@
+From ba1c06e48ffaa241200a5bfc9c8e0a5f2d891eb7 Mon Sep 17 00:00:00 2001
+From: Yong Zhang <yong.zhang0 at gmail.com>
+Date: Sun, 16 Oct 2011 18:56:45 +0800
+Subject: [PATCH 132/271] printk: Don't call printk_tick in printk_needs_cpu()
+ on RT
+
+printk_tick() can't be called in atomic context when RT is enabled,
+otherwise the warning below will show:
+
+[  117.597095] BUG: sleeping function called from invalid context at kernel/rtmutex.c:645
+[  117.597102] in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: kworker/0:0
+[  117.597111] Pid: 0, comm: kworker/0:0 Not tainted 3.0.6-rt17-00284-gb76d419-dirty #7
+[  117.597116] Call Trace:
+[  117.597131]  [<c06e3b61>] ? printk+0x1d/0x24
+[  117.597142]  [<c01390b6>] __might_sleep+0xe6/0x110
+[  117.597151]  [<c06e634c>] rt_spin_lock+0x1c/0x30
+[  117.597158]  [<c0142f26>] __wake_up+0x26/0x60
+[  117.597166]  [<c014c78e>] printk_tick+0x3e/0x40
+[  117.597173]  [<c014c7b4>] printk_needs_cpu+0x24/0x30
+[  117.597181]  [<c017ecc8>] tick_nohz_stop_sched_tick+0x2e8/0x410
+[  117.597191]  [<c017305a>] ? sched_clock_idle_wakeup_event+0x1a/0x20
+[  117.597201]  [<c010182a>] cpu_idle+0x4a/0xb0
+[  117.597209]  [<c06e0b97>] start_secondary+0xd3/0xd7
+
+Now this is a really rare case and it's very unlikely that we starve
+a logbuf waiter that way.
+
+Signed-off-by: Yong Zhang <yong.zhang0 at gmail.com>
+Link: http://lkml.kernel.org/r/1318762607-2261-4-git-send-email-yong.zhang0@gmail.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/printk.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/printk.c b/kernel/printk.c
+index 1f06626..2b95bc0 100644
+--- a/kernel/printk.c
++++ b/kernel/printk.c
+@@ -1274,8 +1274,8 @@ void printk_tick(void)
+ 
+ int printk_needs_cpu(int cpu)
+ {
+-	if (cpu_is_offline(cpu))
+-		printk_tick();
++	if (unlikely(cpu_is_offline(cpu)))
++		__this_cpu_write(printk_pending, 0);
+ 	return __this_cpu_read(printk_pending);
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0133-hrtimers-prepare-full-preemption.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0133-hrtimers-prepare-full-preemption.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0133-hrtimers-prepare-full-preemption.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0133-hrtimers-prepare-full-preemption.patch)
@@ -0,0 +1,206 @@
+From 5e424120158970ab67082b0102c193a24ba90482 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:29:34 -0500
+Subject: [PATCH 133/271] hrtimers: prepare full preemption
+
+Make cancellation of a running callback in softirq context safe
+against preemption.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/hrtimer.h |   10 ++++++++++
+ kernel/hrtimer.c        |   33 ++++++++++++++++++++++++++++++++-
+ kernel/itimer.c         |    1 +
+ kernel/posix-timers.c   |   33 +++++++++++++++++++++++++++++++++
+ 4 files changed, 76 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index fd0dc30..e8b395d 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -187,6 +187,9 @@ struct hrtimer_cpu_base {
+ 	unsigned long			nr_hangs;
+ 	ktime_t				max_hang_time;
+ #endif
++#ifdef CONFIG_PREEMPT_RT_BASE
++	wait_queue_head_t		wait;
++#endif
+ 	struct hrtimer_clock_base	clock_base[HRTIMER_MAX_CLOCK_BASES];
+ };
+ 
+@@ -374,6 +377,13 @@ static inline int hrtimer_restart(struct hrtimer *timer)
+ 	return hrtimer_start_expires(timer, HRTIMER_MODE_ABS);
+ }
+ 
++/* Softirq preemption could deadlock timer removal */
++#ifdef CONFIG_PREEMPT_RT_BASE
++  extern void hrtimer_wait_for_timer(const struct hrtimer *timer);
++#else
++# define hrtimer_wait_for_timer(timer)	do { cpu_relax(); } while (0)
++#endif
++
+ /* Query timers: */
+ extern ktime_t hrtimer_get_remaining(const struct hrtimer *timer);
+ extern int hrtimer_get_res(const clockid_t which_clock, struct timespec *tp);
+diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
+index 1a3695e..905e2cd2 100644
+--- a/kernel/hrtimer.c
++++ b/kernel/hrtimer.c
+@@ -847,6 +847,32 @@ u64 hrtimer_forward(struct hrtimer *timer, ktime_t now, ktime_t interval)
+ }
+ EXPORT_SYMBOL_GPL(hrtimer_forward);
+ 
++#ifdef CONFIG_PREEMPT_RT_BASE
++# define wake_up_timer_waiters(b)	wake_up(&(b)->wait)
++
++/**
++ * hrtimer_wait_for_timer - Wait for a running timer
++ *
++ * @timer:	timer to wait for
++ *
++ * The function waits in case the timer's callback function is
++ * currently executed on the waitqueue of the timer base. The
++ * waitqueue is woken up after the timer callback function has
++ * finished execution.
++ */
++void hrtimer_wait_for_timer(const struct hrtimer *timer)
++{
++	struct hrtimer_clock_base *base = timer->base;
++
++	if (base && base->cpu_base && !hrtimer_hres_active(base->cpu_base))
++		wait_event(base->cpu_base->wait,
++				!(timer->state & HRTIMER_STATE_CALLBACK));
++}
++
++#else
++# define wake_up_timer_waiters(b)	do { } while (0)
++#endif
++
+ /*
+  * enqueue_hrtimer - internal function to (re)start a timer
+  *
+@@ -1073,7 +1099,7 @@ int hrtimer_cancel(struct hrtimer *timer)
+ 
+ 		if (ret >= 0)
+ 			return ret;
+-		cpu_relax();
++		hrtimer_wait_for_timer(timer);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(hrtimer_cancel);
+@@ -1476,6 +1502,8 @@ void hrtimer_run_queues(void)
+ 		}
+ 		raw_spin_unlock(&cpu_base->lock);
+ 	}
++
++	wake_up_timer_waiters(cpu_base);
+ }
+ 
+ /*
+@@ -1638,6 +1666,9 @@ static void __cpuinit init_hrtimers_cpu(int cpu)
+ 	}
+ 
+ 	hrtimer_init_hres(cpu_base);
++#ifdef CONFIG_PREEMPT_RT_BASE
++	init_waitqueue_head(&cpu_base->wait);
++#endif
+ }
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+diff --git a/kernel/itimer.c b/kernel/itimer.c
+index d802883..2c582fc 100644
+--- a/kernel/itimer.c
++++ b/kernel/itimer.c
+@@ -214,6 +214,7 @@ again:
+ 		/* We are sharing ->siglock with it_real_fn() */
+ 		if (hrtimer_try_to_cancel(timer) < 0) {
+ 			spin_unlock_irq(&tsk->sighand->siglock);
++			hrtimer_wait_for_timer(&tsk->signal->real_timer);
+ 			goto again;
+ 		}
+ 		expires = timeval_to_ktime(value->it_value);
+diff --git a/kernel/posix-timers.c b/kernel/posix-timers.c
+index 7b73c34..6a74800 100644
+--- a/kernel/posix-timers.c
++++ b/kernel/posix-timers.c
+@@ -766,6 +766,20 @@ SYSCALL_DEFINE1(timer_getoverrun, timer_t, timer_id)
+ 	return overrun;
+ }
+ 
++/*
++ * Protected by RCU!
++ */
++static void timer_wait_for_callback(struct k_clock *kc, struct k_itimer *timr)
++{
++#ifdef CONFIG_PREEMPT_RT_FULL
++	if (kc->timer_set == common_timer_set)
++		hrtimer_wait_for_timer(&timr->it.real.timer);
++	else
++		/* FIXME: Whacky hack for posix-cpu-timers */
++		schedule_timeout(1);
++#endif
++}
++
+ /* Set a POSIX.1b interval timer. */
+ /* timr->it_lock is taken. */
+ static int
+@@ -843,6 +857,7 @@ retry:
+ 	if (!timr)
+ 		return -EINVAL;
+ 
++	rcu_read_lock();
+ 	kc = clockid_to_kclock(timr->it_clock);
+ 	if (WARN_ON_ONCE(!kc || !kc->timer_set))
+ 		error = -EINVAL;
+@@ -851,9 +866,12 @@ retry:
+ 
+ 	unlock_timer(timr, flag);
+ 	if (error == TIMER_RETRY) {
++		timer_wait_for_callback(kc, timr);
+ 		rtn = NULL;	// We already got the old time...
++		rcu_read_unlock();
+ 		goto retry;
+ 	}
++	rcu_read_unlock();
+ 
+ 	if (old_setting && !error &&
+ 	    copy_to_user(old_setting, &old_spec, sizeof (old_spec)))
+@@ -891,10 +909,15 @@ retry_delete:
+ 	if (!timer)
+ 		return -EINVAL;
+ 
++	rcu_read_lock();
+ 	if (timer_delete_hook(timer) == TIMER_RETRY) {
+ 		unlock_timer(timer, flags);
++		timer_wait_for_callback(clockid_to_kclock(timer->it_clock),
++					timer);
++		rcu_read_unlock();
+ 		goto retry_delete;
+ 	}
++	rcu_read_unlock();
+ 
+ 	spin_lock(&current->sighand->siglock);
+ 	list_del(&timer->list);
+@@ -920,8 +943,18 @@ static void itimer_delete(struct k_itimer *timer)
+ retry_delete:
+ 	spin_lock_irqsave(&timer->it_lock, flags);
+ 
++	/* On RT we can race with a deletion */
++	if (!timer->it_signal) {
++		unlock_timer(timer, flags);
++		return;
++	}
++
+ 	if (timer_delete_hook(timer) == TIMER_RETRY) {
++		rcu_read_lock();
+ 		unlock_timer(timer, flags);
++		timer_wait_for_callback(clockid_to_kclock(timer->it_clock),
++					timer);
++		rcu_read_unlock();
+ 		goto retry_delete;
+ 	}
+ 	list_del(&timer->list);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0134-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0134-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0134-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0134-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch)
@@ -0,0 +1,418 @@
+From 2c4c5c9e998f52729077c5664996e02f8235b5e8 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 3 Jul 2009 08:44:31 -0500
+Subject: [PATCH 134/271] hrtimer: fixup hrtimer callback changes for
+ preempt-rt
+
+In preempt-rt we can not call the callbacks which take sleeping locks
+from the timer interrupt context.
+
+Bring back the softirq split for now, until we fixed the signal
+delivery problem for real.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+---
+ include/linux/hrtimer.h  |    3 +
+ kernel/hrtimer.c         |  190 +++++++++++++++++++++++++++++++++++++++++-----
+ kernel/sched.c           |    2 +
+ kernel/time/tick-sched.c |    1 +
+ kernel/watchdog.c        |    1 +
+ 5 files changed, 179 insertions(+), 18 deletions(-)
+
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index e8b395d..0e37086 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -111,6 +111,8 @@ struct hrtimer {
+ 	enum hrtimer_restart		(*function)(struct hrtimer *);
+ 	struct hrtimer_clock_base	*base;
+ 	unsigned long			state;
++	struct list_head		cb_entry;
++	int				irqsafe;
+ #ifdef CONFIG_TIMER_STATS
+ 	int				start_pid;
+ 	void				*start_site;
+@@ -147,6 +149,7 @@ struct hrtimer_clock_base {
+ 	int			index;
+ 	clockid_t		clockid;
+ 	struct timerqueue_head	active;
++	struct list_head	expired;
+ 	ktime_t			resolution;
+ 	ktime_t			(*get_time)(void);
+ 	ktime_t			softirq_time;
+diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
+index 905e2cd2..1dd627b 100644
+--- a/kernel/hrtimer.c
++++ b/kernel/hrtimer.c
+@@ -589,8 +589,7 @@ static int hrtimer_reprogram(struct hrtimer *timer,
+ 	 * When the callback is running, we do not reprogram the clock event
+ 	 * device. The timer callback is either running on a different CPU or
+ 	 * the callback is executed in the hrtimer_interrupt context. The
+-	 * reprogramming is handled either by the softirq, which called the
+-	 * callback or at the end of the hrtimer_interrupt.
++	 * reprogramming is handled at the end of the hrtimer_interrupt.
+ 	 */
+ 	if (hrtimer_callback_running(timer))
+ 		return 0;
+@@ -625,6 +624,9 @@ static int hrtimer_reprogram(struct hrtimer *timer,
+ 	return res;
+ }
+ 
++static void __run_hrtimer(struct hrtimer *timer, ktime_t *now);
++static int hrtimer_rt_defer(struct hrtimer *timer);
++
+ /*
+  * Initialize the high resolution related parts of cpu_base
+  */
+@@ -644,7 +646,29 @@ static inline int hrtimer_enqueue_reprogram(struct hrtimer *timer,
+ 					    struct hrtimer_clock_base *base,
+ 					    int wakeup)
+ {
++#ifdef CONFIG_PREEMPT_RT_BASE
++again:
++	if (base->cpu_base->hres_active && hrtimer_reprogram(timer, base)) {
++		/*
++		 * Move softirq based timers away from the rbtree in
++		 * case it expired already. Otherwise we would have a
++		 * stale base->first entry until the softirq runs.
++		 */
++		if (!hrtimer_rt_defer(timer)) {
++			ktime_t now = ktime_get();
++
++			__run_hrtimer(timer, &now);
++			/*
++			 * __run_hrtimer might have requeued timer and
++			 * it could be base->first again.
++			 */
++			if (&timer->node == base->active.next)
++				goto again;
++			return 1;
++		}
++#else
+ 	if (base->cpu_base->hres_active && hrtimer_reprogram(timer, base)) {
++#endif
+ 		if (wakeup) {
+ 			raw_spin_unlock(&base->cpu_base->lock);
+ 			raise_softirq_irqoff(HRTIMER_SOFTIRQ);
+@@ -733,6 +757,11 @@ static inline int hrtimer_enqueue_reprogram(struct hrtimer *timer,
+ }
+ static inline void hrtimer_init_hres(struct hrtimer_cpu_base *base) { }
+ static inline void retrigger_next_event(void *arg) { }
++static inline int hrtimer_reprogram(struct hrtimer *timer,
++				    struct hrtimer_clock_base *base)
++{
++	return 0;
++}
+ 
+ #endif /* CONFIG_HIGH_RES_TIMERS */
+ 
+@@ -864,9 +893,9 @@ void hrtimer_wait_for_timer(const struct hrtimer *timer)
+ {
+ 	struct hrtimer_clock_base *base = timer->base;
+ 
+-	if (base && base->cpu_base && !hrtimer_hres_active(base->cpu_base))
++	if (base && base->cpu_base && !timer->irqsafe)
+ 		wait_event(base->cpu_base->wait,
+-				!(timer->state & HRTIMER_STATE_CALLBACK));
++			   !(timer->state & HRTIMER_STATE_CALLBACK));
+ }
+ 
+ #else
+@@ -916,6 +945,11 @@ static void __remove_hrtimer(struct hrtimer *timer,
+ 	if (!(timer->state & HRTIMER_STATE_ENQUEUED))
+ 		goto out;
+ 
++	if (unlikely(!list_empty(&timer->cb_entry))) {
++		list_del_init(&timer->cb_entry);
++		goto out;
++	}
++
+ 	next_timer = timerqueue_getnext(&base->active);
+ 	timerqueue_del(&base->active, &timer->node);
+ 	if (&timer->node == next_timer) {
+@@ -1178,6 +1212,7 @@ static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
+ 
+ 	base = hrtimer_clockid_to_base(clock_id);
+ 	timer->base = &cpu_base->clock_base[base];
++	INIT_LIST_HEAD(&timer->cb_entry);
+ 	timerqueue_init(&timer->node);
+ 
+ #ifdef CONFIG_TIMER_STATS
+@@ -1261,10 +1296,118 @@ static void __run_hrtimer(struct hrtimer *timer, ktime_t *now)
+ 	timer->state &= ~HRTIMER_STATE_CALLBACK;
+ }
+ 
+-#ifdef CONFIG_HIGH_RES_TIMERS
+-
+ static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer);
+ 
++#ifdef CONFIG_PREEMPT_RT_BASE
++static void hrtimer_rt_reprogram(int restart, struct hrtimer *timer,
++				 struct hrtimer_clock_base *base)
++{
++	/*
++	 * Note, we clear the callback flag before we requeue the
++	 * timer otherwise we trigger the callback_running() check
++	 * in hrtimer_reprogram().
++	 */
++	timer->state &= ~HRTIMER_STATE_CALLBACK;
++
++	if (restart != HRTIMER_NORESTART) {
++		BUG_ON(hrtimer_active(timer));
++		/*
++		 * Enqueue the timer, if it's the leftmost timer then
++		 * we need to reprogram it.
++		 */
++		if (!enqueue_hrtimer(timer, base))
++			return;
++
++		if (hrtimer_reprogram(timer, base))
++			goto requeue;
++
++	} else if (hrtimer_active(timer)) {
++		/*
++		 * If the timer was rearmed on another CPU, reprogram
++		 * the event device.
++		 */
++		if (&timer->node == base->active.next &&
++		    hrtimer_reprogram(timer, base))
++			goto requeue;
++	}
++	return;
++
++requeue:
++	/*
++	 * Timer is expired. Thus move it from tree to pending list
++	 * again.
++	 */
++	__remove_hrtimer(timer, base, timer->state, 0);
++	list_add_tail(&timer->cb_entry, &base->expired);
++}
++
++/*
++ * The changes in mainline which removed the callback modes from
++ * hrtimer are not yet working with -rt. The non wakeup_process()
++ * based callbacks which involve sleeping locks need to be treated
++ * separately.
++ */
++static void hrtimer_rt_run_pending(void)
++{
++	enum hrtimer_restart (*fn)(struct hrtimer *);
++	struct hrtimer_cpu_base *cpu_base;
++	struct hrtimer_clock_base *base;
++	struct hrtimer *timer;
++	int index, restart;
++
++	local_irq_disable();
++	cpu_base = &per_cpu(hrtimer_bases, smp_processor_id());
++
++	raw_spin_lock(&cpu_base->lock);
++
++	for (index = 0; index < HRTIMER_MAX_CLOCK_BASES; index++) {
++		base = &cpu_base->clock_base[index];
++
++		while (!list_empty(&base->expired)) {
++			timer = list_first_entry(&base->expired,
++						 struct hrtimer, cb_entry);
++
++			/*
++			 * Same as the above __run_hrtimer function
++			 * just we run with interrupts enabled.
++			 */
++			debug_hrtimer_deactivate(timer);
++			__remove_hrtimer(timer, base, HRTIMER_STATE_CALLBACK, 0);
++			timer_stats_account_hrtimer(timer);
++			fn = timer->function;
++
++			raw_spin_unlock_irq(&cpu_base->lock);
++			restart = fn(timer);
++			raw_spin_lock_irq(&cpu_base->lock);
++
++			hrtimer_rt_reprogram(restart, timer, base);
++		}
++	}
++
++	raw_spin_unlock_irq(&cpu_base->lock);
++
++	wake_up_timer_waiters(cpu_base);
++}
++
++static int hrtimer_rt_defer(struct hrtimer *timer)
++{
++	if (timer->irqsafe)
++		return 0;
++
++	__remove_hrtimer(timer, timer->base, timer->state, 0);
++	list_add_tail(&timer->cb_entry, &timer->base->expired);
++	return 1;
++}
++
++#else
++
++static inline void hrtimer_rt_run_pending(void) { }
++static inline int hrtimer_rt_defer(struct hrtimer *timer) { return 0; }
++
++#endif
++
++#ifdef CONFIG_HIGH_RES_TIMERS
++
+ /*
+  * High resolution timer interrupt
+  * Called with interrupts disabled
+@@ -1273,7 +1416,7 @@ void hrtimer_interrupt(struct clock_event_device *dev)
+ {
+ 	struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
+ 	ktime_t expires_next, now, entry_time, delta;
+-	int i, retries = 0;
++	int i, retries = 0, raise = 0;
+ 
+ 	BUG_ON(!cpu_base->hres_active);
+ 	cpu_base->nr_events++;
+@@ -1340,7 +1483,10 @@ retry:
+ 				break;
+ 			}
+ 
+-			__run_hrtimer(timer, &basenow);
++			if (!hrtimer_rt_defer(timer))
++				__run_hrtimer(timer, &basenow);
++			else
++				raise = 1;
+ 		}
+ 	}
+ 
+@@ -1355,6 +1501,10 @@ retry:
+ 	if (expires_next.tv64 == KTIME_MAX ||
+ 	    !tick_program_event(expires_next, 0)) {
+ 		cpu_base->hang_detected = 0;
++
++		if (raise)
++			raise_softirq_irqoff(HRTIMER_SOFTIRQ);
++
+ 		return;
+ 	}
+ 
+@@ -1430,17 +1580,17 @@ void hrtimer_peek_ahead_timers(void)
+ 	local_irq_restore(flags);
+ }
+ 
+-static void run_hrtimer_softirq(struct softirq_action *h)
+-{
+-	hrtimer_peek_ahead_timers();
+-}
+-
+ #else /* CONFIG_HIGH_RES_TIMERS */
+ 
+ static inline void __hrtimer_peek_ahead_timers(void) { }
+ 
+ #endif	/* !CONFIG_HIGH_RES_TIMERS */
+ 
++static void run_hrtimer_softirq(struct softirq_action *h)
++{
++	hrtimer_rt_run_pending();
++}
++
+ /*
+  * Called from timer softirq every jiffy, expire hrtimers:
+  *
+@@ -1473,7 +1623,7 @@ void hrtimer_run_queues(void)
+ 	struct timerqueue_node *node;
+ 	struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
+ 	struct hrtimer_clock_base *base;
+-	int index, gettime = 1;
++	int index, gettime = 1, raise = 0;
+ 
+ 	if (hrtimer_hres_active())
+ 		return;
+@@ -1498,12 +1648,16 @@ void hrtimer_run_queues(void)
+ 					hrtimer_get_expires_tv64(timer))
+ 				break;
+ 
+-			__run_hrtimer(timer, &base->softirq_time);
++			if (!hrtimer_rt_defer(timer))
++				__run_hrtimer(timer, &base->softirq_time);
++			else
++				raise = 1;
+ 		}
+ 		raw_spin_unlock(&cpu_base->lock);
+ 	}
+ 
+-	wake_up_timer_waiters(cpu_base);
++	if (raise)
++		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
+ }
+ 
+ /*
+@@ -1525,6 +1679,7 @@ static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer)
+ void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, struct task_struct *task)
+ {
+ 	sl->timer.function = hrtimer_wakeup;
++	sl->timer.irqsafe = 1;
+ 	sl->task = task;
+ }
+ EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);
+@@ -1663,6 +1818,7 @@ static void __cpuinit init_hrtimers_cpu(int cpu)
+ 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
+ 		cpu_base->clock_base[i].cpu_base = cpu_base;
+ 		timerqueue_init_head(&cpu_base->clock_base[i].active);
++		INIT_LIST_HEAD(&cpu_base->clock_base[i].expired);
+ 	}
+ 
+ 	hrtimer_init_hres(cpu_base);
+@@ -1781,9 +1937,7 @@ void __init hrtimers_init(void)
+ 	hrtimer_cpu_notify(&hrtimers_nb, (unsigned long)CPU_UP_PREPARE,
+ 			  (void *)(long)smp_processor_id());
+ 	register_cpu_notifier(&hrtimers_nb);
+-#ifdef CONFIG_HIGH_RES_TIMERS
+ 	open_softirq(HRTIMER_SOFTIRQ, run_hrtimer_softirq);
+-#endif
+ }
+ 
+ /**
+diff --git a/kernel/sched.c b/kernel/sched.c
+index e1fee8d..dbb7d80 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -189,6 +189,7 @@ void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime)
+ 
+ 	hrtimer_init(&rt_b->rt_period_timer,
+ 			CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	rt_b->rt_period_timer.irqsafe = 1;
+ 	rt_b->rt_period_timer.function = sched_rt_period_timer;
+ }
+ 
+@@ -1277,6 +1278,7 @@ static void init_rq_hrtick(struct rq *rq)
+ 
+ 	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	rq->hrtick_timer.function = hrtick;
++	rq->hrtick_timer.irqsafe = 1;
+ }
+ #else	/* CONFIG_SCHED_HRTICK */
+ static inline void hrtick_clear(struct rq *rq)
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index d7abd2f..a2c2a64 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -802,6 +802,7 @@ void tick_setup_sched_timer(void)
+ 	 * Emulate tick processing via per-CPU hrtimers:
+ 	 */
+ 	hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
++	ts->sched_timer.irqsafe = 1;
+ 	ts->sched_timer.function = tick_sched_timer;
+ 
+ 	/* Get the next period (per cpu) */
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index c7e2a2f..c0c47d7 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -436,6 +436,7 @@ static void watchdog_prepare_cpu(int cpu)
+ 	WARN_ON(per_cpu(softlockup_watchdog, cpu));
+ 	hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	hrtimer->function = watchdog_timer_fn;
++	hrtimer->irqsafe = 1;
+ }
+ 
+ static int watchdog_enable(int cpu)
+-- 
+1.7.10
+

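[Editor's note] The patch above makes the RT kernel run non-irqsafe hrtimer callbacks from HRTIMER_SOFTIRQ instead of hard-irq context: expired timers that cannot run with interrupts disabled are moved to a per-base `expired` list and drained later. A minimal userspace sketch of that defer-then-drain pattern (all `toy_*` names are illustrative, not the real kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the RT change: irqsafe timers run immediately,
 * everything else is pushed on an "expired" list and drained later
 * (in the kernel: from the HRTIMER_SOFTIRQ handler). */
struct toy_timer {
	int irqsafe;
	int fired;
	struct toy_timer *next;       /* links the "expired" list */
};

static struct toy_timer *expired; /* stands in for base->expired */

/* hrtimer_rt_defer() analogue: returns 1 if the timer was deferred. */
static int toy_defer(struct toy_timer *t)
{
	if (t->irqsafe)
		return 0;             /* caller runs it inline */
	t->next = expired;
	expired = t;
	return 1;
}

/* hrtimer_rt_run_pending() analogue: drain the deferred list. */
static void toy_run_pending(void)
{
	while (expired) {
		struct toy_timer *t = expired;
		expired = t->next;
		t->fired = 1;         /* stands in for fn(timer) */
	}
}
```

As in the hunks above, the hard-irq path only sets a `raise` flag when something was deferred and then raises the softirq once, instead of running each handler inline.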
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0135-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0135-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0135-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0135-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch)
@@ -0,0 +1,111 @@
+From b7dd0bd76953eb357261f366476b8bac651001b6 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Fri, 12 Aug 2011 17:39:54 +0200
+Subject: [PATCH 135/271] hrtimer: Don't call the timer handler from
+ hrtimer_start
+
+ [<ffffffff812de4a9>] __delay+0xf/0x11
+ [<ffffffff812e36e9>] do_raw_spin_lock+0xd2/0x13c
+ [<ffffffff815028ee>] _raw_spin_lock+0x60/0x73              rt_b->rt_runtime_lock
+ [<ffffffff81068f68>] ? sched_rt_period_timer+0xad/0x281
+ [<ffffffff81068f68>] sched_rt_period_timer+0xad/0x281
+ [<ffffffff8109e5e1>] __run_hrtimer+0x1e4/0x347
+ [<ffffffff81068ebb>] ? enqueue_rt_entity+0x36/0x36
+ [<ffffffff8109f2b1>] __hrtimer_start_range_ns+0x2b5/0x40a  base->cpu_base->lock  (lock_hrtimer_base)
+ [<ffffffff81068b6f>] __enqueue_rt_entity+0x26f/0x2aa       rt_b->rt_runtime_lock (start_rt_bandwidth)
+ [<ffffffff81068ead>] enqueue_rt_entity+0x28/0x36
+ [<ffffffff81069355>] enqueue_task_rt+0x3d/0xb0
+ [<ffffffff810679d6>] enqueue_task+0x5d/0x64
+ [<ffffffff810714fc>] task_setprio+0x210/0x29c              rq->lock
+ [<ffffffff810b56cb>] __rt_mutex_adjust_prio+0x25/0x2a      p->pi_lock
+ [<ffffffff810b5d2c>] task_blocks_on_rt_mutex+0x196/0x20f
+
+Instead make __hrtimer_start_range_ns() return -ETIME when the timer
+is in the past. Since nobody actually uses the hrtimer_start*() return
+value it's pretty safe to wreck it.
+
+Also, it will only ever return -ETIME for timer->irqsafe || !wakeup
+timers.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+---
+ kernel/hrtimer.c |   48 +++++++++++++++++++++++-------------------------
+ 1 file changed, 23 insertions(+), 25 deletions(-)
+
+diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
+index 1dd627b..358442b 100644
+--- a/kernel/hrtimer.c
++++ b/kernel/hrtimer.c
+@@ -646,37 +646,24 @@ static inline int hrtimer_enqueue_reprogram(struct hrtimer *timer,
+ 					    struct hrtimer_clock_base *base,
+ 					    int wakeup)
+ {
+-#ifdef CONFIG_PREEMPT_RT_BASE
+-again:
+ 	if (base->cpu_base->hres_active && hrtimer_reprogram(timer, base)) {
++		if (!wakeup)
++			return -ETIME;
++
++#ifdef CONFIG_PREEMPT_RT_BASE
+ 		/*
+ 		 * Move softirq based timers away from the rbtree in
+ 		 * case it expired already. Otherwise we would have a
+ 		 * stale base->first entry until the softirq runs.
+ 		 */
+-		if (!hrtimer_rt_defer(timer)) {
+-			ktime_t now = ktime_get();
+-
+-			__run_hrtimer(timer, &now);
+-			/*
+-			 * __run_hrtimer might have requeued timer and
+-			 * it could be base->first again.
+-			 */
+-			if (&timer->node == base->active.next)
+-				goto again;
+-			return 1;
+-		}
+-#else
+-	if (base->cpu_base->hres_active && hrtimer_reprogram(timer, base)) {
++		if (!hrtimer_rt_defer(timer))
++			return -ETIME;
+ #endif
+-		if (wakeup) {
+-			raw_spin_unlock(&base->cpu_base->lock);
+-			raise_softirq_irqoff(HRTIMER_SOFTIRQ);
+-			raw_spin_lock(&base->cpu_base->lock);
+-		} else
+-			__raise_softirq_irqoff(HRTIMER_SOFTIRQ);
++		raw_spin_unlock(&base->cpu_base->lock);
++		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
++		raw_spin_lock(&base->cpu_base->lock);
+ 
+-		return 1;
++		return 0;
+ 	}
+ 
+ 	return 0;
+@@ -1046,8 +1033,19 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ 	 *
+ 	 * XXX send_remote_softirq() ?
+ 	 */
+-	if (leftmost && new_base->cpu_base == &__get_cpu_var(hrtimer_bases))
+-		hrtimer_enqueue_reprogram(timer, new_base, wakeup);
++	if (leftmost && new_base->cpu_base == &__get_cpu_var(hrtimer_bases)) {
++		ret = hrtimer_enqueue_reprogram(timer, new_base, wakeup);
++		if (ret) {
++			/*
++			 * In case we failed to reprogram the timer (mostly
++			 * because our current timer is already elapsed),
++			 * remove it again and report a failure. This avoids
++			 * stale base->first entries.
++			 */
++			__remove_hrtimer(timer, new_base,
++					timer->state & HRTIMER_STATE_CALLBACK, 0);
++		}
++	}
+ 
+ 	unlock_hrtimer_base(timer, &flags);
+ 
+-- 
+1.7.10
+

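[Editor's note] Patch 0135 changes the arm path so that an already-elapsed timer is reported with -ETIME (after being removed again, to avoid a stale `base->first` entry) rather than having its handler invoked from `hrtimer_start()` under the caller's locks. A toy sketch of that contract (hypothetical names, not the kernel API):

```c
#include <assert.h>
#include <errno.h>

/* Toy model of the 0135 change: instead of invoking the handler from
 * the arm path when the deadline already passed, undo the enqueue and
 * report -ETIME so the caller handles the expiry itself. */
struct toy_hrtimer {
	long long expires;
	int enqueued;
};

static long long toy_now;          /* stands in for ktime_get() */

static int toy_start(struct toy_hrtimer *t, long long expires)
{
	t->expires = expires;
	t->enqueued = 1;           /* enqueue_hrtimer() analogue */
	if (expires <= toy_now) {
		/* Reprogramming failed: the timer already elapsed.
		 * Remove it again so no stale ->first entry remains. */
		t->enqueued = 0;   /* __remove_hrtimer() analogue */
		return -ETIME;
	}
	return 0;
}
```

This mirrors why the patch notes the return value can be "wrecked": callers that arm a future timer still get 0, and only the already-expired case surfaces the new error.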
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0136-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0136-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0136-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0136-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch)
@@ -0,0 +1,41 @@
+From 5100fc732eac1a036fa5e532336d6b2565945415 Mon Sep 17 00:00:00 2001
+From: Yong Zhang <yong.zhang0 at gmail.com>
+Date: Thu, 13 Oct 2011 15:52:30 +0800
+Subject: [PATCH 136/271] hrtimer: Add missing debug_activate() aid [Was: Re:
+ [ANNOUNCE] 3.0.6-rt17]
+
+On Fri, Oct 07, 2011 at 10:25:25AM -0700, Fernando Lopez-Lezcano wrote:
+> On 10/06/2011 06:15 PM, Thomas Gleixner wrote:
+> >Dear RT Folks,
+> >
+> >I'm pleased to announce the 3.0.6-rt17 release.
+>
+> Hi and thanks again. So far this one is not hanging which is very
+> good news. But I still see the hrtimer_fixup_activate warnings I
+> reported for rt16...
+
+Hi Fernando,
+
+I think below patch will smooth your concern?
+
+Thanks,
+Yong
+---
+ kernel/hrtimer.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
+index 358442b..d363df8 100644
+--- a/kernel/hrtimer.c
++++ b/kernel/hrtimer.c
+@@ -1042,6 +1042,7 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ 			 * remove it again and report a failure. This avoids
+ 			 * stale base->first entries.
+ 			 */
++			debug_deactivate(timer);
+ 			__remove_hrtimer(timer, new_base,
+ 					timer->state & HRTIMER_STATE_CALLBACK, 0);
+ 		}
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0137-hrtimer-fix-reprogram-madness.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0137-hrtimer-fix-reprogram-madness.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0137-hrtimer-fix-reprogram-madness.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0137-hrtimer-fix-reprogram-madness.patch.patch)
@@ -0,0 +1,46 @@
+From 83fea32cbee51711340897638dfb28040990ac65 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 14 Sep 2011 14:48:43 +0200
+Subject: [PATCH 137/271] hrtimer-fix-reprogram-madness.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/hrtimer.c |    8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
+index d363df8..3991464 100644
+--- a/kernel/hrtimer.c
++++ b/kernel/hrtimer.c
+@@ -1317,7 +1317,11 @@ static void hrtimer_rt_reprogram(int restart, struct hrtimer *timer,
+ 		if (!enqueue_hrtimer(timer, base))
+ 			return;
+ 
+-		if (hrtimer_reprogram(timer, base))
++#ifndef CONFIG_HIGH_RES_TIMERS
++	}
++#else
++		if (base->cpu_base->hres_active &&
++		    hrtimer_reprogram(timer, base))
+ 			goto requeue;
+ 
+ 	} else if (hrtimer_active(timer)) {
+@@ -1326,6 +1330,7 @@ static void hrtimer_rt_reprogram(int restart, struct hrtimer *timer,
+ 		 * the event device.
+ 		 */
+ 		if (&timer->node == base->active.next &&
++		    base->cpu_base->hres_active &&
+ 		    hrtimer_reprogram(timer, base))
+ 			goto requeue;
+ 	}
+@@ -1338,6 +1343,7 @@ requeue:
+ 	 */
+ 	__remove_hrtimer(timer, base, timer->state, 0);
+ 	list_add_tail(&timer->cb_entry, &base->expired);
++#endif
+ }
+ 
+ /*
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0138-timer-fd-Prevent-live-lock.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0138-timer-fd-Prevent-live-lock.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0138-timer-fd-Prevent-live-lock.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0138-timer-fd-Prevent-live-lock.patch)
@@ -0,0 +1,33 @@
+From 4403d2d83fd79737bf901ee357d472781148fbd1 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 25 Jan 2012 11:08:40 +0100
+Subject: [PATCH 138/271] timer-fd: Prevent live lock
+
+If hrtimer_try_to_cancel() requires a retry, then depending on the
+priority setting te retry loop might prevent timer callback completion
+on RT. Prevent that by waiting for completion on RT, no change for a
+non RT kernel.
+
+Reported-by: Sankara Muthukrishnan <sankara.m at gmail.com>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ fs/timerfd.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/fs/timerfd.c b/fs/timerfd.c
+index dffeb37..57f0e4e 100644
+--- a/fs/timerfd.c
++++ b/fs/timerfd.c
+@@ -313,7 +313,7 @@ SYSCALL_DEFINE4(timerfd_settime, int, ufd, int, flags,
+ 		if (hrtimer_try_to_cancel(&ctx->tmr) >= 0)
+ 			break;
+ 		spin_unlock_irq(&ctx->wqh.lock);
+-		cpu_relax();
++		hrtimer_wait_for_timer(&ctx->tmr);
+ 	}
+ 
+ 	/*
+-- 
+1.7.10
+

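[Editor's note] Patch 0138 replaces a `cpu_relax()` busy-retry with `hrtimer_wait_for_timer()`: on RT, spinning can live-lock against a lower-priority timer callback that never gets CPU time to finish. A toy single-threaded model of the two cancel loops (in the real kernel, waiting means sleeping on `cpu_base->wait`; all names here are illustrative):

```c
#include <assert.h>

/* Toy model of the timerfd fix: try_to_cancel() fails while the
 * callback runs; instead of spinning, the caller blocks until the
 * callback completes. toy_wait_for_timer() models that completion. */
struct toy_tmr { int callback_running; };

static int toy_try_to_cancel(struct toy_tmr *t)
{
	return t->callback_running ? -1 : 1;  /* -1 means "retry" */
}

static void toy_wait_for_timer(struct toy_tmr *t)
{
	t->callback_running = 0;              /* callback finishes */
}

/* Returns how many retries were needed before cancel succeeded. */
static int toy_cancel(struct toy_tmr *t)
{
	int retries = 0;
	while (toy_try_to_cancel(t) < 0) {
		toy_wait_for_timer(t);        /* was: cpu_relax() */
		retries++;
	}
	return retries;
}
```

With the spin variant, a SCHED_FIFO caller on the callback's CPU would loop forever on RT; blocking lets the callback run and bounds the loop.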
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0139-posix-timers-thread-posix-cpu-timers-on-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0139-posix-timers-thread-posix-cpu-timers-on-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0139-posix-timers-thread-posix-cpu-timers-on-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0139-posix-timers-thread-posix-cpu-timers-on-rt.patch)
@@ -0,0 +1,316 @@
+From 25d6c5a8997c017fbe3665a01c9ab5ebc5d2554f Mon Sep 17 00:00:00 2001
+From: John Stultz <johnstul at us.ibm.com>
+Date: Fri, 3 Jul 2009 08:29:58 -0500
+Subject: [PATCH 139/271] posix-timers: thread posix-cpu-timers on -rt
+
+posix-cpu-timer code takes non -rt safe locks in hard irq
+context. Move it to a thread.
+
+[ 3.0 fixes from Peter Zijlstra <peterz at infradead.org> ]
+
+Signed-off-by: John Stultz <johnstul at us.ibm.com>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/init_task.h |    7 ++
+ include/linux/sched.h     |    3 +
+ init/main.c               |    1 +
+ kernel/fork.c             |    3 +
+ kernel/posix-cpu-timers.c |  182 +++++++++++++++++++++++++++++++++++++++++++--
+ 5 files changed, 190 insertions(+), 6 deletions(-)
+
+diff --git a/include/linux/init_task.h b/include/linux/init_task.h
+index 32574ee..cfd9f8d 100644
+--- a/include/linux/init_task.h
++++ b/include/linux/init_task.h
+@@ -126,6 +126,12 @@ extern struct cred init_cred;
+ # define INIT_PERF_EVENTS(tsk)
+ #endif
+ 
++#ifdef CONFIG_PREEMPT_RT_BASE
++# define INIT_TIMER_LIST		.posix_timer_list = NULL,
++#else
++# define INIT_TIMER_LIST
++#endif
++
+ #define INIT_TASK_COMM "swapper"
+ 
+ /*
+@@ -180,6 +186,7 @@ extern struct cred init_cred;
+ 	.cpu_timers	= INIT_CPU_TIMERS(tsk.cpu_timers),		\
+ 	.pi_lock	= __RAW_SPIN_LOCK_UNLOCKED(tsk.pi_lock),	\
+ 	.timer_slack_ns = 50000, /* 50 usec default slack */		\
++	INIT_TIMER_LIST							\
+ 	.pids = {							\
+ 		[PIDTYPE_PID]  = INIT_PID_LINK(PIDTYPE_PID),		\
+ 		[PIDTYPE_PGID] = INIT_PID_LINK(PIDTYPE_PGID),		\
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 30ac0b5..9ff731d 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1355,6 +1355,9 @@ struct task_struct {
+ 
+ 	struct task_cputime cputime_expires;
+ 	struct list_head cpu_timers[3];
++#ifdef CONFIG_PREEMPT_RT_BASE
++	struct task_struct *posix_timer_list;
++#endif
+ 
+ /* process credentials */
+ 	const struct cred __rcu *real_cred; /* objective and real subjective task
+diff --git a/init/main.c b/init/main.c
+index d30d42a..6569987 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -68,6 +68,7 @@
+ #include <linux/shmem_fs.h>
+ #include <linux/slab.h>
+ #include <linux/perf_event.h>
++#include <linux/posix-timers.h>
+ 
+ #include <asm/io.h>
+ #include <asm/bugs.h>
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 88712a6..7595cea 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1028,6 +1028,9 @@ void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
+  */
+ static void posix_cpu_timers_init(struct task_struct *tsk)
+ {
++#ifdef CONFIG_PREEMPT_RT_BASE
++	tsk->posix_timer_list = NULL;
++#endif
+ 	tsk->cputime_expires.prof_exp = cputime_zero;
+ 	tsk->cputime_expires.virt_exp = cputime_zero;
+ 	tsk->cputime_expires.sched_exp = 0;
+diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
+index e7cb76d..17336ab 100644
+--- a/kernel/posix-cpu-timers.c
++++ b/kernel/posix-cpu-timers.c
+@@ -701,7 +701,7 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int flags,
+ 	/*
+ 	 * Disarm any old timer after extracting its expiry time.
+ 	 */
+-	BUG_ON(!irqs_disabled());
++	BUG_ON_NONRT(!irqs_disabled());
+ 
+ 	ret = 0;
+ 	old_incr = timer->it.cpu.incr;
+@@ -1223,7 +1223,7 @@ void posix_cpu_timer_schedule(struct k_itimer *timer)
+ 	/*
+ 	 * Now re-arm for the new expiry time.
+ 	 */
+-	BUG_ON(!irqs_disabled());
++	BUG_ON_NONRT(!irqs_disabled());
+ 	arm_timer(timer);
+ 	spin_unlock(&p->sighand->siglock);
+ 
+@@ -1290,10 +1290,11 @@ static inline int fastpath_timer_check(struct task_struct *tsk)
+ 	sig = tsk->signal;
+ 	if (sig->cputimer.running) {
+ 		struct task_cputime group_sample;
++		unsigned long flags;
+ 
+-		raw_spin_lock(&sig->cputimer.lock);
++		raw_spin_lock_irqsave(&sig->cputimer.lock, flags);
+ 		group_sample = sig->cputimer.cputime;
+-		raw_spin_unlock(&sig->cputimer.lock);
++		raw_spin_unlock_irqrestore(&sig->cputimer.lock, flags);
+ 
+ 		if (task_cputime_expired(&group_sample, &sig->cputime_expires))
+ 			return 1;
+@@ -1307,13 +1308,13 @@ static inline int fastpath_timer_check(struct task_struct *tsk)
+  * already updated our counts.  We need to check if any timers fire now.
+  * Interrupts are disabled.
+  */
+-void run_posix_cpu_timers(struct task_struct *tsk)
++static void __run_posix_cpu_timers(struct task_struct *tsk)
+ {
+ 	LIST_HEAD(firing);
+ 	struct k_itimer *timer, *next;
+ 	unsigned long flags;
+ 
+-	BUG_ON(!irqs_disabled());
++	BUG_ON_NONRT(!irqs_disabled());
+ 
+ 	/*
+ 	 * The fast path checks that there are no expired thread or thread
+@@ -1371,6 +1372,175 @@ void run_posix_cpu_timers(struct task_struct *tsk)
+ 	}
+ }
+ 
++#ifdef CONFIG_PREEMPT_RT_BASE
++#include <linux/kthread.h>
++#include <linux/cpu.h>
++DEFINE_PER_CPU(struct task_struct *, posix_timer_task);
++DEFINE_PER_CPU(struct task_struct *, posix_timer_tasklist);
++
++static int posix_cpu_timers_thread(void *data)
++{
++	int cpu = (long)data;
++
++	BUG_ON(per_cpu(posix_timer_task,cpu) != current);
++
++	while (!kthread_should_stop()) {
++		struct task_struct *tsk = NULL;
++		struct task_struct *next = NULL;
++
++		if (cpu_is_offline(cpu))
++			goto wait_to_die;
++
++		/* grab task list */
++		raw_local_irq_disable();
++		tsk = per_cpu(posix_timer_tasklist, cpu);
++		per_cpu(posix_timer_tasklist, cpu) = NULL;
++		raw_local_irq_enable();
++
++		/* it's possible the list is empty, just return */
++		if (!tsk) {
++			set_current_state(TASK_INTERRUPTIBLE);
++			schedule();
++			__set_current_state(TASK_RUNNING);
++			continue;
++		}
++
++		/* Process task list */
++		while (1) {
++			/* save next */
++			next = tsk->posix_timer_list;
++
++			/* run the task timers, clear its ptr and
++			 * unreference it
++			 */
++			__run_posix_cpu_timers(tsk);
++			tsk->posix_timer_list = NULL;
++			put_task_struct(tsk);
++
++			/* check if this is the last on the list */
++			if (next == tsk)
++				break;
++			tsk = next;
++		}
++	}
++	return 0;
++
++wait_to_die:
++	/* Wait for kthread_stop */
++	set_current_state(TASK_INTERRUPTIBLE);
++	while (!kthread_should_stop()) {
++		schedule();
++		set_current_state(TASK_INTERRUPTIBLE);
++	}
++	__set_current_state(TASK_RUNNING);
++	return 0;
++}
++
++void run_posix_cpu_timers(struct task_struct *tsk)
++{
++	unsigned long cpu = smp_processor_id();
++	struct task_struct *tasklist;
++
++	BUG_ON(!irqs_disabled());
++	if(!per_cpu(posix_timer_task, cpu))
++		return;
++	/* get per-cpu references */
++	tasklist = per_cpu(posix_timer_tasklist, cpu);
++
++	/* check to see if we're already queued */
++	if (!tsk->posix_timer_list) {
++		get_task_struct(tsk);
++		if (tasklist) {
++			tsk->posix_timer_list = tasklist;
++		} else {
++			/*
++			 * The list is terminated by a self-pointing
++			 * task_struct
++			 */
++			tsk->posix_timer_list = tsk;
++		}
++		per_cpu(posix_timer_tasklist, cpu) = tsk;
++	}
++	/* XXX signal the thread somehow */
++	wake_up_process(per_cpu(posix_timer_task, cpu));
++}
++
++/*
++ * posix_cpu_thread_call - callback that gets triggered when a CPU is added.
++ * Here we can start up the necessary migration thread for the new CPU.
++ */
++static int posix_cpu_thread_call(struct notifier_block *nfb,
++				 unsigned long action, void *hcpu)
++{
++	int cpu = (long)hcpu;
++	struct task_struct *p;
++	struct sched_param param;
++
++	switch (action) {
++	case CPU_UP_PREPARE:
++		p = kthread_create(posix_cpu_timers_thread, hcpu,
++					"posix_cpu_timers/%d",cpu);
++		if (IS_ERR(p))
++			return NOTIFY_BAD;
++		p->flags |= PF_NOFREEZE;
++		kthread_bind(p, cpu);
++		/* Must be high prio to avoid getting starved */
++		param.sched_priority = MAX_RT_PRIO-1;
++		sched_setscheduler(p, SCHED_FIFO, &param);
++		per_cpu(posix_timer_task,cpu) = p;
++		break;
++	case CPU_ONLINE:
++		/* Strictly unnecessary, as first user will wake it. */
++		wake_up_process(per_cpu(posix_timer_task,cpu));
++		break;
++#ifdef CONFIG_HOTPLUG_CPU
++	case CPU_UP_CANCELED:
++		/* Unbind it from offline cpu so it can run.  Fall thru. */
++		kthread_bind(per_cpu(posix_timer_task,cpu),
++			     any_online_cpu(cpu_online_map));
++		kthread_stop(per_cpu(posix_timer_task,cpu));
++		per_cpu(posix_timer_task,cpu) = NULL;
++		break;
++	case CPU_DEAD:
++		kthread_stop(per_cpu(posix_timer_task,cpu));
++		per_cpu(posix_timer_task,cpu) = NULL;
++		break;
++#endif
++	}
++	return NOTIFY_OK;
++}
++
++/* Register at highest priority so that task migration (migrate_all_tasks)
++ * happens before everything else.
++ */
++static struct notifier_block __devinitdata posix_cpu_thread_notifier = {
++	.notifier_call = posix_cpu_thread_call,
++	.priority = 10
++};
++
++static int __init posix_cpu_thread_init(void)
++{
++	void *hcpu = (void *)(long)smp_processor_id();
++	/* Start one for boot CPU. */
++	unsigned long cpu;
++
++	/* init the per-cpu posix_timer_tasklets */
++	for_each_cpu_mask(cpu, cpu_possible_map)
++		per_cpu(posix_timer_tasklist, cpu) = NULL;
++
++	posix_cpu_thread_call(&posix_cpu_thread_notifier, CPU_UP_PREPARE, hcpu);
++	posix_cpu_thread_call(&posix_cpu_thread_notifier, CPU_ONLINE, hcpu);
++	register_cpu_notifier(&posix_cpu_thread_notifier);
++	return 0;
++}
++early_initcall(posix_cpu_thread_init);
++#else /* CONFIG_PREEMPT_RT_BASE */
++void run_posix_cpu_timers(struct task_struct *tsk)
++{
++	__run_posix_cpu_timers(tsk);
++}
++#endif /* CONFIG_PREEMPT_RT_BASE */
++
+ /*
+  * Set one of the process-wide special case CPU timers or RLIMIT_CPU.
+  * The tsk->sighand->siglock must be held by the caller.
+-- 
+1.7.10
+

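[Editor's note] Patch 0139 threads tasks onto a per-CPU `posix_timer_tasklist` via a pointer inside each task, terminated by a self-pointing entry (NULL cannot terminate the list because NULL means "not queued"). A toy sketch of that enqueue/drain scheme (illustrative names only):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the per-CPU posix_timer_tasklist: a singly linked
 * list threaded through the tasks themselves; the last entry points
 * to itself so the walker can detect the end. */
struct toy_task {
	int timers_ran;
	struct toy_task *posix_timer_list; /* NULL == not queued */
};

static struct toy_task *tasklist;

static void toy_queue(struct toy_task *tsk)
{
	if (tsk->posix_timer_list)
		return;                    /* already queued */
	/* First task terminates the list by pointing at itself. */
	tsk->posix_timer_list = tasklist ? tasklist : tsk;
	tasklist = tsk;
}

/* posix_cpu_timers_thread() body analogue: grab and walk the list. */
static void toy_drain(void)
{
	struct toy_task *tsk = tasklist;
	tasklist = NULL;
	while (tsk) {
		struct toy_task *next = tsk->posix_timer_list;
		tsk->timers_ran++;         /* __run_posix_cpu_timers() */
		tsk->posix_timer_list = NULL;
		if (next == tsk)           /* self-pointer: end of list */
			break;
		tsk = next;
	}
}
```

The `posix_timer_list` check doubles as the "already queued" guard, which is also what the later 0141 patch builds on to skip needless wakeups.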
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0140-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0140-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0140-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0140-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch)
@@ -0,0 +1,32 @@
+From 3c61ad031004a517a0c5becd68ec3be2f30307ad Mon Sep 17 00:00:00 2001
+From: Arnaldo Carvalho de Melo <acme at redhat.com>
+Date: Fri, 3 Jul 2009 08:30:00 -0500
+Subject: [PATCH 140/271] posix-timers: Shorten posix_cpu_timers/<CPU> kernel
+ thread names
+
+Shorten the softirq kernel thread names because they always overflow the
+limited comm length, appearing as "posix_cpu_timer" CPU# times.
+
+Signed-off-by: Arnaldo Carvalho de Melo <acme at redhat.com>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/posix-cpu-timers.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
+index 17336ab..fb350d7 100644
+--- a/kernel/posix-cpu-timers.c
++++ b/kernel/posix-cpu-timers.c
+@@ -1479,7 +1479,7 @@ static int posix_cpu_thread_call(struct notifier_block *nfb,
+ 	switch (action) {
+ 	case CPU_UP_PREPARE:
+ 		p = kthread_create(posix_cpu_timers_thread, hcpu,
+-					"posix_cpu_timers/%d",cpu);
++					"posixcputmr/%d",cpu);
+ 		if (IS_ERR(p))
+ 			return NOTIFY_BAD;
+ 		p->flags |= PF_NOFREEZE;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0141-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0141-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0141-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0141-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch)
@@ -0,0 +1,63 @@
+From 2542c38de3be7dd9a2e85b15cfb24904c49a851b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 3 Jul 2009 08:44:44 -0500
+Subject: [PATCH 141/271] posix-timers: Avoid wakeups when no timers are
+ active
+
+Waking the thread even when no timers are scheduled is useless.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/posix-cpu-timers.c |   21 ++++++++++++++++++---
+ 1 file changed, 18 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
+index fb350d7..1d4c609 100644
+--- a/kernel/posix-cpu-timers.c
++++ b/kernel/posix-cpu-timers.c
+@@ -1436,6 +1436,21 @@ wait_to_die:
+ 	return 0;
+ }
+ 
++static inline int __fastpath_timer_check(struct task_struct *tsk)
++{
++	/* tsk == current, ensure it is safe to use ->signal/sighand */
++	if (unlikely(tsk->exit_state))
++		return 0;
++
++	if (!task_cputime_zero(&tsk->cputime_expires))
++			return 1;
++
++	if (!task_cputime_zero(&tsk->signal->cputime_expires))
++			return 1;
++
++	return 0;
++}
++
+ void run_posix_cpu_timers(struct task_struct *tsk)
+ {
+ 	unsigned long cpu = smp_processor_id();
+@@ -1448,7 +1463,7 @@ void run_posix_cpu_timers(struct task_struct *tsk)
+ 	tasklist = per_cpu(posix_timer_tasklist, cpu);
+ 
+ 	/* check to see if we're already queued */
+-	if (!tsk->posix_timer_list) {
++	if (!tsk->posix_timer_list && __fastpath_timer_check(tsk)) {
+ 		get_task_struct(tsk);
+ 		if (tasklist) {
+ 			tsk->posix_timer_list = tasklist;
+@@ -1460,9 +1475,9 @@ void run_posix_cpu_timers(struct task_struct *tsk)
+ 			tsk->posix_timer_list = tsk;
+ 		}
+ 		per_cpu(posix_timer_tasklist, cpu) = tsk;
++
++		wake_up_process(per_cpu(posix_timer_task, cpu));
+ 	}
+-	/* XXX signal the thread somehow */
+-	wake_up_process(per_cpu(posix_timer_task, cpu));
+ }
+ 
+ /*
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0142-sched-delay-put-task.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0142-sched-delay-put-task.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0142-sched-delay-put-task.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0142-sched-delay-put-task.patch.patch)
@@ -0,0 +1,75 @@
+From 90627f9b0947ad2b5152b1066a55fbb2cbee0f55 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 31 May 2011 16:59:16 +0200
+Subject: [PATCH 142/271] sched-delay-put-task.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sched.h |   13 +++++++++++++
+ kernel/fork.c         |   11 +++++++++++
+ 2 files changed, 24 insertions(+)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 9ff731d..fb5e51f 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1591,6 +1591,9 @@ struct task_struct {
+ #ifdef CONFIG_HAVE_HW_BREAKPOINT
+ 	atomic_t ptrace_bp_refcnt;
+ #endif
++#ifdef CONFIG_PREEMPT_RT_BASE
++	struct rcu_head put_rcu;
++#endif
+ };
+ 
+ /* Future-safe accessor for struct task_struct's cpus_allowed. */
+@@ -1775,6 +1778,15 @@ extern struct pid *cad_pid;
+ extern void free_task(struct task_struct *tsk);
+ #define get_task_struct(tsk) do { atomic_inc(&(tsk)->usage); } while(0)
+ 
++#ifdef CONFIG_PREEMPT_RT_BASE
++extern void __put_task_struct_cb(struct rcu_head *rhp);
++
++static inline void put_task_struct(struct task_struct *t)
++{
++	if (atomic_dec_and_test(&t->usage))
++		call_rcu(&t->put_rcu, __put_task_struct_cb);
++}
++#else
+ extern void __put_task_struct(struct task_struct *t);
+ 
+ static inline void put_task_struct(struct task_struct *t)
+@@ -1782,6 +1794,7 @@ static inline void put_task_struct(struct task_struct *t)
+ 	if (atomic_dec_and_test(&t->usage))
+ 		__put_task_struct(t);
+ }
++#endif
+ 
+ extern void task_times(struct task_struct *p, cputime_t *ut, cputime_t *st);
+ extern void thread_group_times(struct task_struct *p, cputime_t *ut, cputime_t *st);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 7595cea..9d4653a 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -198,7 +198,18 @@ void __put_task_struct(struct task_struct *tsk)
+ 	if (!profile_handoff_task(tsk))
+ 		free_task(tsk);
+ }
++#ifndef CONFIG_PREEMPT_RT_BASE
+ EXPORT_SYMBOL_GPL(__put_task_struct);
++#else
++void __put_task_struct_cb(struct rcu_head *rhp)
++{
++	struct task_struct *tsk = container_of(rhp, struct task_struct, rcu);
++
++	__put_task_struct(tsk);
++
++}
++EXPORT_SYMBOL_GPL(__put_task_struct_cb);
++#endif
+ 
+ /*
+  * macro override instead of weak attribute alias, to workaround
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0143-sched-limit-nr-migrate.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0143-sched-limit-nr-migrate.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0143-sched-limit-nr-migrate.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0143-sched-limit-nr-migrate.patch.patch)
@@ -0,0 +1,29 @@
+From ccb4919cecc3f5a8f2a611814b65570d3aaecac5 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 6 Jun 2011 12:12:51 +0200
+Subject: [PATCH 143/271] sched-limit-nr-migrate.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c |    4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index dbb7d80..3c204e5 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -942,7 +942,11 @@ late_initcall(sched_init_debug);
+  * Number of tasks to iterate in a single balance run.
+  * Limited because this is done with IRQs disabled.
+  */
++#ifndef CONFIG_PREEMPT_RT_FULL
+ const_debug unsigned int sysctl_sched_nr_migrate = 32;
++#else
++const_debug unsigned int sysctl_sched_nr_migrate = 8;
++#endif
+ 
+ /*
+  * period over which we average the RT time consumption, measured
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0144-sched-mmdrop-delayed.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0144-sched-mmdrop-delayed.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0144-sched-mmdrop-delayed.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0144-sched-mmdrop-delayed.patch.patch)
@@ -0,0 +1,157 @@
+From d18ec62168c6c76a19dbed60186573c26680f474 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 6 Jun 2011 12:20:33 +0200
+Subject: [PATCH 144/271] sched-mmdrop-delayed.patch
+
+Needs thread context (pgd_lock) -> ifdeffed. workqueues wont work with
+RT
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/mm_types.h |    4 ++++
+ include/linux/sched.h    |   12 ++++++++++++
+ kernel/fork.c            |   15 ++++++++++++++-
+ kernel/sched.c           |   21 +++++++++++++++++++--
+ 4 files changed, 49 insertions(+), 3 deletions(-)
+
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 1ec126f..c303a27 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -12,6 +12,7 @@
+ #include <linux/completion.h>
+ #include <linux/cpumask.h>
+ #include <linux/page-debug-flags.h>
++#include <linux/rcupdate.h>
+ #include <asm/page.h>
+ #include <asm/mmu.h>
+ 
+@@ -393,6 +394,9 @@ struct mm_struct {
+ #ifdef CONFIG_CPUMASK_OFFSTACK
+ 	struct cpumask cpumask_allocation;
+ #endif
++#ifdef CONFIG_PREEMPT_RT_BASE
++	struct rcu_head delayed_drop;
++#endif
+ };
+ 
+ static inline void mm_init_cpumask(struct mm_struct *mm)
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index fb5e51f..e6f37ca 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -2268,12 +2268,24 @@ extern struct mm_struct * mm_alloc(void);
+ 
+ /* mmdrop drops the mm and the page tables */
+ extern void __mmdrop(struct mm_struct *);
++
+ static inline void mmdrop(struct mm_struct * mm)
+ {
+ 	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
+ 		__mmdrop(mm);
+ }
+ 
++#ifdef CONFIG_PREEMPT_RT_BASE
++extern void __mmdrop_delayed(struct rcu_head *rhp);
++static inline void mmdrop_delayed(struct mm_struct *mm)
++{
++	if (atomic_dec_and_test(&mm->mm_count))
++		call_rcu(&mm->delayed_drop, __mmdrop_delayed);
++}
++#else
++# define mmdrop_delayed(mm)	mmdrop(mm)
++#endif
++
+ /* mmput gets rid of the mappings and all user-space */
+ extern void mmput(struct mm_struct *);
+ /* Grab a reference to a task's mm, if it is not already going away */
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 9d4653a..8aeb811 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -203,7 +203,7 @@ EXPORT_SYMBOL_GPL(__put_task_struct);
+ #else
+ void __put_task_struct_cb(struct rcu_head *rhp)
+ {
+-	struct task_struct *tsk = container_of(rhp, struct task_struct, rcu);
++	struct task_struct *tsk = container_of(rhp, struct task_struct, put_rcu);
+ 
+ 	__put_task_struct(tsk);
+ 
+@@ -555,6 +555,19 @@ void __mmdrop(struct mm_struct *mm)
+ }
+ EXPORT_SYMBOL_GPL(__mmdrop);
+ 
++#ifdef CONFIG_PREEMPT_RT_BASE
++/*
++ * RCU callback for delayed mm drop. Not strictly rcu, but we don't
++ * want another facility to make this work.
++ */
++void __mmdrop_delayed(struct rcu_head *rhp)
++{
++	struct mm_struct *mm = container_of(rhp, struct mm_struct, delayed_drop);
++
++	__mmdrop(mm);
++}
++#endif
++
+ /*
+  * Decrement the use count and release all resources for an mm.
+  */
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 3c204e5..50d5ffe 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -3174,8 +3174,12 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
+ 	finish_lock_switch(rq, prev);
+ 
+ 	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * We use mmdrop_delayed() here so we don't have to do the
++	 * full __mmdrop() when we are the last user.
++	 */
+ 	if (mm)
+-		mmdrop(mm);
++		mmdrop_delayed(mm);
+ 	if (unlikely(prev_state == TASK_DEAD)) {
+ 		/*
+ 		 * Remove function-return probe instances associated with this
+@@ -6302,6 +6306,8 @@ static int migration_cpu_stop(void *data)
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+ 
++static DEFINE_PER_CPU(struct mm_struct *, idle_last_mm);
++
+ /*
+  * Ensures that the idle task is using init_mm right before its cpu goes
+  * offline.
+@@ -6314,7 +6320,12 @@ void idle_task_exit(void)
+ 
+ 	if (mm != &init_mm)
+ 		switch_mm(mm, &init_mm, current);
+-	mmdrop(mm);
++
++	/*
++	 * Defer the cleanup to an alive cpu. On RT we can neither
++	 * call mmdrop() nor mmdrop_delayed() from here.
++	 */
++	per_cpu(idle_last_mm, smp_processor_id()) = mm;
+ }
+ 
+ /*
+@@ -6659,6 +6670,12 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
+ 		migrate_nr_uninterruptible(rq);
+ 		calc_global_load_remove(rq);
+ 		break;
++	case CPU_DEAD:
++		if (per_cpu(idle_last_mm, cpu)) {
++			mmdrop(per_cpu(idle_last_mm, cpu));
++			per_cpu(idle_last_mm, cpu) = NULL;
++		}
++		break;
+ #endif
+ 	}
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0145-sched-rt-mutex-wakeup.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0145-sched-rt-mutex-wakeup.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0145-sched-rt-mutex-wakeup.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0145-sched-rt-mutex-wakeup.patch.patch)
@@ -0,0 +1,92 @@
+From 362bef200691a6e163ec9b444ae0c69b6d731c27 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 25 Jun 2011 09:21:04 +0200
+Subject: [PATCH 145/271] sched-rt-mutex-wakeup.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sched.h |    3 +++
+ kernel/sched.c        |   31 ++++++++++++++++++++++++++++++-
+ 2 files changed, 33 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index e6f37ca..6c20349 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1072,6 +1072,7 @@ struct sched_domain;
+ #define WF_SYNC		0x01		/* waker goes to sleep after wakup */
+ #define WF_FORK		0x02		/* child wakeup after fork */
+ #define WF_MIGRATED	0x04		/* internal use, task got migrated */
++#define WF_LOCK_SLEEPER	0x08		/* wakeup spinlock "sleeper" */
+ 
+ #define ENQUEUE_WAKEUP		1
+ #define ENQUEUE_HEAD		2
+@@ -1221,6 +1222,7 @@ enum perf_event_task_context {
+ 
+ struct task_struct {
+ 	volatile long state;	/* -1 unrunnable, 0 runnable, >0 stopped */
++	volatile long saved_state;	/* saved state for "spinlock sleepers" */
+ 	void *stack;
+ 	atomic_t usage;
+ 	unsigned int flags;	/* per process flags, defined below */
+@@ -2178,6 +2180,7 @@ extern void xtime_update(unsigned long ticks);
+ 
+ extern int wake_up_state(struct task_struct *tsk, unsigned int state);
+ extern int wake_up_process(struct task_struct *tsk);
++extern int wake_up_lock_sleeper(struct task_struct * tsk);
+ extern void wake_up_new_task(struct task_struct *tsk);
+ #ifdef CONFIG_SMP
+  extern void kick_process(struct task_struct *tsk);
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 50d5ffe..6f3c921 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -2827,8 +2827,25 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
+ 
+ 	smp_wmb();
+ 	raw_spin_lock_irqsave(&p->pi_lock, flags);
+-	if (!(p->state & state))
++	if (!(p->state & state)) {
++		/*
++		 * The task might be running due to a spinlock sleeper
++		 * wakeup. Check the saved state and set it to running
++		 * if the wakeup condition is true.
++		 */
++		if (!(wake_flags & WF_LOCK_SLEEPER)) {
++			if (p->saved_state & state)
++				p->saved_state = TASK_RUNNING;
++		}
+ 		goto out;
++	}
++
++	/*
++	 * If this is a regular wakeup, then we can unconditionally
++	 * clear the saved state of a "lock sleeper".
++	 */
++	if (!(wake_flags & WF_LOCK_SLEEPER))
++		p->saved_state = TASK_RUNNING;
+ 
+ 	success = 1; /* we're going to change ->state */
+ 	cpu = task_cpu(p);
+@@ -2900,6 +2917,18 @@ int wake_up_process(struct task_struct *p)
+ }
+ EXPORT_SYMBOL(wake_up_process);
+ 
++/**
++ * wake_up_lock_sleeper - Wake up a specific process blocked on a "sleeping lock"
++ * @p: The process to be woken up.
++ *
++ * Same as wake_up_process() above, but wake_flags=WF_LOCK_SLEEPER to indicate
++ * the nature of the wakeup.
++ */
++int wake_up_lock_sleeper(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_ALL, WF_LOCK_SLEEPER);
++}
++
+ int wake_up_state(struct task_struct *p, unsigned int state)
+ {
+ 	return try_to_wake_up(p, state, 0);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0146-sched-prevent-idle-boost.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0146-sched-prevent-idle-boost.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0146-sched-prevent-idle-boost.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0146-sched-prevent-idle-boost.patch.patch)
@@ -0,0 +1,55 @@
+From 7fb83a66b4f687e9d0b96216590b75c83f3c1bc3 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 6 Jun 2011 20:07:38 +0200
+Subject: [PATCH 146/271] sched-prevent-idle-boost.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c |   21 +++++++++++++++++++--
+ 1 file changed, 19 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 6f3c921..4ea4d51 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -5036,6 +5036,24 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
+ 
+ 	rq = __task_rq_lock(p);
+ 
++	/*
++	 * Idle task boosting is a nono in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
+ 	trace_sched_pi_setprio(p, prio);
+ 	oldprio = p->prio;
+ 	prev_class = p->sched_class;
+@@ -5059,11 +5077,10 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
+ 		enqueue_task(rq, p, oldprio < prio ? ENQUEUE_HEAD : 0);
+ 
+ 	check_class_changed(rq, p, prev_class, oldprio);
++out_unlock:
+ 	__task_rq_unlock(rq);
+ }
+-
+ #endif
+-
+ void set_user_nice(struct task_struct *p, long nice)
+ {
+ 	int old_prio, delta, on_rq;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0147-sched-might-sleep-do-not-account-rcu-depth.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0147-sched-might-sleep-do-not-account-rcu-depth.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0147-sched-might-sleep-do-not-account-rcu-depth.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0147-sched-might-sleep-do-not-account-rcu-depth.patch.patch)
@@ -0,0 +1,53 @@
+From 57e213619eae42ba62af04d4b7d7d13bdca88f3f Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 7 Jun 2011 09:19:06 +0200
+Subject: [PATCH 147/271] sched-might-sleep-do-not-account-rcu-depth.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rcupdate.h |    7 +++++++
+ kernel/sched.c           |    3 ++-
+ 2 files changed, 9 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 2cf4226..a0082e2 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -147,6 +147,11 @@ void synchronize_rcu(void);
+  * types of kernel builds, the rcu_read_lock() nesting depth is unknowable.
+  */
+ #define rcu_preempt_depth() (current->rcu_read_lock_nesting)
++#ifndef CONFIG_PREEMPT_RT_FULL
++#define sched_rcu_preempt_depth()	rcu_preempt_depth()
++#else
++static inline int sched_rcu_preempt_depth(void) { return 0; }
++#endif
+ 
+ #else /* #ifdef CONFIG_PREEMPT_RCU */
+ 
+@@ -170,6 +175,8 @@ static inline int rcu_preempt_depth(void)
+ 	return 0;
+ }
+ 
++#define sched_rcu_preempt_depth()	rcu_preempt_depth()
++
+ #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
+ 
+ /* Internal to kernel */
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 4ea4d51..c5a59b5 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -8438,7 +8438,8 @@ void __init sched_init(void)
+ #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
+ static inline int preempt_count_equals(int preempt_offset)
+ {
+-	int nested = (preempt_count() & ~PREEMPT_ACTIVE) + rcu_preempt_depth();
++	int nested = (preempt_count() & ~PREEMPT_ACTIVE) +
++		sched_rcu_preempt_depth();
+ 
+ 	return (nested == preempt_offset);
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0148-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0148-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0148-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0148-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch)
@@ -0,0 +1,53 @@
+From 3f3bf120892af044442920b321d29fcd210c38c1 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Tue, 16 Mar 2010 14:31:44 -0700
+Subject: [PATCH 148/271] sched: Break out from load_balancing on rq_lock
+ contention
+
+Also limit NEW_IDLE pull
+
+Signed-off-by: Peter Zijlstra <peterz at infradead.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched_fair.c |   18 ++++++++++++++++++
+ 1 file changed, 18 insertions(+)
+
+diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
+index 8a39fa3..3747e53 100644
+--- a/kernel/sched_fair.c
++++ b/kernel/sched_fair.c
+@@ -2899,6 +2899,10 @@ balance_tasks(struct rq *this_rq, int this_cpu, struct rq *busiest,
+ 		 */
+ 		if (idle == CPU_NEWLY_IDLE)
+ 			break;
++
++		if (raw_spin_is_contended(&this_rq->lock) ||
++		    raw_spin_is_contended(&busiest->lock))
++			break;
+ #endif
+ 
+ 		/*
+@@ -3039,6 +3043,20 @@ load_balance_fair(struct rq *this_rq, int this_cpu, struct rq *busiest,
+ 		rem_load_move -= moved_load;
+ 		if (rem_load_move < 0)
+ 			break;
++
++#ifdef CONFIG_PREEMPT
++		/*
++		 * NEWIDLE balancing is a source of latency, so preemptible
++		 * kernels will stop after the first task is pulled to minimize
++		 * the critical section.
++		 */
++		if (idle == CPU_NEWLY_IDLE && this_rq->nr_running)
++			break;
++
++		if (raw_spin_is_contended(&this_rq->lock) ||
++		    raw_spin_is_contended(&busiest->lock))
++			break;
++#endif
+ 	}
+ 	rcu_read_unlock();
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0149-sched-cond-resched.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0149-sched-cond-resched.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0149-sched-cond-resched.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0149-sched-cond-resched.patch.patch)
@@ -0,0 +1,38 @@
+From 921fcd2d82888a9b91a0181ea6ff83f45252cf8d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 7 Jun 2011 11:25:03 +0200
+Subject: [PATCH 149/271] sched-cond-resched.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c |   14 +++++++++++---
+ 1 file changed, 11 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index c5a59b5..8d6ec1a 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -5774,9 +5774,17 @@ static inline int should_resched(void)
+ 
+ static void __cond_resched(void)
+ {
+-	add_preempt_count(PREEMPT_ACTIVE);
+-	__schedule();
+-	sub_preempt_count(PREEMPT_ACTIVE);
++	do {
++		add_preempt_count(PREEMPT_ACTIVE);
++		__schedule();
++		sub_preempt_count(PREEMPT_ACTIVE);
++		/*
++		 * Check again in case we missed a preemption
++		 * opportunity between schedule and now.
++		 */
++		barrier();
++
++	} while (need_resched());
+ }
+ 
+ int __sched _cond_resched(void)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0150-cond-resched-softirq-fix.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0150-cond-resched-softirq-fix.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0150-cond-resched-softirq-fix.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0150-cond-resched-softirq-fix.patch.patch)
@@ -0,0 +1,55 @@
+From 6e6ae2782713ae720db759cda2c7fb5967632c7d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 14 Jul 2011 09:56:44 +0200
+Subject: [PATCH 150/271] cond-resched-softirq-fix.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sched.h |    4 ++++
+ kernel/sched.c        |    2 ++
+ 2 files changed, 6 insertions(+)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 6c20349..38d78dc 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -2602,12 +2602,16 @@ extern int __cond_resched_lock(spinlock_t *lock);
+ 	__cond_resched_lock(lock);				\
+ })
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ extern int __cond_resched_softirq(void);
+ 
+ #define cond_resched_softirq() ({					\
+ 	__might_sleep(__FILE__, __LINE__, SOFTIRQ_DISABLE_OFFSET);	\
+ 	__cond_resched_softirq();					\
+ })
++#else
++# define cond_resched_softirq()		cond_resched()
++#endif
+ 
+ /*
+  * Does a critical section need to be broken due to another
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 8d6ec1a..82fe8e6 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -5825,6 +5825,7 @@ int __cond_resched_lock(spinlock_t *lock)
+ }
+ EXPORT_SYMBOL(__cond_resched_lock);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ int __sched __cond_resched_softirq(void)
+ {
+ 	BUG_ON(!in_softirq());
+@@ -5838,6 +5839,7 @@ int __sched __cond_resched_softirq(void)
+ 	return 0;
+ }
+ EXPORT_SYMBOL(__cond_resched_softirq);
++#endif
+ 
+ /**
+  * yield - yield the current processor to other threads.
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0151-sched-no-work-when-pi-blocked.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0151-sched-no-work-when-pi-blocked.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0151-sched-no-work-when-pi-blocked.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0151-sched-no-work-when-pi-blocked.patch.patch)
@@ -0,0 +1,62 @@
+From 8ddd3ff5c27b0465d3d279474ac017c5ac9eec19 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 20:46:52 +0200
+Subject: [PATCH 151/271] sched-no-work-when-pi-blocked.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sched.h |    8 ++++++++
+ kernel/sched.c        |    5 ++++-
+ 2 files changed, 12 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 38d78dc..99e7f8b 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -2095,12 +2095,20 @@ extern unsigned int sysctl_sched_cfs_bandwidth_slice;
+ extern int rt_mutex_getprio(struct task_struct *p);
+ extern void rt_mutex_setprio(struct task_struct *p, int prio);
+ extern void rt_mutex_adjust_pi(struct task_struct *p);
++static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
++{
++	return tsk->pi_blocked_on != NULL;
++}
+ #else
+ static inline int rt_mutex_getprio(struct task_struct *p)
+ {
+ 	return p->normal_prio;
+ }
+ # define rt_mutex_adjust_pi(p)		do { } while (0)
++static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
++{
++	return false;
++}
+ #endif
+ 
+ extern bool yield_to(struct task_struct *p, bool preempt);
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 82fe8e6..25f2fb7 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4460,7 +4460,7 @@ need_resched:
+ 
+ static inline void sched_submit_work(struct task_struct *tsk)
+ {
+-	if (!tsk->state)
++	if (!tsk->state || tsk_is_pi_blocked(tsk))
+ 		return;
+ 
+ 	/*
+@@ -4480,6 +4480,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
+ 
+ static inline void sched_update_worker(struct task_struct *tsk)
+ {
++	if (tsk_is_pi_blocked(tsk))
++		return;
++
+ 	if (tsk->flags & PF_WQ_WORKER)
+ 		wq_worker_running(tsk);
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0152-cond-resched-lock-rt-tweak.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0152-cond-resched-lock-rt-tweak.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0152-cond-resched-lock-rt-tweak.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0152-cond-resched-lock-rt-tweak.patch.patch)
@@ -0,0 +1,26 @@
+From 4218fa54c08ed1bb297e3ea11f7cddc0665794b1 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 22:51:33 +0200
+Subject: [PATCH 152/271] cond-resched-lock-rt-tweak.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sched.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 99e7f8b..175aaee 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -2599,7 +2599,7 @@ extern int _cond_resched(void);
+ 
+ extern int __cond_resched_lock(spinlock_t *lock);
+ 
+-#ifdef CONFIG_PREEMPT_COUNT
++#if defined(CONFIG_PREEMPT_COUNT) && !defined(CONFIG_PREEMPT_RT_FULL)
+ #define PREEMPT_LOCK_OFFSET	PREEMPT_OFFSET
+ #else
+ #define PREEMPT_LOCK_OFFSET	0
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0153-sched-disable-ttwu-queue.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0153-sched-disable-ttwu-queue.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0153-sched-disable-ttwu-queue.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0153-sched-disable-ttwu-queue.patch.patch)
@@ -0,0 +1,33 @@
+From 0af957b90152a3547bfce6ba205118aa2b15c4a8 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 13 Sep 2011 16:42:35 +0200
+Subject: [PATCH 153/271] sched-disable-ttwu-queue.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched_features.h |    4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/kernel/sched_features.h b/kernel/sched_features.h
+index 8480224..0007001 100644
+--- a/kernel/sched_features.h
++++ b/kernel/sched_features.h
+@@ -60,11 +60,15 @@ SCHED_FEAT(OWNER_SPIN, 1)
+  */
+ SCHED_FEAT(NONTASK_POWER, 1)
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+  * Queue remote wakeups on the target CPU and process them
+  * using the scheduler IPI. Reduces rq->lock contention/bounces.
+  */
+ SCHED_FEAT(TTWU_QUEUE, 1)
++#else
++SCHED_FEAT(TTWU_QUEUE, 0)
++#endif
+ 
+ SCHED_FEAT(FORCE_SD_OVERLAP, 0)
+ SCHED_FEAT(RT_RUNTIME_SHARE, 1)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0154-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0154-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0154-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0154-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch)
@@ -0,0 +1,34 @@
+From 7b56b9ec7657030124eff23987645f8fa331c190 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 18 Jul 2011 17:03:52 +0200
+Subject: [PATCH 154/271] sched: Disable CONFIG_RT_GROUP_SCHED on RT
+
+Carsten reported problems when running:
+
+	taskset 01 chrt -f 1 sleep 1
+
+from within rc.local on a F15 machine. The task stays running and
+never gets on the run queue because some of the run queues have
+rt_throttled=1 which does not go away. Works nice from a ssh login
+shell. Disabling CONFIG_RT_GROUP_SCHED solves that as well.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ init/Kconfig |    1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/init/Kconfig b/init/Kconfig
+index dbc82d0..720c182 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -731,6 +731,7 @@ config RT_GROUP_SCHED
+ 	bool "Group scheduling for SCHED_RR/FIFO"
+ 	depends on EXPERIMENTAL
+ 	depends on CGROUP_SCHED
++	depends on !PREEMPT_RT_FULL
+ 	default n
+ 	help
+ 	  This feature lets you explicitly allocate real CPU bandwidth
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0155-sched-ttwu-Return-success-when-only-changing-the-sav.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0155-sched-ttwu-Return-success-when-only-changing-the-sav.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0155-sched-ttwu-Return-success-when-only-changing-the-sav.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0155-sched-ttwu-Return-success-when-only-changing-the-sav.patch)
@@ -0,0 +1,41 @@
+From 2e030e32f21934503f75aade806a1bb1ebd30146 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 13 Dec 2011 21:42:19 +0100
+Subject: [PATCH 155/271] sched: ttwu: Return success when only changing the
+ saved_state value
+
+When a task blocks on a rt lock, it saves the current state in
+p->saved_state, so a lock related wake up will not destroy the
+original state.
+
+When a real wakeup happens while the task is already running due to a
+lock wakeup, we update p->saved_state to TASK_RUNNING, but we do not
+return success. This can cause a spurious wakeup in the waitqueue
+code, and the task remains on the waitqueue list. Return success in
+that case as well.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ kernel/sched.c |    4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 25f2fb7..2c803b2 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -2834,8 +2834,10 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
+ 		 * if the wakeup condition is true.
+ 		 */
+ 		if (!(wake_flags & WF_LOCK_SLEEPER)) {
+-			if (p->saved_state & state)
++			if (p->saved_state & state) {
+ 				p->saved_state = TASK_RUNNING;
++				success = 1;
++			}
+ 		}
+ 		goto out;
+ 	}
+-- 
+1.7.10
+

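[Editor's note] The one-line fix above is easy to misread. A userspace C sketch of the saved_state bookkeeping (all names and values are invented for illustration, not the kernel's actual types) shows why returning success matters: without it, the waker believes the wakeup failed and may issue another one while the task still sits on the waitqueue.

```c
#include <assert.h>

/* Userspace model of the try_to_wake_up() saved_state handling the
 * patch above fixes.  Names and state values are illustrative only. */

#define TASK_RUNNING         0x0
#define TASK_INTERRUPTIBLE   0x1
#define TASK_UNINTERRUPTIBLE 0x2

struct task {
    int state;        /* current scheduler state */
    int saved_state;  /* state saved while blocked on an rt lock */
};

/* A "real" (non-lock) wakeup targeting a task whose original state is
 * parked in saved_state.  Before the fix this updated saved_state but
 * reported failure; after the fix it reports success. */
static int wake_up_saved(struct task *p, int state)
{
    if (p->saved_state & state) {
        p->saved_state = TASK_RUNNING;
        return 1;   /* success: the wakeup was consumed */
    }
    return 0;
}
```

A second wakeup against the same task then correctly reports failure, because saved_state is already TASK_RUNNING.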
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0156-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0156-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0156-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0156-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch)
@@ -0,0 +1,68 @@
+From b5f79e7b9e2d826a757bfdb57b332ec4f69a7304 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:27 -0500
+Subject: [PATCH 156/271] stop_machine: convert stop_machine_run() to
+ PREEMPT_RT
+
+Instead of playing with non-preemption, introduce explicit
+startup serialization. This is more robust and cleaner as
+well.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/stop_machine.c |   20 +++++++++++++++-----
+ 1 file changed, 15 insertions(+), 5 deletions(-)
+
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index 2f194e9..61779f8 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -135,6 +135,7 @@ void stop_one_cpu_nowait(unsigned int cpu, cpu_stop_fn_t fn, void *arg,
+ 
+ /* static data for stop_cpus */
+ static DEFINE_MUTEX(stop_cpus_mutex);
++static DEFINE_MUTEX(stopper_lock);
+ static DEFINE_PER_CPU(struct cpu_stop_work, stop_cpus_work);
+ 
+ static void queue_stop_cpus_work(const struct cpumask *cpumask,
+@@ -153,15 +154,14 @@ static void queue_stop_cpus_work(const struct cpumask *cpumask,
+ 	}
+ 
+ 	/*
+-	 * Disable preemption while queueing to avoid getting
+-	 * preempted by a stopper which might wait for other stoppers
+-	 * to enter @fn which can lead to deadlock.
++	 * Make sure that all work is queued on all cpus before
++	 * any of the cpus can execute it.
+ 	 */
+-	preempt_disable();
++	mutex_lock(&stopper_lock);
+ 	for_each_cpu(cpu, cpumask)
+ 		cpu_stop_queue_work(&per_cpu(cpu_stopper, cpu),
+ 				    &per_cpu(stop_cpus_work, cpu));
+-	preempt_enable();
++	mutex_unlock(&stopper_lock);
+ }
+ 
+ static int __stop_cpus(const struct cpumask *cpumask,
+@@ -275,6 +275,16 @@ repeat:
+ 
+ 		__set_current_state(TASK_RUNNING);
+ 
++		/*
++		 * Wait until the stopper finished scheduling on all
++		 * cpus
++		 */
++		mutex_lock(&stopper_lock);
++		/*
++		 * Let other cpu threads continue as well
++		 */
++		mutex_unlock(&stopper_lock);
++
+ 		/* cpu stop callbacks are not allowed to sleep */
+ 		preempt_disable();
+ 
+-- 
+1.7.10
+

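[Editor's note] The ordering constraint the patch above enforces with stopper_lock can be modeled in a few lines of userspace C (single-threaded, names invented): all per-cpu work must be queued before any stopper is allowed to execute, otherwise an early stopper could wait for peers whose work was never queued and deadlock.

```c
#include <assert.h>

/* Single-threaded model of the queue-all-then-run ordering that
 * stopper_lock provides in the patch above.  Illustrative only. */

#define NR_CPUS 4

static int queued[NR_CPUS];
static int gate_locked;          /* stands in for stopper_lock */

static void queue_all(void)
{
    int cpu;

    gate_locked = 1;             /* mutex_lock(&stopper_lock) */
    for (cpu = 0; cpu < NR_CPUS; cpu++)
        queued[cpu] = 1;
    gate_locked = 0;             /* mutex_unlock(&stopper_lock) */
}

/* A stopper first takes and drops the gate, so it cannot start while
 * queueing is still in progress. */
static int stopper_may_run(int cpu)
{
    if (gate_locked)
        return 0;                /* would block on stopper_lock */
    return queued[cpu];
}
```

Using a sleeping mutex instead of preempt_disable() is what makes this safe on PREEMPT_RT, where queueing paths may sleep.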
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0157-stomp-machine-mark-stomper-thread.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0157-stomp-machine-mark-stomper-thread.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0157-stomp-machine-mark-stomper-thread.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0157-stomp-machine-mark-stomper-thread.patch.patch)
@@ -0,0 +1,38 @@
+From ad6e463b3ca9ad6826b0e4812d33d9ff73a28cd6 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 19:53:19 +0200
+Subject: [PATCH 157/271] stomp-machine-mark-stomper-thread.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/sched.h |    1 +
+ kernel/stop_machine.c |    1 +
+ 2 files changed, 2 insertions(+)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 175aaee..6f10df5 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1822,6 +1822,7 @@ extern void thread_group_times(struct task_struct *p, cputime_t *ut, cputime_t *
+ #define PF_FROZEN	0x00010000	/* frozen for system suspend */
+ #define PF_FSTRANS	0x00020000	/* inside a filesystem transaction */
+ #define PF_KSWAPD	0x00040000	/* I am kswapd */
++#define PF_STOMPER	0x00080000	/* I am a stomp machine thread */
+ #define PF_LESS_THROTTLE 0x00100000	/* Throttle me less: I clean memory */
+ #define PF_KTHREAD	0x00200000	/* I am a kernel thread */
+ #define PF_RANDOMIZE	0x00400000	/* randomize virtual address space */
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index 61779f8..484a335 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -327,6 +327,7 @@ static int __cpuinit cpu_stop_cpu_callback(struct notifier_block *nfb,
+ 		if (IS_ERR(p))
+ 			return notifier_from_errno(PTR_ERR(p));
+ 		get_task_struct(p);
++		p->flags |= PF_STOMPER;
+ 		kthread_bind(p, cpu);
+ 		sched_set_stop_task(cpu, p);
+ 		stopper->thread = p;
+-- 
+1.7.10
+

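[Editor's note] PF_STOMPER is just another bit in the per-task flags word; later RT patches test it to exempt stopper threads from cpu pinning. A minimal sketch of that flag bookkeeping (flag values copied from the hunk above, helpers invented):

```c
#include <assert.h>

/* Sketch of the PF_STOMPER bookkeeping: each bit in the per-task flags
 * word marks a property of the task.  Values mirror the patch above;
 * the helper functions are illustrative, not kernel API. */

#define PF_KSWAPD   0x00040000
#define PF_STOMPER  0x00080000  /* stomp machine thread, added above */
#define PF_KTHREAD  0x00200000

struct task { unsigned int flags; };

static void mark_stomper(struct task *p)
{
    p->flags |= PF_STOMPER;     /* what the hunk above does at creation */
}

static int is_stomper(const struct task *p)
{
    return (p->flags & PF_STOMPER) != 0;
}
```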
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0158-stomp-machine-raw-lock.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0158-stomp-machine-raw-lock.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0158-stomp-machine-raw-lock.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0158-stomp-machine-raw-lock.patch.patch)
@@ -0,0 +1,180 @@
+From 49142ffd343b0f7166a2d6a5d73bf2567e545673 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 29 Jun 2011 11:01:51 +0200
+Subject: [PATCH 158/271] stomp-machine-raw-lock.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/stop_machine.c |   58 ++++++++++++++++++++++++++++++++++---------------
+ 1 file changed, 41 insertions(+), 17 deletions(-)
+
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index 484a335..561ba3a 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -29,12 +29,12 @@ struct cpu_stop_done {
+ 	atomic_t		nr_todo;	/* nr left to execute */
+ 	bool			executed;	/* actually executed? */
+ 	int			ret;		/* collected return value */
+-	struct completion	completion;	/* fired if nr_todo reaches 0 */
++	struct task_struct	*waiter;	/* woken when nr_todo reaches 0 */
+ };
+ 
+ /* the actual stopper, one per every possible cpu, enabled on online cpus */
+ struct cpu_stopper {
+-	spinlock_t		lock;
++	raw_spinlock_t		lock;
+ 	bool			enabled;	/* is this stopper enabled? */
+ 	struct list_head	works;		/* list of pending works */
+ 	struct task_struct	*thread;	/* stopper thread */
+@@ -47,7 +47,7 @@ static void cpu_stop_init_done(struct cpu_stop_done *done, unsigned int nr_todo)
+ {
+ 	memset(done, 0, sizeof(*done));
+ 	atomic_set(&done->nr_todo, nr_todo);
+-	init_completion(&done->completion);
++	done->waiter = current;
+ }
+ 
+ /* signal completion unless @done is NULL */
+@@ -56,8 +56,10 @@ static void cpu_stop_signal_done(struct cpu_stop_done *done, bool executed)
+ 	if (done) {
+ 		if (executed)
+ 			done->executed = true;
+-		if (atomic_dec_and_test(&done->nr_todo))
+-			complete(&done->completion);
++		if (atomic_dec_and_test(&done->nr_todo)) {
++			wake_up_process(done->waiter);
++			done->waiter = NULL;
++		}
+ 	}
+ }
+ 
+@@ -67,7 +69,7 @@ static void cpu_stop_queue_work(struct cpu_stopper *stopper,
+ {
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&stopper->lock, flags);
++	raw_spin_lock_irqsave(&stopper->lock, flags);
+ 
+ 	if (stopper->enabled) {
+ 		list_add_tail(&work->list, &stopper->works);
+@@ -75,7 +77,23 @@ static void cpu_stop_queue_work(struct cpu_stopper *stopper,
+ 	} else
+ 		cpu_stop_signal_done(work->done, false);
+ 
+-	spin_unlock_irqrestore(&stopper->lock, flags);
++	raw_spin_unlock_irqrestore(&stopper->lock, flags);
++}
++
++static void wait_for_stop_done(struct cpu_stop_done *done)
++{
++	set_current_state(TASK_UNINTERRUPTIBLE);
++	while (atomic_read(&done->nr_todo)) {
++		schedule();
++		set_current_state(TASK_UNINTERRUPTIBLE);
++	}
++	/*
++	 * We need to wait until cpu_stop_signal_done() has cleared
++	 * done->waiter.
++	 */
++	while (done->waiter)
++		cpu_relax();
++	set_current_state(TASK_RUNNING);
+ }
+ 
+ /**
+@@ -109,7 +127,7 @@ int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
+ 
+ 	cpu_stop_init_done(&done, 1);
+ 	cpu_stop_queue_work(&per_cpu(cpu_stopper, cpu), &work);
+-	wait_for_completion(&done.completion);
++	wait_for_stop_done(&done);
+ 	return done.executed ? done.ret : -ENOENT;
+ }
+ 
+@@ -171,7 +189,7 @@ static int __stop_cpus(const struct cpumask *cpumask,
+ 
+ 	cpu_stop_init_done(&done, cpumask_weight(cpumask));
+ 	queue_stop_cpus_work(cpumask, fn, arg, &done);
+-	wait_for_completion(&done.completion);
++	wait_for_stop_done(&done);
+ 	return done.executed ? done.ret : -ENOENT;
+ }
+ 
+@@ -259,13 +277,13 @@ repeat:
+ 	}
+ 
+ 	work = NULL;
+-	spin_lock_irq(&stopper->lock);
++	raw_spin_lock_irq(&stopper->lock);
+ 	if (!list_empty(&stopper->works)) {
+ 		work = list_first_entry(&stopper->works,
+ 					struct cpu_stop_work, list);
+ 		list_del_init(&work->list);
+ 	}
+-	spin_unlock_irq(&stopper->lock);
++	raw_spin_unlock_irq(&stopper->lock);
+ 
+ 	if (work) {
+ 		cpu_stop_fn_t fn = work->fn;
+@@ -299,7 +317,13 @@ repeat:
+ 			  kallsyms_lookup((unsigned long)fn, NULL, NULL, NULL,
+ 					  ksym_buf), arg);
+ 
++		/*
++		 * Make sure that the wakeup and setting done->waiter
++		 * to NULL is atomic.
++		 */
++		local_irq_disable();
+ 		cpu_stop_signal_done(done, true);
++		local_irq_enable();
+ 	} else
+ 		schedule();
+ 
+@@ -337,9 +361,9 @@ static int __cpuinit cpu_stop_cpu_callback(struct notifier_block *nfb,
+ 		/* strictly unnecessary, as first user will wake it */
+ 		wake_up_process(stopper->thread);
+ 		/* mark enabled */
+-		spin_lock_irq(&stopper->lock);
++		raw_spin_lock_irq(&stopper->lock);
+ 		stopper->enabled = true;
+-		spin_unlock_irq(&stopper->lock);
++		raw_spin_unlock_irq(&stopper->lock);
+ 		break;
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+@@ -352,11 +376,11 @@ static int __cpuinit cpu_stop_cpu_callback(struct notifier_block *nfb,
+ 		/* kill the stopper */
+ 		kthread_stop(stopper->thread);
+ 		/* drain remaining works */
+-		spin_lock_irq(&stopper->lock);
++		raw_spin_lock_irq(&stopper->lock);
+ 		list_for_each_entry(work, &stopper->works, list)
+ 			cpu_stop_signal_done(work->done, false);
+ 		stopper->enabled = false;
+-		spin_unlock_irq(&stopper->lock);
++		raw_spin_unlock_irq(&stopper->lock);
+ 		/* release the stopper */
+ 		put_task_struct(stopper->thread);
+ 		stopper->thread = NULL;
+@@ -387,7 +411,7 @@ static int __init cpu_stop_init(void)
+ 	for_each_possible_cpu(cpu) {
+ 		struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
+ 
+-		spin_lock_init(&stopper->lock);
++		raw_spin_lock_init(&stopper->lock);
+ 		INIT_LIST_HEAD(&stopper->works);
+ 	}
+ 
+@@ -581,7 +605,7 @@ int stop_machine_from_inactive_cpu(int (*fn)(void *), void *data,
+ 	ret = stop_machine_cpu_stop(&smdata);
+ 
+ 	/* Busy wait for completion. */
+-	while (!completion_done(&done.completion))
++	while (atomic_read(&done.nr_todo))
+ 		cpu_relax();
+ 
+ 	mutex_unlock(&stop_cpus_mutex);
+-- 
+1.7.10
+

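[Editor's note] The conversion above replaces a completion with a raw waiter pointer plus an atomic countdown, because completions use sleeping locks that are not usable from the raw contexts involved here. A simplified single-threaded model of the handshake (invented names; the real code schedules and spins where this just checks flags):

```c
#include <assert.h>
#include <stddef.h>

/* Model of the done-handshake the patch above converts from a
 * completion to a waiter pointer.  Illustrative names only. */

struct stop_done {
    int nr_todo;          /* stoppers still to signal */
    const char *waiter;   /* stands in for the task_struct pointer */
};

static void signal_done(struct stop_done *d)
{
    if (--d->nr_todo == 0) {
        /* wake_up_process(done->waiter) would happen here */
        d->waiter = NULL; /* lets the waiter's spin on ->waiter end */
    }
}

static int wait_done(const struct stop_done *d)
{
    /* the real code schedules while nr_todo != 0, then busy-waits
     * until ->waiter is cleared; here we just report completion */
    return d->nr_todo == 0 && d->waiter == NULL;
}
```

The second spin on ->waiter matters: it keeps the done structure alive on the waiter's stack until the last signaller has finished touching it.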
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0159-hotplug-Lightweight-get-online-cpus.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0159-hotplug-Lightweight-get-online-cpus.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0159-hotplug-Lightweight-get-online-cpus.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0159-hotplug-Lightweight-get-online-cpus.patch)
@@ -0,0 +1,216 @@
+From a5c652463db58de2bbbf68dbd27d8779ff1b6fca Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 15 Jun 2011 12:36:06 +0200
+Subject: [PATCH 159/271] hotplug: Lightweight get online cpus
+
+get_online_cpus() is a heavyweight function which involves a global
+mutex. migrate_disable() wants a simpler construct which only prevents
+a CPU from going down while a task is in a migrate-disabled section.
+
+Implement a per cpu lockless mechanism, which serializes only in the
+real unplug case on a global mutex. That serialization affects only
+tasks on the cpu which should be brought down.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/cpu.h |    4 ++
+ kernel/cpu.c        |  127 +++++++++++++++++++++++++++++++++++++++++++++++++--
+ 2 files changed, 128 insertions(+), 3 deletions(-)
+
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 6cb60fd..c46ec3e 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -167,6 +167,8 @@ extern struct sysdev_class cpu_sysdev_class;
+ 
+ extern void get_online_cpus(void);
+ extern void put_online_cpus(void);
++extern void pin_current_cpu(void);
++extern void unpin_current_cpu(void);
+ #define hotcpu_notifier(fn, pri)	cpu_notifier(fn, pri)
+ #define register_hotcpu_notifier(nb)	register_cpu_notifier(nb)
+ #define unregister_hotcpu_notifier(nb)	unregister_cpu_notifier(nb)
+@@ -189,6 +191,8 @@ static inline void cpu_hotplug_driver_unlock(void)
+ 
+ #define get_online_cpus()	do { } while (0)
+ #define put_online_cpus()	do { } while (0)
++static inline void pin_current_cpu(void) { }
++static inline void unpin_current_cpu(void) { }
+ #define hotcpu_notifier(fn, pri)	do { (void)(fn); } while (0)
+ /* These aren't inline functions due to a GCC bug. */
+ #define register_hotcpu_notifier(nb)	({ (void)(nb); 0; })
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 563f136..df0a2fc 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -58,6 +58,102 @@ static struct {
+ 	.refcount = 0,
+ };
+ 
++struct hotplug_pcp {
++	struct task_struct *unplug;
++	int refcount;
++	struct completion synced;
++};
++
++static DEFINE_PER_CPU(struct hotplug_pcp, hotplug_pcp);
++
++/**
++ * pin_current_cpu - Prevent the current cpu from being unplugged
++ *
++ * Lightweight version of get_online_cpus() to prevent cpu from being
++ * unplugged when code runs in a migration disabled region.
++ *
++ * Must be called with preemption disabled (preempt_count = 1)!
++ */
++void pin_current_cpu(void)
++{
++	struct hotplug_pcp *hp = &__get_cpu_var(hotplug_pcp);
++
++retry:
++	if (!hp->unplug || hp->refcount || preempt_count() > 1 ||
++	    hp->unplug == current || (current->flags & PF_STOMPER)) {
++		hp->refcount++;
++		return;
++	}
++	preempt_enable();
++	mutex_lock(&cpu_hotplug.lock);
++	mutex_unlock(&cpu_hotplug.lock);
++	preempt_disable();
++	goto retry;
++}
++
++/**
++ * unpin_current_cpu - Allow unplug of current cpu
++ *
++ * Must be called with preemption or interrupts disabled!
++ */
++void unpin_current_cpu(void)
++{
++	struct hotplug_pcp *hp = &__get_cpu_var(hotplug_pcp);
++
++	WARN_ON(hp->refcount <= 0);
++
++	/* This is safe. sync_unplug_thread is pinned to this cpu */
++	if (!--hp->refcount && hp->unplug && hp->unplug != current &&
++	    !(current->flags & PF_STOMPER))
++		wake_up_process(hp->unplug);
++}
++
++/*
++ * FIXME: Is this really correct under all circumstances ?
++ */
++static int sync_unplug_thread(void *data)
++{
++	struct hotplug_pcp *hp = data;
++
++	preempt_disable();
++	hp->unplug = current;
++	set_current_state(TASK_UNINTERRUPTIBLE);
++	while (hp->refcount) {
++		schedule_preempt_disabled();
++		set_current_state(TASK_UNINTERRUPTIBLE);
++	}
++	set_current_state(TASK_RUNNING);
++	preempt_enable();
++	complete(&hp->synced);
++	return 0;
++}
++
++/*
++ * Start the sync_unplug_thread on the target cpu and wait for it to
++ * complete.
++ */
++static int cpu_unplug_begin(unsigned int cpu)
++{
++	struct hotplug_pcp *hp = &per_cpu(hotplug_pcp, cpu);
++	struct task_struct *tsk;
++
++	init_completion(&hp->synced);
++	tsk = kthread_create(sync_unplug_thread, hp, "sync_unplug/%d\n", cpu);
++	if (IS_ERR(tsk))
++		return (PTR_ERR(tsk));
++	kthread_bind(tsk, cpu);
++	wake_up_process(tsk);
++	wait_for_completion(&hp->synced);
++	return 0;
++}
++
++static void cpu_unplug_done(unsigned int cpu)
++{
++	struct hotplug_pcp *hp = &per_cpu(hotplug_pcp, cpu);
++
++	hp->unplug = NULL;
++}
++
+ void get_online_cpus(void)
+ {
+ 	might_sleep();
+@@ -211,13 +307,14 @@ static int __ref take_cpu_down(void *_param)
+ /* Requires cpu_add_remove_lock to be held */
+ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
+ {
+-	int err, nr_calls = 0;
++	int mycpu, err, nr_calls = 0;
+ 	void *hcpu = (void *)(long)cpu;
+ 	unsigned long mod = tasks_frozen ? CPU_TASKS_FROZEN : 0;
+ 	struct take_cpu_down_param tcd_param = {
+ 		.mod = mod,
+ 		.hcpu = hcpu,
+ 	};
++	cpumask_var_t cpumask;
+ 
+ 	if (num_online_cpus() == 1)
+ 		return -EBUSY;
+@@ -225,7 +322,20 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
+ 	if (!cpu_online(cpu))
+ 		return -EINVAL;
+ 
+-	cpu_hotplug_begin();
++	/* Move the downtaker off the unplug cpu */
++	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
++		return -ENOMEM;
++	cpumask_andnot(cpumask, cpu_online_mask, cpumask_of(cpu));
++	set_cpus_allowed_ptr(current, cpumask);
++	free_cpumask_var(cpumask);
++	preempt_disable();
++	mycpu = smp_processor_id();
++	if (mycpu == cpu) {
++		printk(KERN_ERR "Yuck! Still on unplug CPU\n!");
++		preempt_enable();
++		return -EBUSY;
++	}
++	preempt_enable();
+ 
+ 	err = __cpu_notify(CPU_DOWN_PREPARE | mod, hcpu, -1, &nr_calls);
+ 	if (err) {
+@@ -233,7 +343,16 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
+ 		__cpu_notify(CPU_DOWN_FAILED | mod, hcpu, nr_calls, NULL);
+ 		printk("%s: attempt to take down CPU %u failed\n",
+ 				__func__, cpu);
+-		goto out_release;
++		goto out_cancel;
++	}
++
++	cpu_hotplug_begin();
++	err = cpu_unplug_begin(cpu);
++	if (err) {
++		nr_calls--;
++		__cpu_notify(CPU_DOWN_FAILED | mod, hcpu, nr_calls, NULL);
++		printk("cpu_unplug_begin(%d) failed\n", cpu);
++		goto out_cancel;
+ 	}
+ 
+ 	err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
+@@ -264,6 +383,8 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
+ 	check_for_tasks(cpu);
+ 
+ out_release:
++	cpu_unplug_done(cpu);
++out_cancel:
+ 	cpu_hotplug_done();
+ 	if (!err)
+ 		cpu_notify_nofail(CPU_POST_DEAD | mod, hcpu);
+-- 
+1.7.10
+

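[Editor's note] The fast path of pin_current_cpu()/unpin_current_cpu() above is a plain per-cpu refcount; it only serializes against the global hotplug mutex when an unplug of this cpu is actually in flight. A heavily simplified userspace model (hypothetical names, no real blocking or per-cpu data):

```c
#include <assert.h>
#include <stddef.h>

/* Model of the pin/unpin fast path from the patch above. */

struct hotplug_pcp {
    const char *unplug;   /* non-NULL while an unplug is in progress */
    int refcount;         /* pins held on this cpu */
};

/* Returns 1 if the pin was taken on the fast path, 0 if the caller
 * would have to block until the unplug finishes. */
static int pin_cpu(struct hotplug_pcp *hp)
{
    if (!hp->unplug || hp->refcount) {
        hp->refcount++;
        return 1;
    }
    return 0;   /* real code drops preemption and waits on the mutex */
}

static void unpin_cpu(struct hotplug_pcp *hp)
{
    assert(hp->refcount > 0);
    --hp->refcount;
    /* with refcount at 0, an in-progress unplug would now be woken */
}
```

Note that an already-held refcount keeps the fast path open even while an unplug is pending, matching the retry logic in the patch.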
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0160-hotplug-sync_unplug-No.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0160-hotplug-sync_unplug-No.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0160-hotplug-sync_unplug-No.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0160-hotplug-sync_unplug-No.patch)
@@ -0,0 +1,30 @@
+From cd72fd6de9a59e824a015a4c75b9be346a83cd12 Mon Sep 17 00:00:00 2001
+From: Yong Zhang <yong.zhang0 at gmail.com>
+Date: Sun, 16 Oct 2011 18:56:43 +0800
+Subject: [PATCH 160/271] hotplug: sync_unplug: No "\n" in task name
+
+Otherwise the output will look a little odd.
+
+Signed-off-by: Yong Zhang <yong.zhang0 at gmail.com>
+Link: http://lkml.kernel.org/r/1318762607-2261-2-git-send-email-yong.zhang0@gmail.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/cpu.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index df0a2fc..171cb6c 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -138,7 +138,7 @@ static int cpu_unplug_begin(unsigned int cpu)
+ 	struct task_struct *tsk;
+ 
+ 	init_completion(&hp->synced);
+-	tsk = kthread_create(sync_unplug_thread, hp, "sync_unplug/%d\n", cpu);
++	tsk = kthread_create(sync_unplug_thread, hp, "sync_unplug/%d", cpu);
+ 	if (IS_ERR(tsk))
+ 		return (PTR_ERR(tsk));
+ 	kthread_bind(tsk, cpu);
+-- 
+1.7.10
+

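[Editor's note] The reason the trailing "\n" matters: kthread_create() uses its format string as the task's comm name, so the newline ends up embedded in the name shown by ps and trace output. An illustrative userspace reproduction (TASK_COMM_LEN and the helper are stand-ins, not kernel API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define TASK_COMM_LEN 16  /* same limit the kernel uses for comm */

/* Mimics how kthread_create() formats the thread name. */
static void make_name(char *comm, const char *fmt, int cpu)
{
    snprintf(comm, TASK_COMM_LEN, fmt, cpu);
}
```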
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0161-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0161-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0161-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0161-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch)
@@ -0,0 +1,41 @@
+From d62d52eb798f9a9c3d62d8a93205f7556df50cd9 Mon Sep 17 00:00:00 2001
+From: Yong Zhang <yong.zhang0 at gmail.com>
+Date: Thu, 28 Jul 2011 11:16:00 +0800
+Subject: [PATCH 161/271] hotplug: Reread hotplug_pcp on pin_current_cpu()
+ retry
+
+When a retry happens, it is likely that the task has been migrated to
+another cpu (unless the unplug failed), but it still dereferences the
+original hotplug_pcp per-cpu data.
+
+Update the pointer to hotplug_pcp in the retry path, so it points to
+the current cpu.
+
+Signed-off-by: Yong Zhang <yong.zhang0 at gmail.com>
+Cc: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/r/20110728031600.GA338@windriver.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/cpu.c |    4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 171cb6c..80c72da 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -76,9 +76,11 @@ static DEFINE_PER_CPU(struct hotplug_pcp, hotplug_pcp);
+  */
+ void pin_current_cpu(void)
+ {
+-	struct hotplug_pcp *hp = &__get_cpu_var(hotplug_pcp);
++	struct hotplug_pcp *hp;
+ 
+ retry:
++	hp = &__get_cpu_var(hotplug_pcp);
++
+ 	if (!hp->unplug || hp->refcount || preempt_count() > 1 ||
+ 	    hp->unplug == current || (current->flags & PF_STOMPER)) {
+ 		hp->refcount++;
+-- 
+1.7.10
+

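[Editor's note] The stale-pointer hazard the patch above fixes can be shown with a tiny per-cpu model (invented names): a pointer fetched before blocking keeps referring to the old cpu's slot after the task migrates, so it must be re-read on every retry.

```c
#include <assert.h>

/* Model of per-cpu data access around pin_current_cpu()'s retry loop. */

#define NR_CPUS 2

struct hotplug_pcp { int cpu; };

static struct hotplug_pcp per_cpu_hp[NR_CPUS] = { { 0 }, { 1 } };
static int current_cpu;   /* stands in for smp_processor_id() */

/* Stands in for __get_cpu_var(hotplug_pcp): always resolves against
 * the cpu the task is running on *now*. */
static struct hotplug_pcp *get_cpu_var_hp(void)
{
    return &per_cpu_hp[current_cpu];
}
```

This is exactly why the fix moves the assignment of hp below the retry: label instead of doing it once at function entry.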
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0162-sched-migrate-disable.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0162-sched-migrate-disable.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0162-sched-migrate-disable.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0162-sched-migrate-disable.patch.patch)
@@ -0,0 +1,217 @@
+From e2afb78f43acf9c2adc574d8361bb4146afc21f7 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 16 Jun 2011 13:26:08 +0200
+Subject: [PATCH 162/271] sched-migrate-disable.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/preempt.h |    8 +++++
+ include/linux/sched.h   |   13 +++++--
+ include/linux/smp.h     |    1 -
+ kernel/sched.c          |   88 ++++++++++++++++++++++++++++++++++++++++++++---
+ lib/smp_processor_id.c  |    6 ++--
+ 5 files changed, 104 insertions(+), 12 deletions(-)
+
+diff --git a/include/linux/preempt.h b/include/linux/preempt.h
+index 29db25f..363e5e2 100644
+--- a/include/linux/preempt.h
++++ b/include/linux/preempt.h
+@@ -108,6 +108,14 @@ do { \
+ 
+ #endif /* CONFIG_PREEMPT_COUNT */
+ 
++#ifdef CONFIG_SMP
++extern void migrate_disable(void);
++extern void migrate_enable(void);
++#else
++# define migrate_disable()		do { } while (0)
++# define migrate_enable()		do { } while (0)
++#endif
++
+ #ifdef CONFIG_PREEMPT_RT_FULL
+ # define preempt_disable_rt()		preempt_disable()
+ # define preempt_enable_rt()		preempt_enable()
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 6f10df5..712e991 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1259,6 +1259,7 @@ struct task_struct {
+ #endif
+ 
+ 	unsigned int policy;
++	int migrate_disable;
+ 	cpumask_t cpus_allowed;
+ 
+ #ifdef CONFIG_PREEMPT_RCU
+@@ -1598,9 +1599,6 @@ struct task_struct {
+ #endif
+ };
+ 
+-/* Future-safe accessor for struct task_struct's cpus_allowed. */
+-#define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed)
+-
+ #ifdef CONFIG_PREEMPT_RT_FULL
+ static inline bool cur_pf_disabled(void) { return current->pagefault_disabled; }
+ #else
+@@ -2683,6 +2681,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+ 
+ #endif /* CONFIG_SMP */
+ 
++/* Future-safe accessor for struct task_struct's cpus_allowed. */
++static inline const struct cpumask *tsk_cpus_allowed(struct task_struct *p)
++{
++	if (p->migrate_disable)
++		return cpumask_of(task_cpu(p));
++
++	return &p->cpus_allowed;
++}
++
+ extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
+ extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
+ 
+diff --git a/include/linux/smp.h b/include/linux/smp.h
+index e6c58d8..94c8430 100644
+--- a/include/linux/smp.h
++++ b/include/linux/smp.h
+@@ -80,7 +80,6 @@ void __smp_call_function_single(int cpuid, struct call_single_data *data,
+ 
+ int smp_call_function_any(const struct cpumask *mask,
+ 			  smp_call_func_t func, void *info, int wait);
+-
+ /*
+  * Generic and arch helpers
+  */
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 2c803b2..cdad99c 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -6223,11 +6223,12 @@ static inline void sched_init_granularity(void)
+ #ifdef CONFIG_SMP
+ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+ {
+-	if (p->sched_class && p->sched_class->set_cpus_allowed)
+-		p->sched_class->set_cpus_allowed(p, new_mask);
+-
++	if (!p->migrate_disable) {
++		if (p->sched_class && p->sched_class->set_cpus_allowed)
++			p->sched_class->set_cpus_allowed(p, new_mask);
++		p->rt.nr_cpus_allowed = cpumask_weight(new_mask);
++	}
+ 	cpumask_copy(&p->cpus_allowed, new_mask);
+-	p->rt.nr_cpus_allowed = cpumask_weight(new_mask);
+ }
+ 
+ /*
+@@ -6278,7 +6279,7 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
+ 	do_set_cpus_allowed(p, new_mask);
+ 
+ 	/* Can the task run on the task's current CPU? If so, we're done */
+-	if (cpumask_test_cpu(task_cpu(p), new_mask))
++	if (cpumask_test_cpu(task_cpu(p), new_mask) || p->migrate_disable)
+ 		goto out;
+ 
+ 	dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);
+@@ -6297,6 +6298,83 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
+ 
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	const struct cpumask *mask;
++	unsigned long flags;
++	struct rq *rq;
++
++	preempt_disable();
++	if (p->migrate_disable) {
++		p->migrate_disable++;
++		preempt_enable();
++		return;
++	}
++
++	pin_current_cpu();
++	if (unlikely(!scheduler_running)) {
++		p->migrate_disable = 1;
++		preempt_enable();
++		return;
++	}
++	rq = task_rq_lock(p, &flags);
++	p->migrate_disable = 1;
++	mask = tsk_cpus_allowed(p);
++
++	WARN_ON(!cpumask_test_cpu(smp_processor_id(), mask));
++
++	if (!cpumask_equal(&p->cpus_allowed, mask)) {
++		if (p->sched_class->set_cpus_allowed)
++			p->sched_class->set_cpus_allowed(p, mask);
++		p->rt.nr_cpus_allowed = cpumask_weight(mask);
++	}
++	task_rq_unlock(rq, p, &flags);
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++	const struct cpumask *mask;
++	unsigned long flags;
++	struct rq *rq;
++
++	WARN_ON_ONCE(p->migrate_disable <= 0);
++
++	preempt_disable();
++	if (p->migrate_disable > 1) {
++		p->migrate_disable--;
++		preempt_enable();
++		return;
++	}
++
++	if (unlikely(!scheduler_running)) {
++		p->migrate_disable = 0;
++		unpin_current_cpu();
++		preempt_enable();
++		return;
++	}
++
++	rq = task_rq_lock(p, &flags);
++	p->migrate_disable = 0;
++	mask = tsk_cpus_allowed(p);
++
++	WARN_ON(!cpumask_test_cpu(smp_processor_id(), mask));
++
++	if (!cpumask_equal(&p->cpus_allowed, mask)) {
++		if (p->sched_class->set_cpus_allowed)
++			p->sched_class->set_cpus_allowed(p, mask);
++		p->rt.nr_cpus_allowed = cpumask_weight(mask);
++	}
++
++	task_rq_unlock(rq, p, &flags);
++	unpin_current_cpu();
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
++
+ /*
+  * Move (not current) task off this cpu, onto dest cpu. We're doing
+  * this because either it can't run here any more (set_cpus_allowed()
+diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c
+index 503f087..60a7569 100644
+--- a/lib/smp_processor_id.c
++++ b/lib/smp_processor_id.c
+@@ -39,9 +39,9 @@ notrace unsigned int debug_smp_processor_id(void)
+ 	if (!printk_ratelimit())
+ 		goto out_enable;
+ 
+-	printk(KERN_ERR "BUG: using smp_processor_id() in preemptible [%08x] "
+-			"code: %s/%d\n",
+-			preempt_count() - 1, current->comm, current->pid);
++	printk(KERN_ERR "BUG: using smp_processor_id() in preemptible [%08x %08x] "
++	       "code: %s/%d\n", preempt_count() - 1,
++	       current->migrate_disable, current->comm, current->pid);
+ 	print_symbol("caller is %s\n", (long)__builtin_return_address(0));
+ 	dump_stack();
+ 
+-- 
+1.7.10
+

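[Editor's note] The migrate_disable()/migrate_enable() pair introduced above nests: only the outermost disable pins the current cpu and restricts the affinity mask, and only the matching outermost enable releases it. A minimal userspace sketch of that nesting discipline (invented names and a flag in place of pin_current_cpu()):

```c
#include <assert.h>

/* Sketch of the nesting semantics of migrate_disable()/migrate_enable()
 * from the patch above.  Illustrative only. */

struct task {
    int migrate_disable;  /* nesting depth, as in the real task_struct */
    int pinned;           /* stands in for pin_current_cpu() state */
};

static void migrate_disable_m(struct task *p)
{
    if (p->migrate_disable++)
        return;           /* nested call: just bump the counter */
    p->pinned = 1;        /* outermost call: pin the current cpu */
}

static void migrate_enable_m(struct task *p)
{
    assert(p->migrate_disable > 0);
    if (--p->migrate_disable)
        return;           /* still nested */
    p->pinned = 0;        /* outermost enable: unpin the cpu */
}
```

While the counter is non-zero, tsk_cpus_allowed() in the patch reports only the current cpu, which is how the scheduler is kept from migrating the task.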
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0163-hotplug-use-migrate-disable.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0163-hotplug-use-migrate-disable.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0163-hotplug-use-migrate-disable.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0163-hotplug-use-migrate-disable.patch.patch)
@@ -0,0 +1,42 @@
+From d5f0c2d5e3b534568024bc5a33e124af8a4eb41a Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 19:35:29 +0200
+Subject: [PATCH 163/271] hotplug-use-migrate-disable.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/cpu.c |    6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 80c72da..2179062 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -330,14 +330,13 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
+ 	cpumask_andnot(cpumask, cpu_online_mask, cpumask_of(cpu));
+ 	set_cpus_allowed_ptr(current, cpumask);
+ 	free_cpumask_var(cpumask);
+-	preempt_disable();
++	migrate_disable();
+ 	mycpu = smp_processor_id();
+ 	if (mycpu == cpu) {
+ 		printk(KERN_ERR "Yuck! Still on unplug CPU!\n");
+-		preempt_enable();
++		migrate_enable();
+ 		return -EBUSY;
+ 	}
+-	preempt_enable();
+ 
+ 	err = __cpu_notify(CPU_DOWN_PREPARE | mod, hcpu, -1, &nr_calls);
+ 	if (err) {
+@@ -387,6 +386,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
+ out_release:
+ 	cpu_unplug_done(cpu);
+ out_cancel:
++	migrate_enable();
+ 	cpu_hotplug_done();
+ 	if (!err)
+ 		cpu_notify_nofail(CPU_POST_DEAD | mod, hcpu);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0164-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0164-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0164-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0164-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch)
@@ -0,0 +1,65 @@
+From 16b4a3d8f67ba68c97da8ce4475f30295eddf7e9 Mon Sep 17 00:00:00 2001
+From: Yong Zhang <yong.zhang0 at gmail.com>
+Date: Sun, 16 Oct 2011 18:56:44 +0800
+Subject: [PATCH 164/271] hotplug: Call cpu_unplug_begin() before DOWN_PREPARE
+
+cpu_unplug_begin() should be called before CPU_DOWN_PREPARE, because
+at CPU_DOWN_PREPARE cpu_active is cleared and sched_domain is
+rebuilt. Otherwise the 'sync_unplug' thread will be running on the cpu
+on which it's created and not bound to the cpu which is about to go
+down.
+
+I found that by an incorrect warning on smp_processor_id() called by
+sync_unplug/1, and trace shows below:
+(echo 1 > /sys/device/system/cpu/cpu1/online)
+  bash-1664  [000]    83.136620: _cpu_down: Bind sync_unplug to cpu 1
+  bash-1664  [000]    83.136623: sched_wait_task: comm=sync_unplug/1 pid=1724 prio=120
+  bash-1664  [000]    83.136624: _cpu_down: Wake sync_unplug
+  bash-1664  [000]    83.136629: sched_wakeup: comm=sync_unplug/1 pid=1724 prio=120 success=1 target_cpu=000
+
+Wants to be folded back....
+
+Signed-off-by: Yong Zhang <yong.zhang0 at gmail.com>
+Link: http://lkml.kernel.org/r/1318762607-2261-3-git-send-email-yong.zhang0@gmail.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/cpu.c |   16 +++++++---------
+ 1 file changed, 7 insertions(+), 9 deletions(-)
+
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 2179062..fa40834 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -338,22 +338,20 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
+ 		return -EBUSY;
+ 	}
+ 
+-	err = __cpu_notify(CPU_DOWN_PREPARE | mod, hcpu, -1, &nr_calls);
++	cpu_hotplug_begin();
++	err = cpu_unplug_begin(cpu);
+ 	if (err) {
+-		nr_calls--;
+-		__cpu_notify(CPU_DOWN_FAILED | mod, hcpu, nr_calls, NULL);
+-		printk("%s: attempt to take down CPU %u failed\n",
+-				__func__, cpu);
++		printk("cpu_unplug_begin(%d) failed\n", cpu);
+ 		goto out_cancel;
+ 	}
+ 
+-	cpu_hotplug_begin();
+-	err = cpu_unplug_begin(cpu);
++	err = __cpu_notify(CPU_DOWN_PREPARE | mod, hcpu, -1, &nr_calls);
+ 	if (err) {
+ 		nr_calls--;
+ 		__cpu_notify(CPU_DOWN_FAILED | mod, hcpu, nr_calls, NULL);
+-		printk("cpu_unplug_begin(%d) failed\n", cpu);
+-		goto out_cancel;
++		printk("%s: attempt to take down CPU %u failed\n",
++				__func__, cpu);
++		goto out_release;
+ 	}
+ 
+ 	err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0165-ftrace-migrate-disable-tracing.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0165-ftrace-migrate-disable-tracing.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0165-ftrace-migrate-disable-tracing.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0165-ftrace-migrate-disable-tracing.patch.patch)
@@ -0,0 +1,85 @@
+From a4d7225ec1081258126615afcee55e543e7732b4 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 21:56:42 +0200
+Subject: [PATCH 165/271] ftrace-migrate-disable-tracing.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/ftrace_event.h |    3 ++-
+ kernel/trace/trace.c         |    9 ++++++---
+ kernel/trace/trace_events.c  |    1 +
+ kernel/trace/trace_output.c  |    5 +++++
+ 4 files changed, 14 insertions(+), 4 deletions(-)
+
+diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
+index c3da42d..7c5e176 100644
+--- a/include/linux/ftrace_event.h
++++ b/include/linux/ftrace_event.h
+@@ -49,7 +49,8 @@ struct trace_entry {
+ 	unsigned char		flags;
+ 	unsigned char		preempt_count;
+ 	int			pid;
+-	int			padding;
++	unsigned short		migrate_disable;
++	unsigned short		padding;
+ };
+ 
+ #define FTRACE_MAX_EVENT						\
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 697e49d..c44456b 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1123,6 +1123,8 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags,
+ 		((pc & HARDIRQ_MASK) ? TRACE_FLAG_HARDIRQ : 0) |
+ 		((pc & SOFTIRQ_MASK) ? TRACE_FLAG_SOFTIRQ : 0) |
+ 		(need_resched() ? TRACE_FLAG_NEED_RESCHED : 0);
++
++	entry->migrate_disable	= (tsk) ? tsk->migrate_disable & 0xFF : 0;
+ }
+ EXPORT_SYMBOL_GPL(tracing_generic_entry_update);
+ 
+@@ -1854,9 +1856,10 @@ static void print_lat_help_header(struct seq_file *m)
+ 	seq_puts(m, "#                | / _----=> need-resched    \n");
+ 	seq_puts(m, "#                || / _---=> hardirq/softirq \n");
+ 	seq_puts(m, "#                ||| / _--=> preempt-depth   \n");
+-	seq_puts(m, "#                |||| /     delay             \n");
+-	seq_puts(m, "#  cmd     pid   ||||| time  |   caller      \n");
+-	seq_puts(m, "#     \\   /      |||||  \\    |   /           \n");
++	seq_puts(m, "#                |||| / _--=> migrate-disable\n");
++	seq_puts(m, "#                ||||| /     delay           \n");
++	seq_puts(m, "#  cmd     pid   |||||| time  |   caller     \n");
++	seq_puts(m, "#     \\   /      |||||  \\   |   /          \n");
+ }
+ 
+ static void print_func_help_header(struct seq_file *m)
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index c212a7f..aca63cc 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -116,6 +116,7 @@ static int trace_define_common_fields(void)
+ 	__common_field(unsigned char, flags);
+ 	__common_field(unsigned char, preempt_count);
+ 	__common_field(int, pid);
++	__common_field(unsigned short, migrate_disable);
+ 	__common_field(int, padding);
+ 
+ 	return ret;
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index 1dcf253..bb9a58d 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -591,6 +591,11 @@ int trace_print_lat_fmt(struct trace_seq *s, struct trace_entry *entry)
+ 	else
+ 		ret = trace_seq_putc(s, '.');
+ 
++	if (entry->migrate_disable)
++		ret = trace_seq_printf(s, "%x", entry->migrate_disable);
++	else
++		ret = trace_seq_putc(s, '.');
++
+ 	return ret;
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0166-tracing-Show-padding-as-unsigned-short.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0166-tracing-Show-padding-as-unsigned-short.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0166-tracing-Show-padding-as-unsigned-short.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0166-tracing-Show-padding-as-unsigned-short.patch)
@@ -0,0 +1,50 @@
+From 7754a1d27ec0650158d7cbd4b4e17bebe04edda4 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Wed, 16 Nov 2011 13:19:35 -0500
+Subject: [PATCH 166/271] tracing: Show padding as unsigned short
+
+RT added a two-byte migrate-disable counter to the trace events, using
+two bytes of the existing padding to make the change. The structures
+were all updated correctly, but the display in the event formats was
+not:
+
+cat /debug/tracing/events/sched/sched_switch/format
+
+name: sched_switch
+ID: 51
+format:
+	field:unsigned short common_type;	offset:0;	size:2;	signed:0;
+	field:unsigned char common_flags;	offset:2;	size:1;	signed:0;
+	field:unsigned char common_preempt_count;	offset:3;	size:1;	signed:0;
+	field:int common_pid;	offset:4;	size:4;	signed:1;
+	field:unsigned short common_migrate_disable;	offset:8;	size:2;	signed:0;
+	field:int common_padding;	offset:10;	size:2;	signed:0;
+
+The field for common_padding has the correct size and offset, but the
+use of "int" might confuse some parsers (and people that are reading
+it). This needs to be changed to "unsigned short".
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Link: http://lkml.kernel.org/r/1321467575.4181.36.camel@frodo
+Cc: stable-rt at vger.kernel.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/trace/trace_events.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index aca63cc..69cc908 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -117,7 +117,7 @@ static int trace_define_common_fields(void)
+ 	__common_field(unsigned char, preempt_count);
+ 	__common_field(int, pid);
+ 	__common_field(unsigned short, migrate_disable);
+-	__common_field(int, padding);
++	__common_field(unsigned short, padding);
+ 
+ 	return ret;
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0167-migrate-disable-rt-variant.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0167-migrate-disable-rt-variant.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0167-migrate-disable-rt-variant.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0167-migrate-disable-rt-variant.patch.patch)
@@ -0,0 +1,33 @@
+From 5822eb0add831b3fbae61f8ed487c9715adca51e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 19:48:20 +0200
+Subject: [PATCH 167/271] migrate-disable-rt-variant.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/preempt.h |    4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/include/linux/preempt.h b/include/linux/preempt.h
+index 363e5e2..5aa7916 100644
+--- a/include/linux/preempt.h
++++ b/include/linux/preempt.h
+@@ -121,11 +121,15 @@ extern void migrate_enable(void);
+ # define preempt_enable_rt()		preempt_enable()
+ # define preempt_disable_nort()		do { } while (0)
+ # define preempt_enable_nort()		do { } while (0)
++# define migrate_disable_rt()		migrate_disable()
++# define migrate_enable_rt()		migrate_enable()
+ #else
+ # define preempt_disable_rt()		do { } while (0)
+ # define preempt_enable_rt()		do { } while (0)
+ # define preempt_disable_nort()		preempt_disable()
+ # define preempt_enable_nort()		preempt_enable()
++# define migrate_disable_rt()		do { } while (0)
++# define migrate_enable_rt()		do { } while (0)
+ #endif
+ 
+ #ifdef CONFIG_PREEMPT_NOTIFIERS
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0168-sched-Optimize-migrate_disable.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0168-sched-Optimize-migrate_disable.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0168-sched-Optimize-migrate_disable.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0168-sched-Optimize-migrate_disable.patch)
@@ -0,0 +1,73 @@
+From 15c21e23456de9e9759871f02a576e33c2ede92a Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Thu, 11 Aug 2011 15:03:35 +0200
+Subject: [PATCH 168/271] sched: Optimize migrate_disable
+
+Change from task_rq_lock() to raw_spin_lock(&rq->lock) to avoid a few
+atomic ops. See comment on why it should be safe.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/n/tip-cbz6hkl5r5mvwtx5s3tor2y6@git.kernel.org
+---
+ kernel/sched.c |   24 ++++++++++++++++++++----
+ 1 file changed, 20 insertions(+), 4 deletions(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index cdad99c..92c8fd9 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -6318,7 +6318,19 @@ void migrate_disable(void)
+ 		preempt_enable();
+ 		return;
+ 	}
+-	rq = task_rq_lock(p, &flags);
++
++	/*
++	 * Since this is always current we can get away with only locking
++	 * rq->lock, the ->cpus_allowed value can normally only be changed
++	 * while holding both p->pi_lock and rq->lock, but seeing that this
++	 * it current, we cannot actually be waking up, so all code that
++	 * is current, we cannot actually be waking up, so all code that
++	 *
++	 * Taking rq->lock serializes us against things like
++	 * set_cpus_allowed_ptr() that can still happen concurrently.
++	 */
++	rq = this_rq();
++	raw_spin_lock_irqsave(&rq->lock, flags);
+ 	p->migrate_disable = 1;
+ 	mask = tsk_cpus_allowed(p);
+ 
+@@ -6329,7 +6341,7 @@ void migrate_disable(void)
+ 			p->sched_class->set_cpus_allowed(p, mask);
+ 		p->rt.nr_cpus_allowed = cpumask_weight(mask);
+ 	}
+-	task_rq_unlock(rq, p, &flags);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
+ 	preempt_enable();
+ }
+ EXPORT_SYMBOL_GPL(migrate_disable);
+@@ -6357,7 +6369,11 @@ void migrate_enable(void)
+ 		return;
+ 	}
+ 
+-	rq = task_rq_lock(p, &flags);
++	/*
++	 * See comment in migrate_disable().
++	 */
++	rq = this_rq();
++	raw_spin_lock_irqsave(&rq->lock, flags);
+ 	p->migrate_disable = 0;
+ 	mask = tsk_cpus_allowed(p);
+ 
+@@ -6369,7 +6385,7 @@ void migrate_enable(void)
+ 		p->rt.nr_cpus_allowed = cpumask_weight(mask);
+ 	}
+ 
+-	task_rq_unlock(rq, p, &flags);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
+ 	unpin_current_cpu();
+ 	preempt_enable();
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0169-sched-Generic-migrate_disable.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0169-sched-Generic-migrate_disable.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0169-sched-Generic-migrate_disable.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0169-sched-Generic-migrate_disable.patch)
@@ -0,0 +1,190 @@
+From bc7ca2d81bb7bc98282286500395c6076918028b Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Thu, 11 Aug 2011 15:14:58 +0200
+Subject: [PATCH 169/271] sched: Generic migrate_disable
+
+Make migrate_disable() be a preempt_disable() for !rt kernels. This
+allows generic code to use it but still enforces that these code
+sections stay relatively small.
+
+A preemptible migrate_disable() accessible for general use would allow
+people to grow arbitrary per-cpu crap instead of cleaning these things
+up.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/n/tip-275i87sl8e1jcamtchmehonm@git.kernel.org
+---
+ include/linux/preempt.h |   21 +++++++++------------
+ include/linux/sched.h   |   13 +++++++++++++
+ include/linux/smp.h     |    9 ++-------
+ kernel/sched.c          |    6 ++++--
+ kernel/trace/trace.c    |    2 +-
+ lib/smp_processor_id.c  |    2 +-
+ 6 files changed, 30 insertions(+), 23 deletions(-)
+
+diff --git a/include/linux/preempt.h b/include/linux/preempt.h
+index 5aa7916..6450c01 100644
+--- a/include/linux/preempt.h
++++ b/include/linux/preempt.h
+@@ -108,28 +108,25 @@ do { \
+ 
+ #endif /* CONFIG_PREEMPT_COUNT */
+ 
+-#ifdef CONFIG_SMP
+-extern void migrate_disable(void);
+-extern void migrate_enable(void);
+-#else
+-# define migrate_disable()		do { } while (0)
+-# define migrate_enable()		do { } while (0)
+-#endif
+-
+ #ifdef CONFIG_PREEMPT_RT_FULL
+ # define preempt_disable_rt()		preempt_disable()
+ # define preempt_enable_rt()		preempt_enable()
+ # define preempt_disable_nort()		do { } while (0)
+ # define preempt_enable_nort()		do { } while (0)
+-# define migrate_disable_rt()		migrate_disable()
+-# define migrate_enable_rt()		migrate_enable()
++# ifdef CONFIG_SMP
++   extern void migrate_disable(void);
++   extern void migrate_enable(void);
++# else /* CONFIG_SMP */
++#  define migrate_disable()		do { } while (0)
++#  define migrate_enable()		do { } while (0)
++# endif /* CONFIG_SMP */
+ #else
+ # define preempt_disable_rt()		do { } while (0)
+ # define preempt_enable_rt()		do { } while (0)
+ # define preempt_disable_nort()		preempt_disable()
+ # define preempt_enable_nort()		preempt_enable()
+-# define migrate_disable_rt()		do { } while (0)
+-# define migrate_enable_rt()		do { } while (0)
++# define migrate_disable()		preempt_disable()
++# define migrate_enable()		preempt_enable()
+ #endif
+ 
+ #ifdef CONFIG_PREEMPT_NOTIFIERS
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 712e991..32e9e3f 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1259,7 +1259,9 @@ struct task_struct {
+ #endif
+ 
+ 	unsigned int policy;
++#ifdef CONFIG_PREEMPT_RT_FULL
+ 	int migrate_disable;
++#endif
+ 	cpumask_t cpus_allowed;
+ 
+ #ifdef CONFIG_PREEMPT_RCU
+@@ -2681,11 +2683,22 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+ 
+ #endif /* CONFIG_SMP */
+ 
++static inline int __migrate_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_PREEMPT_RT_FULL
++	return p->migrate_disable;
++#else
++	return 0;
++#endif
++}
++
+ /* Future-safe accessor for struct task_struct's cpus_allowed. */
+ static inline const struct cpumask *tsk_cpus_allowed(struct task_struct *p)
+ {
++#ifdef CONFIG_PREEMPT_RT_FULL
+ 	if (p->migrate_disable)
+ 		return cpumask_of(task_cpu(p));
++#endif
+ 
+ 	return &p->cpus_allowed;
+ }
+diff --git a/include/linux/smp.h b/include/linux/smp.h
+index 94c8430..78fd0a2 100644
+--- a/include/linux/smp.h
++++ b/include/linux/smp.h
+@@ -172,13 +172,8 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
+ #define get_cpu()		({ preempt_disable(); smp_processor_id(); })
+ #define put_cpu()		preempt_enable()
+ 
+-#ifndef CONFIG_PREEMPT_RT_FULL
+-# define get_cpu_light()	get_cpu()
+-# define put_cpu_light()	put_cpu()
+-#else
+-# define get_cpu_light()	({ migrate_disable(); smp_processor_id(); })
+-# define put_cpu_light()	migrate_enable()
+-#endif
++#define get_cpu_light()		({ migrate_disable(); smp_processor_id(); })
++#define put_cpu_light()		migrate_enable()
+ 
+ /*
+  * Callback to arch code if there's nosmp or maxcpus=0 on the
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 92c8fd9..46c7c53 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -6223,7 +6223,7 @@ static inline void sched_init_granularity(void)
+ #ifdef CONFIG_SMP
+ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+ {
+-	if (!p->migrate_disable) {
++	if (!__migrate_disabled(p)) {
+ 		if (p->sched_class && p->sched_class->set_cpus_allowed)
+ 			p->sched_class->set_cpus_allowed(p, new_mask);
+ 		p->rt.nr_cpus_allowed = cpumask_weight(new_mask);
+@@ -6279,7 +6279,7 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
+ 	do_set_cpus_allowed(p, new_mask);
+ 
+ 	/* Can the task run on the task's current CPU? If so, we're done */
+-	if (cpumask_test_cpu(task_cpu(p), new_mask) || p->migrate_disable)
++	if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
+ 		goto out;
+ 
+ 	dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);
+@@ -6298,6 +6298,7 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
+ void migrate_disable(void)
+ {
+ 	struct task_struct *p = current;
+@@ -6390,6 +6391,7 @@ void migrate_enable(void)
+ 	preempt_enable();
+ }
+ EXPORT_SYMBOL_GPL(migrate_enable);
++#endif /* CONFIG_PREEMPT_RT_FULL */
+ 
+ /*
+  * Move (not current) task off this cpu, onto dest cpu. We're doing
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index c44456b..ff03353 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1124,7 +1124,7 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags,
+ 		((pc & SOFTIRQ_MASK) ? TRACE_FLAG_SOFTIRQ : 0) |
+ 		(need_resched() ? TRACE_FLAG_NEED_RESCHED : 0);
+ 
+-	entry->migrate_disable	= (tsk) ? tsk->migrate_disable & 0xFF : 0;
++	entry->migrate_disable	= (tsk) ? __migrate_disabled(tsk) & 0xFF : 0;
+ }
+ EXPORT_SYMBOL_GPL(tracing_generic_entry_update);
+ 
+diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c
+index 60a7569..b5e9241 100644
+--- a/lib/smp_processor_id.c
++++ b/lib/smp_processor_id.c
+@@ -41,7 +41,7 @@ notrace unsigned int debug_smp_processor_id(void)
+ 
+ 	printk(KERN_ERR "BUG: using smp_processor_id() in preemptible [%08x %08x] "
+ 	       "code: %s/%d\n", preempt_count() - 1,
+-	       current->migrate_disable, current->comm, current->pid);
++	       __migrate_disabled(current), current->comm, current->pid);
+ 	print_symbol("caller is %s\n", (long)__builtin_return_address(0));
+ 	dump_stack();
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0170-sched-rt-Fix-migrate_enable-thinko.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0170-sched-rt-Fix-migrate_enable-thinko.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0170-sched-rt-Fix-migrate_enable-thinko.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0170-sched-rt-Fix-migrate_enable-thinko.patch)
@@ -0,0 +1,70 @@
+From 297491403bea41d839f74da0eca4da852ed52c4e Mon Sep 17 00:00:00 2001
+From: Mike Galbraith <efault at gmx.de>
+Date: Tue, 23 Aug 2011 16:12:43 +0200
+Subject: [PATCH 170/271] sched, rt: Fix migrate_enable() thinko
+
+Assigning mask = tsk_cpus_allowed(p) after p->migrate_disable = 0 ensures
+that we won't see a mask change.. no push/pull, we stack tasks on one CPU.
+
+Also add a couple fields to sched_debug for the next guy.
+
+[ Build fix from Stratos Psomadakis <psomas at gentoo.org> ]
+
+Signed-off-by: Mike Galbraith <efault at gmx.de>
+Cc: Paul E. McKenney <paulmck at us.ibm.com>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Link: http://lkml.kernel.org/r/1314108763.6689.4.camel@marge.simson.net
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c       |    4 +++-
+ kernel/sched_debug.c |    7 +++++++
+ 2 files changed, 10 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 46c7c53..dd735c8 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -6375,12 +6375,14 @@ void migrate_enable(void)
+ 	 */
+ 	rq = this_rq();
+ 	raw_spin_lock_irqsave(&rq->lock, flags);
+-	p->migrate_disable = 0;
+ 	mask = tsk_cpus_allowed(p);
++	p->migrate_disable = 0;
+ 
+ 	WARN_ON(!cpumask_test_cpu(smp_processor_id(), mask));
+ 
+ 	if (!cpumask_equal(&p->cpus_allowed, mask)) {
++		/* Get the mask now that migration is enabled */
++		mask = tsk_cpus_allowed(p);
+ 		if (p->sched_class->set_cpus_allowed)
+ 			p->sched_class->set_cpus_allowed(p, mask);
+ 		p->rt.nr_cpus_allowed = cpumask_weight(mask);
+diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c
+index a6710a1..528032b 100644
+--- a/kernel/sched_debug.c
++++ b/kernel/sched_debug.c
+@@ -235,6 +235,9 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
+ 	P(rt_throttled);
+ 	PN(rt_time);
+ 	PN(rt_runtime);
++#ifdef CONFIG_SMP
++	P(rt_nr_migratory);
++#endif
+ 
+ #undef PN
+ #undef P
+@@ -484,6 +487,10 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
+ 	P(se.load.weight);
+ 	P(policy);
+ 	P(prio);
++#ifdef CONFIG_PREEMPT_RT_FULL
++	P(migrate_disable);
++#endif
++	P(rt.nr_cpus_allowed);
+ #undef PN
+ #undef __PN
+ #undef P
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0171-sched-teach-migrate_disable-about-atomic-contexts.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0171-sched-teach-migrate_disable-about-atomic-contexts.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0171-sched-teach-migrate_disable-about-atomic-contexts.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0171-sched-teach-migrate_disable-about-atomic-contexts.patch)
@@ -0,0 +1,92 @@
+From d31c3d5a98e8088fd2358eb8f1f429733254b1f7 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Fri, 2 Sep 2011 14:29:27 +0200
+Subject: [PATCH 171/271] sched: teach migrate_disable about atomic contexts
+
+ <NMI>  [<ffffffff812dafd8>] spin_bug+0x94/0xa8
+ [<ffffffff812db07f>] do_raw_spin_lock+0x43/0xea
+ [<ffffffff814fa9be>] _raw_spin_lock_irqsave+0x6b/0x85
+ [<ffffffff8106ff9e>] ? migrate_disable+0x75/0x12d
+ [<ffffffff81078aaf>] ? pin_current_cpu+0x36/0xb0
+ [<ffffffff8106ff9e>] migrate_disable+0x75/0x12d
+ [<ffffffff81115b9d>] pagefault_disable+0xe/0x1f
+ [<ffffffff81047027>] copy_from_user_nmi+0x74/0xe6
+ [<ffffffff810489d7>] perf_callchain_user+0xf3/0x135
+
+Now clearly we can't go around taking locks from NMI context, cure
+this by short-circuiting migrate_disable() when we're in an atomic
+context already.
+
+Add some extra debugging to avoid things like:
+
+  preempt_disable()
+  migrate_disable();
+
+  preempt_enable();
+  migrate_enable();
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/r/1314967297.1301.14.camel@twins
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Link: http://lkml.kernel.org/n/tip-wbot4vsmwhi8vmbf83hsclk6@git.kernel.org
+---
+ include/linux/sched.h |    3 +++
+ kernel/sched.c        |   21 +++++++++++++++++++++
+ 2 files changed, 24 insertions(+)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 32e9e3f..af6cb0c 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1261,6 +1261,9 @@ struct task_struct {
+ 	unsigned int policy;
+ #ifdef CONFIG_PREEMPT_RT_FULL
+ 	int migrate_disable;
++#ifdef CONFIG_SCHED_DEBUG
++	int migrate_disable_atomic;
++#endif
+ #endif
+ 	cpumask_t cpus_allowed;
+ 
+diff --git a/kernel/sched.c b/kernel/sched.c
+index dd735c8..89f873e 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -6306,6 +6306,17 @@ void migrate_disable(void)
+ 	unsigned long flags;
+ 	struct rq *rq;
+ 
++	if (in_atomic()) {
++#ifdef CONFIG_SCHED_DEBUG
++		p->migrate_disable_atomic++;
++#endif
++		return;
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	WARN_ON_ONCE(p->migrate_disable_atomic);
++#endif
++
+ 	preempt_disable();
+ 	if (p->migrate_disable) {
+ 		p->migrate_disable++;
+@@ -6354,6 +6365,16 @@ void migrate_enable(void)
+ 	unsigned long flags;
+ 	struct rq *rq;
+ 
++	if (in_atomic()) {
++#ifdef CONFIG_SCHED_DEBUG
++		p->migrate_disable_atomic--;
++#endif
++		return;
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	WARN_ON_ONCE(p->migrate_disable_atomic);
++#endif
+ 	WARN_ON_ONCE(p->migrate_disable <= 0);
+ 
+ 	preempt_disable();
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0172-sched-Postpone-actual-migration-disalbe-to-schedule.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0172-sched-Postpone-actual-migration-disalbe-to-schedule.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0172-sched-Postpone-actual-migration-disalbe-to-schedule.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0172-sched-Postpone-actual-migration-disalbe-to-schedule.patch)
@@ -0,0 +1,310 @@
+From c80f7ab4b80e89dec33c4b002fdd3d50e7f1e6a9 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Tue, 27 Sep 2011 08:40:23 -0400
+Subject: [PATCH 172/271] sched: Postpone actual migration disalbe to schedule
+
+The migrate_disable() can cause a bit of overhead to the RT kernel,
+as changing the affinity is expensive to do at every lock encountered.
+As a running task can not migrate, the actual disabling of migration
+does not need to occur until the task is about to schedule out.
+
+In most cases, a task that disables migration will enable it before
+it schedules, making this change improve performance tremendously.
+
+[ Frank Rowand: UP compile fix ]
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <williams at redhat.com>
+Link: http://lkml.kernel.org/r/20110927124422.779693167@goodmis.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c |  251 +++++++++++++++++++++++++++++---------------------------
+ 1 file changed, 132 insertions(+), 119 deletions(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 89f873e..9bf8918 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4356,6 +4356,135 @@ static inline void schedule_debug(struct task_struct *prev)
+ 	schedstat_inc(this_rq(), sched_count);
+ }
+ 
++#if defined(CONFIG_PREEMPT_RT_FULL) && defined(CONFIG_SMP)
++#define MIGRATE_DISABLE_SET_AFFIN	(1<<30) /* Can't make a negative */
++#define migrate_disabled_updated(p)	((p)->migrate_disable & MIGRATE_DISABLE_SET_AFFIN)
++#define migrate_disable_count(p)	((p)->migrate_disable & ~MIGRATE_DISABLE_SET_AFFIN)
++
++static inline void update_migrate_disable(struct task_struct *p)
++{
++	const struct cpumask *mask;
++
++	if (likely(!p->migrate_disable))
++		return;
++
++	/* Did we already update affinity? */
++	if (unlikely(migrate_disabled_updated(p)))
++		return;
++
++	/*
++	 * Since this is always current we can get away with only locking
++	 * rq->lock, the ->cpus_allowed value can normally only be changed
++	 * while holding both p->pi_lock and rq->lock, but seeing that this
++	 * is current, we cannot actually be waking up, so all code that
++	 * relies on serialization against p->pi_lock is out of scope.
++	 *
++	 * Having rq->lock serializes us against things like
++	 * set_cpus_allowed_ptr() that can still happen concurrently.
++	 */
++	mask = tsk_cpus_allowed(p);
++
++	WARN_ON(!cpumask_test_cpu(smp_processor_id(), mask));
++
++	if (!cpumask_equal(&p->cpus_allowed, mask)) {
++		if (p->sched_class->set_cpus_allowed)
++			p->sched_class->set_cpus_allowed(p, mask);
++		p->rt.nr_cpus_allowed = cpumask_weight(mask);
++
++		/* Let migrate_enable know to fix things back up */
++		p->migrate_disable |= MIGRATE_DISABLE_SET_AFFIN;
++	}
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++
++	if (in_atomic()) {
++#ifdef CONFIG_SCHED_DEBUG
++		p->migrate_disable_atomic++;
++#endif
++		return;
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	WARN_ON_ONCE(p->migrate_disable_atomic);
++#endif
++
++	preempt_disable();
++	if (p->migrate_disable) {
++		p->migrate_disable++;
++		preempt_enable();
++		return;
++	}
++
++	pin_current_cpu();
++	p->migrate_disable = 1;
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++	const struct cpumask *mask;
++	unsigned long flags;
++	struct rq *rq;
++
++	if (in_atomic()) {
++#ifdef CONFIG_SCHED_DEBUG
++		p->migrate_disable_atomic--;
++#endif
++		return;
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	WARN_ON_ONCE(p->migrate_disable_atomic);
++#endif
++	WARN_ON_ONCE(p->migrate_disable <= 0);
++
++	preempt_disable();
++	if (migrate_disable_count(p) > 1) {
++		p->migrate_disable--;
++		preempt_enable();
++		return;
++	}
++
++	if (unlikely(migrate_disabled_updated(p))) {
++		/*
++		 * See comment in update_migrate_disable() about locking.
++		 */
++		rq = this_rq();
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		mask = tsk_cpus_allowed(p);
++		/*
++		 * Clearing migrate_disable causes tsk_cpus_allowed to
++		 * show the tasks original cpu affinity.
++		 */
++		p->migrate_disable = 0;
++
++		WARN_ON(!cpumask_test_cpu(smp_processor_id(), mask));
++
++		if (unlikely(!cpumask_equal(&p->cpus_allowed, mask))) {
++			/* Get the mask now that migration is enabled */
++			mask = tsk_cpus_allowed(p);
++			if (p->sched_class->set_cpus_allowed)
++				p->sched_class->set_cpus_allowed(p, mask);
++			p->rt.nr_cpus_allowed = cpumask_weight(mask);
++		}
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	} else
++		p->migrate_disable = 0;
++
++	unpin_current_cpu();
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
++#else
++static inline void update_migrate_disable(struct task_struct *p) { }
++#define migrate_disabled_updated(p)		0
++#endif
++
+ static void put_prev_task(struct rq *rq, struct task_struct *prev)
+ {
+ 	if (prev->on_rq || rq->skip_clock_update < 0)
+@@ -4415,6 +4544,8 @@ need_resched:
+ 
+ 	raw_spin_lock_irq(&rq->lock);
+ 
++	update_migrate_disable(prev);
++
+ 	switch_count = &prev->nivcsw;
+ 	if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
+ 		if (unlikely(signal_pending_state(prev->state, prev))) {
+@@ -6223,7 +6354,7 @@ static inline void sched_init_granularity(void)
+ #ifdef CONFIG_SMP
+ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+ {
+-	if (!__migrate_disabled(p)) {
++	if (!migrate_disabled_updated(p)) {
+ 		if (p->sched_class && p->sched_class->set_cpus_allowed)
+ 			p->sched_class->set_cpus_allowed(p, new_mask);
+ 		p->rt.nr_cpus_allowed = cpumask_weight(new_mask);
+@@ -6298,124 +6429,6 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
+ 
+-#ifdef CONFIG_PREEMPT_RT_FULL
+-void migrate_disable(void)
+-{
+-	struct task_struct *p = current;
+-	const struct cpumask *mask;
+-	unsigned long flags;
+-	struct rq *rq;
+-
+-	if (in_atomic()) {
+-#ifdef CONFIG_SCHED_DEBUG
+-		p->migrate_disable_atomic++;
+-#endif
+-		return;
+-	}
+-
+-#ifdef CONFIG_SCHED_DEBUG
+-	WARN_ON_ONCE(p->migrate_disable_atomic);
+-#endif
+-
+-	preempt_disable();
+-	if (p->migrate_disable) {
+-		p->migrate_disable++;
+-		preempt_enable();
+-		return;
+-	}
+-
+-	pin_current_cpu();
+-	if (unlikely(!scheduler_running)) {
+-		p->migrate_disable = 1;
+-		preempt_enable();
+-		return;
+-	}
+-
+-	/*
+-	 * Since this is always current we can get away with only locking
+-	 * rq->lock, the ->cpus_allowed value can normally only be changed
+-	 * while holding both p->pi_lock and rq->lock, but seeing that this
+-	 * it current, we cannot actually be waking up, so all code that
+-	 * relies on serialization against p->pi_lock is out of scope.
+-	 *
+-	 * Taking rq->lock serializes us against things like
+-	 * set_cpus_allowed_ptr() that can still happen concurrently.
+-	 */
+-	rq = this_rq();
+-	raw_spin_lock_irqsave(&rq->lock, flags);
+-	p->migrate_disable = 1;
+-	mask = tsk_cpus_allowed(p);
+-
+-	WARN_ON(!cpumask_test_cpu(smp_processor_id(), mask));
+-
+-	if (!cpumask_equal(&p->cpus_allowed, mask)) {
+-		if (p->sched_class->set_cpus_allowed)
+-			p->sched_class->set_cpus_allowed(p, mask);
+-		p->rt.nr_cpus_allowed = cpumask_weight(mask);
+-	}
+-	raw_spin_unlock_irqrestore(&rq->lock, flags);
+-	preempt_enable();
+-}
+-EXPORT_SYMBOL_GPL(migrate_disable);
+-
+-void migrate_enable(void)
+-{
+-	struct task_struct *p = current;
+-	const struct cpumask *mask;
+-	unsigned long flags;
+-	struct rq *rq;
+-
+-	if (in_atomic()) {
+-#ifdef CONFIG_SCHED_DEBUG
+-		p->migrate_disable_atomic--;
+-#endif
+-		return;
+-	}
+-
+-#ifdef CONFIG_SCHED_DEBUG
+-	WARN_ON_ONCE(p->migrate_disable_atomic);
+-#endif
+-	WARN_ON_ONCE(p->migrate_disable <= 0);
+-
+-	preempt_disable();
+-	if (p->migrate_disable > 1) {
+-		p->migrate_disable--;
+-		preempt_enable();
+-		return;
+-	}
+-
+-	if (unlikely(!scheduler_running)) {
+-		p->migrate_disable = 0;
+-		unpin_current_cpu();
+-		preempt_enable();
+-		return;
+-	}
+-
+-	/*
+-	 * See comment in migrate_disable().
+-	 */
+-	rq = this_rq();
+-	raw_spin_lock_irqsave(&rq->lock, flags);
+-	mask = tsk_cpus_allowed(p);
+-	p->migrate_disable = 0;
+-
+-	WARN_ON(!cpumask_test_cpu(smp_processor_id(), mask));
+-
+-	if (!cpumask_equal(&p->cpus_allowed, mask)) {
+-		/* Get the mask now that migration is enabled */
+-		mask = tsk_cpus_allowed(p);
+-		if (p->sched_class->set_cpus_allowed)
+-			p->sched_class->set_cpus_allowed(p, mask);
+-		p->rt.nr_cpus_allowed = cpumask_weight(mask);
+-	}
+-
+-	raw_spin_unlock_irqrestore(&rq->lock, flags);
+-	unpin_current_cpu();
+-	preempt_enable();
+-}
+-EXPORT_SYMBOL_GPL(migrate_enable);
+-#endif /* CONFIG_PREEMPT_RT_FULL */
+-
+ /*
+  * Move (not current) task off this cpu, onto dest cpu. We're doing
+  * this because either it can't run here any more (set_cpus_allowed()
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0173-sched-Do-not-compare-cpu-masks-in-scheduler.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0173-sched-Do-not-compare-cpu-masks-in-scheduler.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0173-sched-Do-not-compare-cpu-masks-in-scheduler.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0173-sched-Do-not-compare-cpu-masks-in-scheduler.patch)
@@ -0,0 +1,43 @@
+From f29ca45cda1d2087a3bf0059dc13e67cb3ab234b Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Tue, 27 Sep 2011 08:40:24 -0400
+Subject: [PATCH 173/271] sched: Do not compare cpu masks in scheduler
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <williams at redhat.com>
+Link: http://lkml.kernel.org/r/20110927124423.128129033@goodmis.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c |   14 +++++---------
+ 1 file changed, 5 insertions(+), 9 deletions(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 9bf8918..f856bca 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4384,16 +4384,12 @@ static inline void update_migrate_disable(struct task_struct *p)
+ 	 */
+ 	mask = tsk_cpus_allowed(p);
+ 
+-	WARN_ON(!cpumask_test_cpu(smp_processor_id(), mask));
++	if (p->sched_class->set_cpus_allowed)
++		p->sched_class->set_cpus_allowed(p, mask);
++	p->rt.nr_cpus_allowed = cpumask_weight(mask);
+ 
+-	if (!cpumask_equal(&p->cpus_allowed, mask)) {
+-		if (p->sched_class->set_cpus_allowed)
+-			p->sched_class->set_cpus_allowed(p, mask);
+-		p->rt.nr_cpus_allowed = cpumask_weight(mask);
+-
+-		/* Let migrate_enable know to fix things back up */
+-		p->migrate_disable |= MIGRATE_DISABLE_SET_AFFIN;
+-	}
++	/* Let migrate_enable know to fix things back up */
++	p->migrate_disable |= MIGRATE_DISABLE_SET_AFFIN;
+ }
+ 
+ void migrate_disable(void)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0174-sched-Have-migrate_disable-ignore-bounded-threads.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0174-sched-Have-migrate_disable-ignore-bounded-threads.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0174-sched-Have-migrate_disable-ignore-bounded-threads.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0174-sched-Have-migrate_disable-ignore-bounded-threads.patch)
@@ -0,0 +1,73 @@
+From ebd7646db8a7d39f2eb8182b769bd38ebaf2787c Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Tue, 27 Sep 2011 08:40:25 -0400
+Subject: [PATCH 174/271] sched: Have migrate_disable ignore bounded threads
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <williams at redhat.com>
+Link: http://lkml.kernel.org/r/20110927124423.567944215@goodmis.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c |   23 +++++++++--------------
+ 1 file changed, 9 insertions(+), 14 deletions(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index f856bca..c687aec 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4396,7 +4396,7 @@ void migrate_disable(void)
+ {
+ 	struct task_struct *p = current;
+ 
+-	if (in_atomic()) {
++	if (in_atomic() || p->flags & PF_THREAD_BOUND) {
+ #ifdef CONFIG_SCHED_DEBUG
+ 		p->migrate_disable_atomic++;
+ #endif
+@@ -4427,7 +4427,7 @@ void migrate_enable(void)
+ 	unsigned long flags;
+ 	struct rq *rq;
+ 
+-	if (in_atomic()) {
++	if (in_atomic() || p->flags & PF_THREAD_BOUND) {
+ #ifdef CONFIG_SCHED_DEBUG
+ 		p->migrate_disable_atomic--;
+ #endif
+@@ -4448,26 +4448,21 @@ void migrate_enable(void)
+ 
+ 	if (unlikely(migrate_disabled_updated(p))) {
+ 		/*
+-		 * See comment in update_migrate_disable() about locking.
++		 * Undo whatever update_migrate_disable() did, also see there
++		 * about locking.
+ 		 */
+ 		rq = this_rq();
+ 		raw_spin_lock_irqsave(&rq->lock, flags);
+-		mask = tsk_cpus_allowed(p);
++
+ 		/*
+ 		 * Clearing migrate_disable causes tsk_cpus_allowed to
+ 		 * show the tasks original cpu affinity.
+ 		 */
+ 		p->migrate_disable = 0;
+-
+-		WARN_ON(!cpumask_test_cpu(smp_processor_id(), mask));
+-
+-		if (unlikely(!cpumask_equal(&p->cpus_allowed, mask))) {
+-			/* Get the mask now that migration is enabled */
+-			mask = tsk_cpus_allowed(p);
+-			if (p->sched_class->set_cpus_allowed)
+-				p->sched_class->set_cpus_allowed(p, mask);
+-			p->rt.nr_cpus_allowed = cpumask_weight(mask);
+-		}
++		mask = tsk_cpus_allowed(p);
++		if (p->sched_class->set_cpus_allowed)
++			p->sched_class->set_cpus_allowed(p, mask);
++		p->rt.nr_cpus_allowed = cpumask_weight(mask);
+ 		raw_spin_unlock_irqrestore(&rq->lock, flags);
+ 	} else
+ 		p->migrate_disable = 0;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0175-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0175-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0175-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0175-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch)
@@ -0,0 +1,31 @@
+From 0def821d2d9294bb01ee69ddd49505dc0d02ef15 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 4 Nov 2011 20:48:36 +0100
+Subject: [PATCH 175/271] sched-clear-pf-thread-bound-on-fallback-rq.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c |    7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index c687aec..316205e 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -2570,7 +2570,12 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
+ 		printk(KERN_INFO "process %d (%s) no longer affine to cpu%d\n",
+ 				task_pid_nr(p), p->comm, cpu);
+ 	}
+-
++	/*
++	 * Clear PF_THREAD_BOUND, otherwise we wreckage
++	 * migrate_disable/enable. See optimization for
++	 * PF_THREAD_BOUND tasks there.
++	 */
++	p->flags &= ~PF_THREAD_BOUND;
+ 	return dest_cpu;
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0176-ftrace-crap.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0176-ftrace-crap.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0176-ftrace-crap.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0176-ftrace-crap.patch.patch)
@@ -0,0 +1,96 @@
+From 65ce1893740ebae587772e8702b4f0722c5e998b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 9 Sep 2011 16:55:53 +0200
+Subject: [PATCH 176/271] ftrace-crap.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/trace/trace.c |   26 ++++++++++++++++++++++++--
+ kernel/trace/trace.h |    1 -
+ 2 files changed, 24 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index ff03353..f4de7ab 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -359,11 +359,13 @@ static DECLARE_DELAYED_WORK(wakeup_work, wakeup_work_handler);
+  */
+ void trace_wake_up(void)
+ {
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 	const unsigned long delay = msecs_to_jiffies(2);
+ 
+ 	if (trace_flags & TRACE_ITER_BLOCK)
+ 		return;
+ 	schedule_delayed_work(&wakeup_work, delay);
++#endif
+ }
+ 
+ static int __init set_buf_size(char *str)
+@@ -719,6 +721,12 @@ update_max_tr_single(struct trace_array *tr, struct task_struct *tsk, int cpu)
+ }
+ #endif /* CONFIG_TRACER_MAX_TRACE */
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
++static void default_wait_pipe(struct trace_iterator *iter);
++#else
++#define default_wait_pipe	poll_wait_pipe
++#endif
++
+ /**
+  * register_tracer - register a tracer with the ftrace system.
+  * @type - the plugin for the tracer
+@@ -3196,6 +3204,7 @@ static int tracing_release_pipe(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ static unsigned int
+ tracing_poll_pipe(struct file *filp, poll_table *poll_table)
+ {
+@@ -3217,8 +3226,7 @@ tracing_poll_pipe(struct file *filp, poll_table *poll_table)
+ 	}
+ }
+ 
+-
+-void default_wait_pipe(struct trace_iterator *iter)
++static void default_wait_pipe(struct trace_iterator *iter)
+ {
+ 	DEFINE_WAIT(wait);
+ 
+@@ -3229,6 +3237,20 @@ void default_wait_pipe(struct trace_iterator *iter)
+ 
+ 	finish_wait(&trace_wait, &wait);
+ }
++#else
++static unsigned int
++tracing_poll_pipe(struct file *filp, poll_table *poll_table)
++{
++	struct trace_iterator *iter = filp->private_data;
++
++	if ((trace_flags & TRACE_ITER_BLOCK) || !trace_empty(iter))
++		return POLLIN | POLLRDNORM;
++	poll_wait_pipe(iter);
++	if (!trace_empty(iter))
++		return POLLIN | POLLRDNORM;
++	return 0;
++}
++#endif
+ 
+ /*
+  * This is a make-shift waitqueue.
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 092e1f8..69b8700 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -345,7 +345,6 @@ void trace_init_global_iter(struct trace_iterator *iter);
+ 
+ void tracing_iter_reset(struct trace_iterator *iter, int cpu);
+ 
+-void default_wait_pipe(struct trace_iterator *iter);
+ void poll_wait_pipe(struct trace_iterator *iter);
+ 
+ void ftrace(struct trace_array *tr,
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0177-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0177-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0177-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0177-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch)
@@ -0,0 +1,437 @@
+From 897123206e1dc0410e5054fbf88c373531143843 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Tue, 27 Sep 2011 13:56:50 -0400
+Subject: [PATCH 177/271] ring-buffer: Convert reader_lock from raw_spin_lock
+ into spin_lock
+
+The reader_lock is mostly taken in normal context with interrupts enabled.
+But because ftrace_dump() can happen anywhere, it is used as a spin lock
+and in some cases a check to in_nmi() is performed to determine if the
+ftrace_dump() was initiated from an NMI and if it is, the lock is not taken.
+
+But having the lock as a raw_spin_lock() causes issues with the real-time
+kernel as the lock is held during allocation and freeing of the buffer.
+As memory locks convert into mutexes, keeping the reader_lock as a spin_lock
+causes problems.
+
+Converting the reader_lock is not straightforward as we must still deal
+with the ftrace_dump() happening not only from an NMI but also from
+true interrupt context in PREEMPT_RT.
+
+Two wrapper functions are created to take and release the reader lock:
+
+  int read_buffer_lock(cpu_buffer, unsigned long *flags)
+  void read_buffer_unlock(cpu_buffer, unsigned long flags, int locked)
+
+The read_buffer_lock() returns 1 if it actually took the lock, disables
+interrupts and updates the flags. The only time it returns 0 is in the
+case of a ftrace_dump() happening in an unsafe context.
+
+The read_buffer_unlock() checks the return of locked and will simply
+unlock the spin lock if it was successfully taken.
+
+Instead of just having this in specific cases that the NMI might call
+into, all instances of the reader_lock are converted to the wrapper
+functions to make this a bit simpler to read and less error prone.
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <clark at redhat.com>
+Link: http://lkml.kernel.org/r/1317146210.26514.33.camel@gandalf.stny.rr.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/trace/ring_buffer.c |  151 ++++++++++++++++++++++++--------------------
+ 1 file changed, 81 insertions(+), 70 deletions(-)
+
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index f5b7b5c..354017f 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -478,7 +478,7 @@ struct ring_buffer_per_cpu {
+ 	int				cpu;
+ 	atomic_t			record_disabled;
+ 	struct ring_buffer		*buffer;
+-	raw_spinlock_t			reader_lock;	/* serialize readers */
++	spinlock_t			reader_lock;	/* serialize readers */
+ 	arch_spinlock_t			lock;
+ 	struct lock_class_key		lock_key;
+ 	struct list_head		*pages;
+@@ -1049,6 +1049,44 @@ static int rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
+ 	return -ENOMEM;
+ }
+ 
++static inline int ok_to_lock(void)
++{
++	if (in_nmi())
++		return 0;
++#ifdef CONFIG_PREEMPT_RT_FULL
++	if (in_atomic())
++		return 0;
++#endif
++	return 1;
++}
++
++static int
++read_buffer_lock(struct ring_buffer_per_cpu *cpu_buffer,
++		 unsigned long *flags)
++{
++	/*
++	 * If an NMI die dumps out the content of the ring buffer
++	 * do not grab locks. We also permanently disable the ring
++	 * buffer too. A one time deal is all you get from reading
++	 * the ring buffer from an NMI.
++	 */
++	if (!ok_to_lock()) {
++		if (spin_trylock_irqsave(&cpu_buffer->reader_lock, *flags))
++			return 1;
++		tracing_off_permanent();
++		return 0;
++	}
++	spin_lock_irqsave(&cpu_buffer->reader_lock, *flags);
++	return 1;
++}
++
++static void
++read_buffer_unlock(struct ring_buffer_per_cpu *cpu_buffer,
++		   unsigned long flags, int locked)
++{
++	if (locked)
++		spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++}
+ static struct ring_buffer_per_cpu *
+ rb_allocate_cpu_buffer(struct ring_buffer *buffer, int cpu)
+ {
+@@ -1064,7 +1102,7 @@ rb_allocate_cpu_buffer(struct ring_buffer *buffer, int cpu)
+ 
+ 	cpu_buffer->cpu = cpu;
+ 	cpu_buffer->buffer = buffer;
+-	raw_spin_lock_init(&cpu_buffer->reader_lock);
++	spin_lock_init(&cpu_buffer->reader_lock);
+ 	lockdep_set_class(&cpu_buffer->reader_lock, buffer->reader_lock_key);
+ 	cpu_buffer->lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
+ 
+@@ -1259,9 +1297,11 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned nr_pages)
+ {
+ 	struct buffer_page *bpage;
+ 	struct list_head *p;
++	unsigned long flags;
+ 	unsigned i;
++	int locked;
+ 
+-	raw_spin_lock_irq(&cpu_buffer->reader_lock);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 	rb_head_page_deactivate(cpu_buffer);
+ 
+ 	for (i = 0; i < nr_pages; i++) {
+@@ -1279,7 +1319,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned nr_pages)
+ 	rb_check_pages(cpu_buffer);
+ 
+ out:
+-	raw_spin_unlock_irq(&cpu_buffer->reader_lock);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ }
+ 
+ static void
+@@ -1288,9 +1328,11 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer,
+ {
+ 	struct buffer_page *bpage;
+ 	struct list_head *p;
++	unsigned long flags;
+ 	unsigned i;
++	int locked;
+ 
+-	raw_spin_lock_irq(&cpu_buffer->reader_lock);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 	rb_head_page_deactivate(cpu_buffer);
+ 
+ 	for (i = 0; i < nr_pages; i++) {
+@@ -1305,7 +1347,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer,
+ 	rb_check_pages(cpu_buffer);
+ 
+ out:
+-	raw_spin_unlock_irq(&cpu_buffer->reader_lock);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ }
+ 
+ /**
+@@ -2689,7 +2731,7 @@ unsigned long ring_buffer_oldest_event_ts(struct ring_buffer *buffer, int cpu)
+ 		return 0;
+ 
+ 	cpu_buffer = buffer->buffers[cpu];
+-	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++	spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ 	/*
+ 	 * if the tail is on reader_page, oldest time stamp is on the reader
+ 	 * page
+@@ -2699,7 +2741,7 @@ unsigned long ring_buffer_oldest_event_ts(struct ring_buffer *buffer, int cpu)
+ 	else
+ 		bpage = rb_set_head_page(cpu_buffer);
+ 	ret = bpage->page->time_stamp;
+-	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++	spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ 
+ 	return ret;
+ }
+@@ -2863,15 +2905,16 @@ void ring_buffer_iter_reset(struct ring_buffer_iter *iter)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	unsigned long flags;
++	int locked;
+ 
+ 	if (!iter)
+ 		return;
+ 
+ 	cpu_buffer = iter->cpu_buffer;
+ 
+-	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 	rb_iter_reset(iter);
+-	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_iter_reset);
+ 
+@@ -3289,21 +3332,6 @@ rb_iter_peek(struct ring_buffer_iter *iter, u64 *ts)
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_iter_peek);
+ 
+-static inline int rb_ok_to_lock(void)
+-{
+-	/*
+-	 * If an NMI die dumps out the content of the ring buffer
+-	 * do not grab locks. We also permanently disable the ring
+-	 * buffer too. A one time deal is all you get from reading
+-	 * the ring buffer from an NMI.
+-	 */
+-	if (likely(!in_nmi()))
+-		return 1;
+-
+-	tracing_off_permanent();
+-	return 0;
+-}
+-
+ /**
+  * ring_buffer_peek - peek at the next event to be read
+  * @buffer: The ring buffer to read
+@@ -3321,22 +3349,17 @@ ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts,
+ 	struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
+ 	struct ring_buffer_event *event;
+ 	unsigned long flags;
+-	int dolock;
++	int locked;
+ 
+ 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ 		return NULL;
+ 
+-	dolock = rb_ok_to_lock();
+  again:
+-	local_irq_save(flags);
+-	if (dolock)
+-		raw_spin_lock(&cpu_buffer->reader_lock);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 	event = rb_buffer_peek(cpu_buffer, ts, lost_events);
+ 	if (event && event->type_len == RINGBUF_TYPE_PADDING)
+ 		rb_advance_reader(cpu_buffer);
+-	if (dolock)
+-		raw_spin_unlock(&cpu_buffer->reader_lock);
+-	local_irq_restore(flags);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ 
+ 	if (event && event->type_len == RINGBUF_TYPE_PADDING)
+ 		goto again;
+@@ -3358,11 +3381,12 @@ ring_buffer_iter_peek(struct ring_buffer_iter *iter, u64 *ts)
+ 	struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;
+ 	struct ring_buffer_event *event;
+ 	unsigned long flags;
++	int locked;
+ 
+  again:
+-	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 	event = rb_iter_peek(iter, ts);
+-	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ 
+ 	if (event && event->type_len == RINGBUF_TYPE_PADDING)
+ 		goto again;
+@@ -3388,9 +3412,7 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts,
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	struct ring_buffer_event *event = NULL;
+ 	unsigned long flags;
+-	int dolock;
+-
+-	dolock = rb_ok_to_lock();
++	int locked;
+ 
+  again:
+ 	/* might be called in atomic */
+@@ -3400,9 +3422,7 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts,
+ 		goto out;
+ 
+ 	cpu_buffer = buffer->buffers[cpu];
+-	local_irq_save(flags);
+-	if (dolock)
+-		raw_spin_lock(&cpu_buffer->reader_lock);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 
+ 	event = rb_buffer_peek(cpu_buffer, ts, lost_events);
+ 	if (event) {
+@@ -3410,9 +3430,8 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts,
+ 		rb_advance_reader(cpu_buffer);
+ 	}
+ 
+-	if (dolock)
+-		raw_spin_unlock(&cpu_buffer->reader_lock);
+-	local_irq_restore(flags);
++	read_buffer_unlock(cpu_buffer, flags, locked);
++
+ 
+  out:
+ 	preempt_enable();
+@@ -3497,17 +3516,18 @@ ring_buffer_read_start(struct ring_buffer_iter *iter)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	unsigned long flags;
++	int locked;
+ 
+ 	if (!iter)
+ 		return;
+ 
+ 	cpu_buffer = iter->cpu_buffer;
+ 
+-	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 	arch_spin_lock(&cpu_buffer->lock);
+ 	rb_iter_reset(iter);
+ 	arch_spin_unlock(&cpu_buffer->lock);
+-	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_read_start);
+ 
+@@ -3541,8 +3561,9 @@ ring_buffer_read(struct ring_buffer_iter *iter, u64 *ts)
+ 	struct ring_buffer_event *event;
+ 	struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;
+ 	unsigned long flags;
++	int locked;
+ 
+-	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+  again:
+ 	event = rb_iter_peek(iter, ts);
+ 	if (!event)
+@@ -3553,7 +3574,7 @@ ring_buffer_read(struct ring_buffer_iter *iter, u64 *ts)
+ 
+ 	rb_advance_iter(iter);
+  out:
+-	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ 
+ 	return event;
+ }
+@@ -3618,13 +3639,14 @@ void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
+ 	unsigned long flags;
++	int locked;
+ 
+ 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ 		return;
+ 
+ 	atomic_inc(&cpu_buffer->record_disabled);
+ 
+-	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 
+ 	if (RB_WARN_ON(cpu_buffer, local_read(&cpu_buffer->committing)))
+ 		goto out;
+@@ -3636,7 +3658,7 @@ void ring_buffer_reset_cpu(struct ring_buffer *buffer, int cpu)
+ 	arch_spin_unlock(&cpu_buffer->lock);
+ 
+  out:
+-	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ 
+ 	atomic_dec(&cpu_buffer->record_disabled);
+ }
+@@ -3663,22 +3685,16 @@ int ring_buffer_empty(struct ring_buffer *buffer)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	unsigned long flags;
+-	int dolock;
++	int locked;
+ 	int cpu;
+ 	int ret;
+ 
+-	dolock = rb_ok_to_lock();
+-
+ 	/* yes this is racy, but if you don't like the race, lock the buffer */
+ 	for_each_buffer_cpu(buffer, cpu) {
+ 		cpu_buffer = buffer->buffers[cpu];
+-		local_irq_save(flags);
+-		if (dolock)
+-			raw_spin_lock(&cpu_buffer->reader_lock);
++		locked = read_buffer_lock(cpu_buffer, &flags);
+ 		ret = rb_per_cpu_empty(cpu_buffer);
+-		if (dolock)
+-			raw_spin_unlock(&cpu_buffer->reader_lock);
+-		local_irq_restore(flags);
++		read_buffer_unlock(cpu_buffer, flags, locked);
+ 
+ 		if (!ret)
+ 			return 0;
+@@ -3697,22 +3713,16 @@ int ring_buffer_empty_cpu(struct ring_buffer *buffer, int cpu)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	unsigned long flags;
+-	int dolock;
++	int locked;
+ 	int ret;
+ 
+ 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ 		return 1;
+ 
+-	dolock = rb_ok_to_lock();
+-
+ 	cpu_buffer = buffer->buffers[cpu];
+-	local_irq_save(flags);
+-	if (dolock)
+-		raw_spin_lock(&cpu_buffer->reader_lock);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 	ret = rb_per_cpu_empty(cpu_buffer);
+-	if (dolock)
+-		raw_spin_unlock(&cpu_buffer->reader_lock);
+-	local_irq_restore(flags);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ 
+ 	return ret;
+ }
+@@ -3887,6 +3897,7 @@ int ring_buffer_read_page(struct ring_buffer *buffer,
+ 	unsigned int commit;
+ 	unsigned int read;
+ 	u64 save_timestamp;
++	int locked;
+ 	int ret = -1;
+ 
+ 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+@@ -3908,7 +3919,7 @@ int ring_buffer_read_page(struct ring_buffer *buffer,
+ 	if (!bpage)
+ 		goto out;
+ 
+-	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++	locked = read_buffer_lock(cpu_buffer, &flags);
+ 
+ 	reader = rb_get_reader_page(cpu_buffer);
+ 	if (!reader)
+@@ -4032,7 +4043,7 @@ int ring_buffer_read_page(struct ring_buffer *buffer,
+ 		memset(&bpage->data[commit], 0, BUF_PAGE_SIZE - commit);
+ 
+  out_unlock:
+-	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++	read_buffer_unlock(cpu_buffer, flags, locked);
+ 
+  out:
+ 	return ret;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0178-net-netif_rx_ni-migrate-disable.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0178-net-netif_rx_ni-migrate-disable.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0178-net-netif_rx_ni-migrate-disable.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0178-net-netif_rx_ni-migrate-disable.patch.patch)
@@ -0,0 +1,31 @@
+From 7f43d9cea697574c83f4eba936cfcfbdd1054695 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 16:29:27 +0200
+Subject: [PATCH 178/271] net-netif_rx_ni-migrate-disable.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ net/core/dev.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 1297da7..2c63eea 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3028,11 +3028,11 @@ int netif_rx_ni(struct sk_buff *skb)
+ {
+ 	int err;
+ 
+-	preempt_disable();
++	migrate_disable();
+ 	err = netif_rx(skb);
+ 	if (local_softirq_pending())
+ 		thread_do_softirq();
+-	preempt_enable();
++	migrate_enable();
+ 
+ 	return err;
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0179-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0179-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0179-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0179-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch)
@@ -0,0 +1,119 @@
+From 66edd8709d647cb183338e671ddd6bb6a96008cf Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 3 Jul 2009 13:16:38 -0500
+Subject: [PATCH 179/271] softirq: Sanitize softirq pending for NOHZ/RT
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/interrupt.h |    2 ++
+ kernel/softirq.c          |   61 +++++++++++++++++++++++++++++++++++++++++++++
+ kernel/time/tick-sched.c  |    8 +-----
+ 3 files changed, 64 insertions(+), 7 deletions(-)
+
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index b9162dc..74e28d9 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -471,6 +471,8 @@ static inline void __raise_softirq_irqoff(unsigned int nr)
+ extern void raise_softirq_irqoff(unsigned int nr);
+ extern void raise_softirq(unsigned int nr);
+ 
++extern void softirq_check_pending_idle(void);
++
+ /* This is the worklist that queues up per-cpu softirq work.
+  *
+  * send_remote_sendirq() adds work to these lists, and
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index c6c5824..8332622 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -61,6 +61,67 @@ char *softirq_to_name[NR_SOFTIRQS] = {
+ 	"TASKLET", "SCHED", "HRTIMER", "RCU"
+ };
+ 
++#ifdef CONFIG_NO_HZ
++# ifdef CONFIG_PREEMPT_RT_FULL
++/*
++ * On preempt-rt a softirq might be blocked on a lock. There might be
++ * no other runnable task on this CPU because the lock owner runs on
++ * some other CPU. So we have to go into idle with the pending bit
++ * set. Therefor we need to check this otherwise we warn about false
++ * positives which confuses users and defeats the whole purpose of
++ * this test.
++ *
++ * This code is called with interrupts disabled.
++ */
++void softirq_check_pending_idle(void)
++{
++	static int rate_limit;
++	u32 warnpending = 0, pending = local_softirq_pending();
++
++	if (rate_limit >= 10)
++		return;
++
++	if (pending) {
++		struct task_struct *tsk;
++
++		tsk = __get_cpu_var(ksoftirqd);
++		/*
++		 * The wakeup code in rtmutex.c wakes up the task
++		 * _before_ it sets pi_blocked_on to NULL under
++		 * tsk->pi_lock. So we need to check for both: state
++		 * and pi_blocked_on.
++		 */
++		raw_spin_lock(&tsk->pi_lock);
++
++		if (!tsk->pi_blocked_on && !(tsk->state == TASK_RUNNING))
++			warnpending = 1;
++
++		raw_spin_unlock(&tsk->pi_lock);
++	}
++
++	if (warnpending) {
++		printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
++		       pending);
++		rate_limit++;
++	}
++}
++# else
++/*
++ * On !PREEMPT_RT we just printk rate limited:
++ */
++void softirq_check_pending_idle(void)
++{
++	static int rate_limit;
++
++	if (rate_limit < 10) {
++		printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
++		       local_softirq_pending());
++		rate_limit++;
++	}
++}
++# endif
++#endif
++
+ /*
+  * we cannot loop indefinitely here to avoid userspace starvation,
+  * but we also don't want to introduce a worst case 1/HZ latency
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index a2c2a64..a6b26d2 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -336,13 +336,7 @@ void tick_nohz_stop_sched_tick(int inidle)
+ 		goto end;
+ 
+ 	if (unlikely(local_softirq_pending() && cpu_online(cpu))) {
+-		static int ratelimit;
+-
+-		if (ratelimit < 10) {
+-			printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
+-			       (unsigned int) local_softirq_pending());
+-			ratelimit++;
+-		}
++		softirq_check_pending_idle();
+ 		goto end;
+ 	}
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0180-lockdep-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0180-lockdep-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0180-lockdep-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0180-lockdep-rt.patch.patch)
@@ -0,0 +1,64 @@
+From b7470706fd9de7db19b9dd83c10337a6b8a4f0de Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 18:51:23 +0200
+Subject: [PATCH 180/271] lockdep-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/irqflags.h |   10 +++++++---
+ kernel/lockdep.c         |    2 ++
+ 2 files changed, 9 insertions(+), 3 deletions(-)
+
+diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
+index 37b13c4..a52b35d 100644
+--- a/include/linux/irqflags.h
++++ b/include/linux/irqflags.h
+@@ -25,8 +25,6 @@
+ # define trace_softirqs_enabled(p)	((p)->softirqs_enabled)
+ # define trace_hardirq_enter()	do { current->hardirq_context++; } while (0)
+ # define trace_hardirq_exit()	do { current->hardirq_context--; } while (0)
+-# define lockdep_softirq_enter()	do { current->softirq_context++; } while (0)
+-# define lockdep_softirq_exit()	do { current->softirq_context--; } while (0)
+ # define INIT_TRACE_IRQFLAGS	.softirqs_enabled = 1,
+ #else
+ # define trace_hardirqs_on()		do { } while (0)
+@@ -39,9 +37,15 @@
+ # define trace_softirqs_enabled(p)	0
+ # define trace_hardirq_enter()		do { } while (0)
+ # define trace_hardirq_exit()		do { } while (0)
++# define INIT_TRACE_IRQFLAGS
++#endif
++
++#if defined(CONFIG_TRACE_IRQFLAGS) && !defined(CONFIG_PREEMPT_RT_FULL)
++# define lockdep_softirq_enter() do { current->softirq_context++; } while (0)
++# define lockdep_softirq_exit()	 do { current->softirq_context--; } while (0)
++#else
+ # define lockdep_softirq_enter()	do { } while (0)
+ # define lockdep_softirq_exit()		do { } while (0)
+-# define INIT_TRACE_IRQFLAGS
+ #endif
+ 
+ #if defined(CONFIG_IRQSOFF_TRACER) || \
+diff --git a/kernel/lockdep.c b/kernel/lockdep.c
+index b2e08c9..334d3c0 100644
+--- a/kernel/lockdep.c
++++ b/kernel/lockdep.c
+@@ -3494,6 +3494,7 @@ static void check_flags(unsigned long flags)
+ 		}
+ 	}
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 	/*
+ 	 * We dont accurately track softirq state in e.g.
+ 	 * hardirq contexts (such as on 4KSTACKS), so only
+@@ -3508,6 +3509,7 @@ static void check_flags(unsigned long flags)
+ 			DEBUG_LOCKS_WARN_ON(!current->softirqs_enabled);
+ 		}
+ 	}
++#endif
+ 
+ 	if (!debug_locks)
+ 		print_irqtrace_events(current);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0181-mutex-no-spin-on-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0181-mutex-no-spin-on-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0181-mutex-no-spin-on-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0181-mutex-no-spin-on-rt.patch.patch)
@@ -0,0 +1,23 @@
+From dde7419bb0acbbc9e6c2e3af68a1f6ee6f911ce7 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 21:51:45 +0200
+Subject: [PATCH 181/271] mutex-no-spin-on-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/Kconfig.locks |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
+index 5068e2a..7bd0598 100644
+--- a/kernel/Kconfig.locks
++++ b/kernel/Kconfig.locks
+@@ -199,4 +199,4 @@ config INLINE_WRITE_UNLOCK_IRQRESTORE
+ 	def_bool !DEBUG_SPINLOCK && ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
+ 
+ config MUTEX_SPIN_ON_OWNER
+-	def_bool SMP && !DEBUG_MUTEXES
++	def_bool SMP && !DEBUG_MUTEXES && !PREEMPT_RT_FULL
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0182-softirq-local-lock.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0182-softirq-local-lock.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0182-softirq-local-lock.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0182-softirq-local-lock.patch.patch)
@@ -0,0 +1,343 @@
+From 7294c7b2baee98dc63f2cbee366e46b5825e8a1d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 28 Jun 2011 15:57:18 +0200
+Subject: [PATCH 182/271] softirq-local-lock.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/hardirq.h   |   16 ++++-
+ include/linux/interrupt.h |   11 +++
+ include/linux/sched.h     |    1 +
+ init/main.c               |    1 +
+ kernel/softirq.c          |  170 ++++++++++++++++++++++++++++++++++++++++++++-
+ 5 files changed, 194 insertions(+), 5 deletions(-)
+
+diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
+index f743883..2f5d318 100644
+--- a/include/linux/hardirq.h
++++ b/include/linux/hardirq.h
+@@ -60,7 +60,11 @@
+ #define HARDIRQ_OFFSET	(1UL << HARDIRQ_SHIFT)
+ #define NMI_OFFSET	(1UL << NMI_SHIFT)
+ 
+-#define SOFTIRQ_DISABLE_OFFSET	(2 * SOFTIRQ_OFFSET)
++#ifndef CONFIG_PREEMPT_RT_FULL
++# define SOFTIRQ_DISABLE_OFFSET	(2 * SOFTIRQ_OFFSET)
++#else
++# define SOFTIRQ_DISABLE_OFFSET (0)
++#endif
+ 
+ #ifndef PREEMPT_ACTIVE
+ #define PREEMPT_ACTIVE_BITS	1
+@@ -73,10 +77,17 @@
+ #endif
+ 
+ #define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
+-#define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
+ #define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
+ 				 | NMI_MASK))
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
++# define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
++# define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
++#else
++# define softirq_count()	(0U)
++extern int in_serving_softirq(void);
++#endif
++
+ /*
+  * Are we doing bottom half or hardware interrupt processing?
+  * Are we in a softirq context? Interrupt context?
+@@ -86,7 +97,6 @@
+ #define in_irq()		(hardirq_count())
+ #define in_softirq()		(softirq_count())
+ #define in_interrupt()		(irq_count())
+-#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
+ 
+ /*
+  * Are we in NMI context?
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index 74e28d9..20d8dcc 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -458,7 +458,12 @@ struct softirq_action
+ 
+ asmlinkage void do_softirq(void);
+ asmlinkage void __do_softirq(void);
++
++#ifndef CONFIG_PREEMPT_RT_FULL
+ static inline void thread_do_softirq(void) { do_softirq(); }
++#else
++extern void thread_do_softirq(void);
++#endif
+ 
+ extern void open_softirq(int nr, void (*action)(struct softirq_action *));
+ extern void softirq_init(void);
+@@ -650,6 +655,12 @@ void tasklet_hrtimer_cancel(struct tasklet_hrtimer *ttimer)
+ 	tasklet_kill(&ttimer->tasklet);
+ }
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++extern void softirq_early_init(void);
++#else
++static inline void softirq_early_init(void) { }
++#endif
++
+ /*
+  * Autoprobing for irqs:
+  *
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index af6cb0c..a84a901 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1601,6 +1601,7 @@ struct task_struct {
+ #endif
+ #ifdef CONFIG_PREEMPT_RT_BASE
+ 	struct rcu_head put_rcu;
++	int softirq_nestcnt;
+ #endif
+ };
+ 
+diff --git a/init/main.c b/init/main.c
+index 6569987..d432bea 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -490,6 +490,7 @@ asmlinkage void __init start_kernel(void)
+  * Interrupts are still disabled. Do necessary setups, then
+  * enable them
+  */
++	softirq_early_init();
+ 	tick_init();
+ 	boot_cpu_init();
+ 	page_address_init();
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 8332622..2c10a79 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -24,6 +24,7 @@
+ #include <linux/ftrace.h>
+ #include <linux/smp.h>
+ #include <linux/tick.h>
++#include <linux/locallock.h>
+ 
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/irq.h>
+@@ -165,6 +166,7 @@ static void handle_pending_softirqs(u32 pending, int cpu)
+ 	local_irq_disable();
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+  * preempt_count and SOFTIRQ_OFFSET usage:
+  * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
+@@ -368,6 +370,162 @@ asmlinkage void do_softirq(void)
+ 
+ #endif
+ 
++static inline void local_bh_disable_nort(void) { local_bh_disable(); }
++static inline void _local_bh_enable_nort(void) { _local_bh_enable(); }
++
++#else /* !PREEMPT_RT_FULL */
++
++/*
++ * On RT we serialize softirq execution with a cpu local lock
++ */
++static DEFINE_LOCAL_IRQ_LOCK(local_softirq_lock);
++static DEFINE_PER_CPU(struct task_struct *, local_softirq_runner);
++
++static void __do_softirq(void);
++
++void __init softirq_early_init(void)
++{
++	local_irq_lock_init(local_softirq_lock);
++}
++
++void local_bh_disable(void)
++{
++	migrate_disable();
++	current->softirq_nestcnt++;
++}
++EXPORT_SYMBOL(local_bh_disable);
++
++void local_bh_enable(void)
++{
++	if (WARN_ON(current->softirq_nestcnt == 0))
++		return;
++
++	if ((current->softirq_nestcnt == 1) &&
++	    local_softirq_pending() &&
++	    local_trylock(local_softirq_lock)) {
++
++		local_irq_disable();
++		if (local_softirq_pending())
++			__do_softirq();
++		local_irq_enable();
++		local_unlock(local_softirq_lock);
++		WARN_ON(current->softirq_nestcnt != 1);
++	}
++	current->softirq_nestcnt--;
++	migrate_enable();
++}
++EXPORT_SYMBOL(local_bh_enable);
++
++void local_bh_enable_ip(unsigned long ip)
++{
++	local_bh_enable();
++}
++EXPORT_SYMBOL(local_bh_enable_ip);
++
++/* For tracing */
++int notrace __in_softirq(void)
++{
++	if (__get_cpu_var(local_softirq_lock).owner == current)
++		return __get_cpu_var(local_softirq_lock).nestcnt;
++	return 0;
++}
++
++int in_serving_softirq(void)
++{
++	int res;
++
++	preempt_disable();
++	res = __get_cpu_var(local_softirq_runner) == current;
++	preempt_enable();
++	return res;
++}
++
++/*
++ * Called with bh and local interrupts disabled. For full RT cpu must
++ * be pinned.
++ */
++static void __do_softirq(void)
++{
++	u32 pending = local_softirq_pending();
++	int cpu = smp_processor_id();
++
++	current->softirq_nestcnt++;
++
++	/* Reset the pending bitmask before enabling irqs */
++	set_softirq_pending(0);
++
++	__get_cpu_var(local_softirq_runner) = current;
++
++	lockdep_softirq_enter();
++
++	handle_pending_softirqs(pending, cpu);
++
++	pending = local_softirq_pending();
++	if (pending)
++		wakeup_softirqd();
++
++	lockdep_softirq_exit();
++	__get_cpu_var(local_softirq_runner) = NULL;
++
++	current->softirq_nestcnt--;
++}
++
++static int __thread_do_softirq(int cpu)
++{
++	/*
++	 * Prevent the current cpu from going offline.
++	 * pin_current_cpu() can reenable preemption and block on the
++	 * hotplug mutex. When it returns, the current cpu is
++	 * pinned. It might be the wrong one, but the offline check
++	 * below catches that.
++	 */
++	pin_current_cpu();
++	/*
++	 * If called from ksoftirqd (cpu >= 0) we need to check
++	 * whether we are on the wrong cpu due to cpu offlining. If
++	 * called via thread_do_softirq() no action required.
++	 */
++	if (cpu >= 0 && cpu_is_offline(cpu)) {
++		unpin_current_cpu();
++		return -1;
++	}
++	preempt_enable();
++	local_lock(local_softirq_lock);
++	local_irq_disable();
++	/*
++	 * We cannot switch stacks on RT as we want to be able to
++	 * schedule!
++	 */
++	if (local_softirq_pending())
++		__do_softirq();
++	local_unlock(local_softirq_lock);
++	unpin_current_cpu();
++	preempt_disable();
++	local_irq_enable();
++	return 0;
++}
++
++/*
++ * Called from netif_rx_ni(). Preemption enabled.
++ */
++void thread_do_softirq(void)
++{
++	if (!in_serving_softirq()) {
++		preempt_disable();
++		__thread_do_softirq(-1);
++		preempt_enable();
++	}
++}
++
++static int ksoftirqd_do_softirq(int cpu)
++{
++	return __thread_do_softirq(cpu);
++}
++
++static inline void local_bh_disable_nort(void) { }
++static inline void _local_bh_enable_nort(void) { }
++
++#endif /* PREEMPT_RT_FULL */
+ /*
+  * Enter an interrupt context.
+  */
+@@ -381,9 +539,9 @@ void irq_enter(void)
+ 		 * Prevent raise_softirq from needlessly waking up ksoftirqd
+ 		 * here, as softirq will be serviced on return from interrupt.
+ 		 */
+-		local_bh_disable();
++		local_bh_disable_nort();
+ 		tick_check_idle(cpu);
+-		_local_bh_enable();
++		_local_bh_enable_nort();
+ 	}
+ 
+ 	__irq_enter();
+@@ -392,6 +550,7 @@ void irq_enter(void)
+ #ifdef __ARCH_IRQ_EXIT_IRQS_DISABLED
+ static inline void invoke_softirq(void)
+ {
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 	if (!force_irqthreads)
+ 		__do_softirq();
+ 	else {
+@@ -400,10 +559,14 @@ static inline void invoke_softirq(void)
+ 		wakeup_softirqd();
+ 		__local_bh_enable(SOFTIRQ_OFFSET);
+ 	}
++#else
++	wakeup_softirqd();
++#endif
+ }
+ #else
+ static inline void invoke_softirq(void)
+ {
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 	if (!force_irqthreads)
+ 		do_softirq();
+ 	else {
+@@ -412,6 +575,9 @@ static inline void invoke_softirq(void)
+ 		wakeup_softirqd();
+ 		__local_bh_enable(SOFTIRQ_OFFSET);
+ 	}
++#else
++	wakeup_softirqd();
++#endif
+ }
+ #endif
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0183-softirq-Export-in_serving_softirq.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0183-softirq-Export-in_serving_softirq.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0183-softirq-Export-in_serving_softirq.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0183-softirq-Export-in_serving_softirq.patch)
@@ -0,0 +1,33 @@
+From c0f99855495893c3c8281474ede568ced3dee991 Mon Sep 17 00:00:00 2001
+From: John Kacur <jkacur at redhat.com>
+Date: Mon, 14 Nov 2011 02:44:43 +0100
+Subject: [PATCH 183/271] softirq: Export in_serving_softirq()
+
+ERROR: "in_serving_softirq" [net/sched/cls_cgroup.ko] undefined!
+
+The above can be fixed by exporting in_serving_softirq
+
+Signed-off-by: John Kacur <jkacur at redhat.com>
+Cc: Paul McKenney <paulmck at linux.vnet.ibm.com>
+Cc: stable-rt at vger.kernel.org
+Link: http://lkml.kernel.org/r/1321235083-21756-2-git-send-email-jkacur@redhat.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/softirq.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 2c10a79..f107c07 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -439,6 +439,7 @@ int in_serving_softirq(void)
+ 	preempt_enable();
+ 	return res;
+ }
++EXPORT_SYMBOL(in_serving_softirq);
+ 
+ /*
+  * Called with bh and local interrupts disabled. For full RT cpu must
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0184-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0184-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0184-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0184-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch)
@@ -0,0 +1,45 @@
+From 025ec101e71cab9678031c784551be44433bb309 Mon Sep 17 00:00:00 2001
+From: Yong Zhang <yong.zhang0 at gmail.com>
+Date: Thu, 13 Oct 2011 17:19:09 +0800
+Subject: [PATCH 184/271] hardirq.h: Define softirq_count() as OUL to kill
+ build warning
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+kernel/lockdep.c: In function ‘print_bad_irq_dependency’:
+kernel/lockdep.c:1476:3: warning: format ‘%lu’ expects type ‘long unsigned int’, but argument 7 has type ‘unsigned int’
+kernel/lockdep.c: In function ‘print_usage_bug’:
+kernel/lockdep.c:2193:3: warning: format ‘%lu’ expects type ‘long unsigned int’, but argument 7 has type ‘unsigned int’
+
+kernel/lockdep.i show this:
+ printk("%s/%d [HC%u[%lu]:SC%u[%lu]:HE%u:SE%u] is trying to acquire:\n",
+  curr->comm, task_pid_nr(curr),
+  curr->hardirq_context, ((current_thread_info()->preempt_count) & (((1UL << (10))-1) << ((0 + 8) + 8))) >> ((0 + 8) + 8),
+  curr->softirq_context, (0U) >> (0 + 8),
+  curr->hardirqs_enabled,
+  curr->softirqs_enabled);
+
+Signed-off-by: Yong Zhang <yong.zhang0 at gmail.com>
+Link: http://lkml.kernel.org/r/20111013091909.GA32739@zhy
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/hardirq.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
+index 2f5d318..7059ce2 100644
+--- a/include/linux/hardirq.h
++++ b/include/linux/hardirq.h
+@@ -84,7 +84,7 @@
+ # define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
+ # define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
+ #else
+-# define softirq_count()	(0U)
++# define softirq_count()	(0UL)
+ extern int in_serving_softirq(void);
+ #endif
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0185-softirq-Fix-unplug-deadlock.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0185-softirq-Fix-unplug-deadlock.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0185-softirq-Fix-unplug-deadlock.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0185-softirq-Fix-unplug-deadlock.patch)
@@ -0,0 +1,68 @@
+From 1e5f781db53b65d0a833433618ce18160bf73c7b Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Fri, 30 Sep 2011 15:52:14 +0200
+Subject: [PATCH 185/271] softirq: Fix unplug deadlock
+
+If ksoftirqd gets woken during hot-unplug, __thread_do_softirq() will
+call pin_current_cpu() which will block on the held cpu_hotplug.lock.
+Moving the offline check in __thread_do_softirq() before the
+pin_current_cpu() call doesn't work, since the wakeup can happen
+before we mark the cpu offline.
+
+So here we have the ksoftirq thread stuck until hotplug finishes, but
+then the ksoftirq CPU_DOWN notifier issues kthread_stop() which will
+wait for the ksoftirq thread to go away -- while holding the hotplug
+lock.
+
+Sort this by delaying the kthread_stop() until CPU_POST_DEAD, which is
+outside of the cpu_hotplug.lock, but still serialized by the
+cpu_add_remove_lock.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Cc: rostedt <rostedt at goodmis.org>
+Cc: Clark Williams <williams at redhat.com>
+Link: http://lkml.kernel.org/r/1317391156.12973.3.camel@twins
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/softirq.c |    8 ++------
+ 1 file changed, 2 insertions(+), 6 deletions(-)
+
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index f107c07..56de566 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -1086,9 +1086,8 @@ static int __cpuinit cpu_callback(struct notifier_block *nfb,
+ 	int hotcpu = (unsigned long)hcpu;
+ 	struct task_struct *p;
+ 
+-	switch (action) {
++	switch (action & ~CPU_TASKS_FROZEN) {
+ 	case CPU_UP_PREPARE:
+-	case CPU_UP_PREPARE_FROZEN:
+ 		p = kthread_create_on_node(run_ksoftirqd,
+ 					   hcpu,
+ 					   cpu_to_node(hotcpu),
+@@ -1101,19 +1100,16 @@ static int __cpuinit cpu_callback(struct notifier_block *nfb,
+   		per_cpu(ksoftirqd, hotcpu) = p;
+  		break;
+ 	case CPU_ONLINE:
+-	case CPU_ONLINE_FROZEN:
+ 		wake_up_process(per_cpu(ksoftirqd, hotcpu));
+ 		break;
+ #ifdef CONFIG_HOTPLUG_CPU
+ 	case CPU_UP_CANCELED:
+-	case CPU_UP_CANCELED_FROZEN:
+ 		if (!per_cpu(ksoftirqd, hotcpu))
+ 			break;
+ 		/* Unbind so it can run.  Fall thru. */
+ 		kthread_bind(per_cpu(ksoftirqd, hotcpu),
+ 			     cpumask_any(cpu_online_mask));
+-	case CPU_DEAD:
+-	case CPU_DEAD_FROZEN: {
++	case CPU_POST_DEAD: {
+ 		static const struct sched_param param = {
+ 			.sched_priority = MAX_RT_PRIO-1
+ 		};
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0186-softirq-disable-softirq-stacks-for-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0186-softirq-disable-softirq-stacks-for-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0186-softirq-disable-softirq-stacks-for-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0186-softirq-disable-softirq-stacks-for-rt.patch.patch)
@@ -0,0 +1,196 @@
+From f2aa9a72082af69fa7e436c5ed9cac8c07bba10e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 18 Jul 2011 13:59:17 +0200
+Subject: [PATCH 186/271] softirq-disable-softirq-stacks-for-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/powerpc/kernel/irq.c     |    3 ++-
+ arch/powerpc/kernel/misc_32.S |    2 ++
+ arch/powerpc/kernel/misc_64.S |    2 ++
+ arch/sh/kernel/irq.c          |    2 ++
+ arch/sparc/kernel/irq_64.c    |    2 ++
+ arch/x86/kernel/entry_64.S    |    2 ++
+ arch/x86/kernel/irq_32.c      |    2 ++
+ arch/x86/kernel/irq_64.c      |    3 ++-
+ include/linux/interrupt.h     |    3 +--
+ 9 files changed, 17 insertions(+), 4 deletions(-)
+
+diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
+index 745c1e7..e0ee531 100644
+--- a/arch/powerpc/kernel/irq.c
++++ b/arch/powerpc/kernel/irq.c
+@@ -440,6 +440,7 @@ void irq_ctx_init(void)
+ 	}
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ static inline void do_softirq_onstack(void)
+ {
+ 	struct thread_info *curtp, *irqtp;
+@@ -476,7 +477,7 @@ void do_softirq(void)
+ 
+ 	local_irq_restore(flags);
+ }
+-
++#endif
+ 
+ /*
+  * IRQ controller and virtual interrupts
+diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
+index 7cd07b4..46c6073 100644
+--- a/arch/powerpc/kernel/misc_32.S
++++ b/arch/powerpc/kernel/misc_32.S
+@@ -36,6 +36,7 @@
+ 
+ 	.text
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ _GLOBAL(call_do_softirq)
+ 	mflr	r0
+ 	stw	r0,4(r1)
+@@ -46,6 +47,7 @@ _GLOBAL(call_do_softirq)
+ 	lwz	r0,4(r1)
+ 	mtlr	r0
+ 	blr
++#endif
+ 
+ _GLOBAL(call_handle_irq)
+ 	mflr	r0
+diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
+index 616921e..2961d75 100644
+--- a/arch/powerpc/kernel/misc_64.S
++++ b/arch/powerpc/kernel/misc_64.S
+@@ -29,6 +29,7 @@
+ 
+ 	.text
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ _GLOBAL(call_do_softirq)
+ 	mflr	r0
+ 	std	r0,16(r1)
+@@ -39,6 +40,7 @@ _GLOBAL(call_do_softirq)
+ 	ld	r0,16(r1)
+ 	mtlr	r0
+ 	blr
++#endif
+ 
+ _GLOBAL(call_handle_irq)
+ 	ld	r8,0(r6)
+diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
+index a3ee919..9127bc0 100644
+--- a/arch/sh/kernel/irq.c
++++ b/arch/sh/kernel/irq.c
+@@ -149,6 +149,7 @@ void irq_ctx_exit(int cpu)
+ 	hardirq_ctx[cpu] = NULL;
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ asmlinkage void do_softirq(void)
+ {
+ 	unsigned long flags;
+@@ -191,6 +192,7 @@ asmlinkage void do_softirq(void)
+ 
+ 	local_irq_restore(flags);
+ }
++#endif
+ #else
+ static inline void handle_one_irq(unsigned int irq)
+ {
+diff --git a/arch/sparc/kernel/irq_64.c b/arch/sparc/kernel/irq_64.c
+index d45b710..c3a3737 100644
+--- a/arch/sparc/kernel/irq_64.c
++++ b/arch/sparc/kernel/irq_64.c
+@@ -699,6 +699,7 @@ void __irq_entry handler_irq(int pil, struct pt_regs *regs)
+ 	set_irq_regs(old_regs);
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ void do_softirq(void)
+ {
+ 	unsigned long flags;
+@@ -724,6 +725,7 @@ void do_softirq(void)
+ 
+ 	local_irq_restore(flags);
+ }
++#endif
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+ void fixup_irqs(void)
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index faf8d5e..fb0f578 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -1192,6 +1192,7 @@ ENTRY(kernel_execve)
+ 	CFI_ENDPROC
+ END(kernel_execve)
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /* Call softirq on interrupt stack. Interrupts are off. */
+ ENTRY(call_softirq)
+ 	CFI_STARTPROC
+@@ -1211,6 +1212,7 @@ ENTRY(call_softirq)
+ 	ret
+ 	CFI_ENDPROC
+ END(call_softirq)
++#endif
+ 
+ #ifdef CONFIG_XEN
+ zeroentry xen_hypervisor_callback xen_do_hypervisor_callback
+diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
+index 7209070..84417a2 100644
+--- a/arch/x86/kernel/irq_32.c
++++ b/arch/x86/kernel/irq_32.c
+@@ -149,6 +149,7 @@ void __cpuinit irq_ctx_init(int cpu)
+ 	       cpu, per_cpu(hardirq_ctx, cpu),  per_cpu(softirq_ctx, cpu));
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ asmlinkage void do_softirq(void)
+ {
+ 	unsigned long flags;
+@@ -179,6 +180,7 @@ asmlinkage void do_softirq(void)
+ 
+ 	local_irq_restore(flags);
+ }
++#endif
+ 
+ bool handle_irq(unsigned irq, struct pt_regs *regs)
+ {
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index 69bca46..3fbc07d 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -65,7 +65,7 @@ bool handle_irq(unsigned irq, struct pt_regs *regs)
+ 	return true;
+ }
+ 
+-
++#ifndef CONFIG_PREEMPT_RT_FULL
+ extern void call_softirq(void);
+ 
+ asmlinkage void do_softirq(void)
+@@ -85,3 +85,4 @@ asmlinkage void do_softirq(void)
+ 	}
+ 	local_irq_restore(flags);
+ }
++#endif
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index 20d8dcc..1a74cf7 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -456,10 +456,9 @@ struct softirq_action
+ 	void	(*action)(struct softirq_action *);
+ };
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ asmlinkage void do_softirq(void);
+ asmlinkage void __do_softirq(void);
+-
+-#ifndef CONFIG_PREEMPT_RT_FULL
+ static inline void thread_do_softirq(void) { do_softirq(); }
+ #else
+ extern void thread_do_softirq(void);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0187-softirq-make-fifo.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0187-softirq-make-fifo.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0187-softirq-make-fifo.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0187-softirq-make-fifo.patch.patch)
@@ -0,0 +1,64 @@
+From aa7ec9052d287e201889565c4d876cc1c87015f4 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 21 Jul 2011 21:06:43 +0200
+Subject: [PATCH 187/271] softirq-make-fifo.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/softirq.c |   19 +++++++++++++++++++
+ 1 file changed, 19 insertions(+)
+
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 56de566..dd80cb4 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -372,6 +372,8 @@ asmlinkage void do_softirq(void)
+ 
+ static inline void local_bh_disable_nort(void) { local_bh_disable(); }
+ static inline void _local_bh_enable_nort(void) { _local_bh_enable(); }
++static inline void ksoftirqd_set_sched_params(void) { }
++static inline void ksoftirqd_clr_sched_params(void) { }
+ 
+ #else /* !PREEMPT_RT_FULL */
+ 
+@@ -526,6 +528,20 @@ static int ksoftirqd_do_softirq(int cpu)
+ static inline void local_bh_disable_nort(void) { }
+ static inline void _local_bh_enable_nort(void) { }
+ 
++static inline void ksoftirqd_set_sched_params(void)
++{
++	struct sched_param param = { .sched_priority = 1 };
++
++	sched_setscheduler(current, SCHED_FIFO, &param);
++}
++
++static inline void ksoftirqd_clr_sched_params(void)
++{
++	struct sched_param param = { .sched_priority = 0 };
++
++	sched_setscheduler(current, SCHED_NORMAL, &param);
++}
++
+ #endif /* PREEMPT_RT_FULL */
+ /*
+  * Enter an interrupt context.
+@@ -985,6 +1001,8 @@ void __init softirq_init(void)
+ 
+ static int run_ksoftirqd(void * __bind_cpu)
+ {
++	ksoftirqd_set_sched_params();
++
+ 	set_current_state(TASK_INTERRUPTIBLE);
+ 
+ 	while (!kthread_should_stop()) {
+@@ -1010,6 +1028,7 @@ static int run_ksoftirqd(void * __bind_cpu)
+ 
+ wait_to_die:
+ 	preempt_enable();
++	ksoftirqd_clr_sched_params();
+ 	/* Wait for kthread_stop */
+ 	set_current_state(TASK_INTERRUPTIBLE);
+ 	while (!kthread_should_stop()) {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0188-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0188-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0188-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0188-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch)
@@ -0,0 +1,410 @@
+From 068a7ede0cad9a9ccac0007999ccbfc51899e997 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Tue, 29 Nov 2011 20:18:22 -0500
+Subject: [PATCH 188/271] tasklet: Prevent tasklets from going into infinite
+ spin in RT
+
+When CONFIG_PREEMPT_RT_FULL is enabled, tasklets run as threads,
+and spinlocks turn into mutexes. But this can cause issues with
+tasks disabling tasklets. A tasklet runs under ksoftirqd, and
+if a tasklet is disabled with tasklet_disable(), the tasklet
+count is increased. When a tasklet runs, it checks this counter
+and, if it is set, adds itself back on the softirq queue and
+returns.
+
+The problem arises in RT because ksoftirqd will see that a softirq
+is ready to run (the tasklet softirq just re-armed itself), and will
+not sleep, but instead run the softirqs again. The tasklet softirq
+will still see that the count is non-zero, will not execute the
+tasklet, and will requeue itself on the softirq again, which will
+cause ksoftirqd to run it again and again.
+
+It gets worse because ksoftirqd runs as a real-time thread.
+If it preempted the task that disabled tasklets, and that task
+has migration disabled, or can't run for other reasons, the tasklet
+softirq will never run because the count will never be zero, and
+ksoftirqd will go into an infinite loop. As ksoftirqd is an RT
+task, this becomes a big problem.
+
+This is a hack of a solution: tasklet_disable() now stops tasklets,
+and when a disabled tasklet runs, instead of requeueing it on the
+softirq, the softirq delays it. When tasklet_enable() is called and
+tasklets are waiting, tasklet_enable() kicks the tasklets to continue.
+This prevents the lockup caused by ksoftirqd going into an infinite loop.
+
+[ rostedt at goodmis.org: ported to 3.0-rt ]
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/interrupt.h |   39 ++++-----
+ kernel/softirq.c          |  208 ++++++++++++++++++++++++++++++++-------------
+ 2 files changed, 170 insertions(+), 77 deletions(-)
+
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index 1a74cf7..bb4b441 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -517,8 +517,9 @@ extern void __send_remote_softirq(struct call_single_data *cp, int cpu,
+      to be executed on some cpu at least once after this.
+    * If the tasklet is already scheduled, but its execution is still not
+      started, it will be executed only once.
+-   * If this tasklet is already running on another CPU (or schedule is called
+-     from tasklet itself), it is rescheduled for later.
++   * If this tasklet is already running on another CPU, it is rescheduled
++     for later.
++   * Schedule must not be called from the tasklet itself (a lockup occurs)
+    * Tasklet is strictly serialized wrt itself, but not
+      wrt another tasklets. If client needs some intertask synchronization,
+      he makes it with spinlocks.
+@@ -543,27 +544,36 @@ struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(1), func, data }
+ enum
+ {
+ 	TASKLET_STATE_SCHED,	/* Tasklet is scheduled for execution */
+-	TASKLET_STATE_RUN	/* Tasklet is running (SMP only) */
++	TASKLET_STATE_RUN,	/* Tasklet is running (SMP only) */
++	TASKLET_STATE_PENDING	/* Tasklet is pending */
+ };
+ 
+-#ifdef CONFIG_SMP
++#define TASKLET_STATEF_SCHED	(1 << TASKLET_STATE_SCHED)
++#define TASKLET_STATEF_RUN	(1 << TASKLET_STATE_RUN)
++#define TASKLET_STATEF_PENDING	(1 << TASKLET_STATE_PENDING)
++
++#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
+ static inline int tasklet_trylock(struct tasklet_struct *t)
+ {
+ 	return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
+ }
+ 
++static inline int tasklet_tryunlock(struct tasklet_struct *t)
++{
++	return cmpxchg(&t->state, TASKLET_STATEF_RUN, 0) == TASKLET_STATEF_RUN;
++}
++
+ static inline void tasklet_unlock(struct tasklet_struct *t)
+ {
+ 	smp_mb__before_clear_bit(); 
+ 	clear_bit(TASKLET_STATE_RUN, &(t)->state);
+ }
+ 
+-static inline void tasklet_unlock_wait(struct tasklet_struct *t)
+-{
+-	while (test_bit(TASKLET_STATE_RUN, &(t)->state)) { barrier(); }
+-}
++extern void tasklet_unlock_wait(struct tasklet_struct *t);
++
+ #else
+ #define tasklet_trylock(t) 1
++#define tasklet_tryunlock(t)	1
+ #define tasklet_unlock_wait(t) do { } while (0)
+ #define tasklet_unlock(t) do { } while (0)
+ #endif
+@@ -612,17 +622,8 @@ static inline void tasklet_disable(struct tasklet_struct *t)
+ 	smp_mb();
+ }
+ 
+-static inline void tasklet_enable(struct tasklet_struct *t)
+-{
+-	smp_mb__before_atomic_dec();
+-	atomic_dec(&t->count);
+-}
+-
+-static inline void tasklet_hi_enable(struct tasklet_struct *t)
+-{
+-	smp_mb__before_atomic_dec();
+-	atomic_dec(&t->count);
+-}
++extern  void tasklet_enable(struct tasklet_struct *t);
++extern  void tasklet_hi_enable(struct tasklet_struct *t);
+ 
+ extern void tasklet_kill(struct tasklet_struct *t);
+ extern void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu);
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index dd80cb4..92b4ca3 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -21,6 +21,7 @@
+ #include <linux/freezer.h>
+ #include <linux/kthread.h>
+ #include <linux/rcupdate.h>
++#include <linux/delay.h>
+ #include <linux/ftrace.h>
+ #include <linux/smp.h>
+ #include <linux/tick.h>
+@@ -664,15 +665,45 @@ struct tasklet_head
+ static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec);
+ static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec);
+ 
++static void inline
++__tasklet_common_schedule(struct tasklet_struct *t, struct tasklet_head *head, unsigned int nr)
++{
++	if (tasklet_trylock(t)) {
++again:
++		/* We may have been preempted before tasklet_trylock
++		 * and __tasklet_action may have already run.
++		 * So double check the sched bit while the tasklet
++		 * is locked before adding it to the list.
++		 */
++		if (test_bit(TASKLET_STATE_SCHED, &t->state)) {
++			t->next = NULL;
++			*head->tail = t;
++			head->tail = &(t->next);
++			raise_softirq_irqoff(nr);
++			tasklet_unlock(t);
++		} else {
++			/* This is subtle. If we hit the corner case above
++			 * It is possible that we get preempted right here,
++			 * and another task has successfully called
++			 * tasklet_schedule(), then this function, and
++			 * failed on the trylock. Thus we must be sure
++			 * before releasing the tasklet lock, that the
++			 * SCHED_BIT is clear. Otherwise the tasklet
++			 * may get its SCHED_BIT set, but not added to the
++			 * list
++			 */
++			if (!tasklet_tryunlock(t))
++				goto again;
++		}
++	}
++}
++
+ void __tasklet_schedule(struct tasklet_struct *t)
+ {
+ 	unsigned long flags;
+ 
+ 	local_irq_save(flags);
+-	t->next = NULL;
+-	*__this_cpu_read(tasklet_vec.tail) = t;
+-	__this_cpu_write(tasklet_vec.tail, &(t->next));
+-	raise_softirq_irqoff(TASKLET_SOFTIRQ);
++	__tasklet_common_schedule(t, &__get_cpu_var(tasklet_vec), TASKLET_SOFTIRQ);
+ 	local_irq_restore(flags);
+ }
+ 
+@@ -683,10 +714,7 @@ void __tasklet_hi_schedule(struct tasklet_struct *t)
+ 	unsigned long flags;
+ 
+ 	local_irq_save(flags);
+-	t->next = NULL;
+-	*__this_cpu_read(tasklet_hi_vec.tail) = t;
+-	__this_cpu_write(tasklet_hi_vec.tail,  &(t->next));
+-	raise_softirq_irqoff(HI_SOFTIRQ);
++	__tasklet_common_schedule(t, &__get_cpu_var(tasklet_hi_vec), HI_SOFTIRQ);
+ 	local_irq_restore(flags);
+ }
+ 
+@@ -694,50 +722,119 @@ EXPORT_SYMBOL(__tasklet_hi_schedule);
+ 
+ void __tasklet_hi_schedule_first(struct tasklet_struct *t)
+ {
+-	BUG_ON(!irqs_disabled());
+-
+-	t->next = __this_cpu_read(tasklet_hi_vec.head);
+-	__this_cpu_write(tasklet_hi_vec.head, t);
+-	__raise_softirq_irqoff(HI_SOFTIRQ);
++	__tasklet_hi_schedule(t);
+ }
+ 
+ EXPORT_SYMBOL(__tasklet_hi_schedule_first);
+ 
+-static void tasklet_action(struct softirq_action *a)
++void  tasklet_enable(struct tasklet_struct *t)
+ {
+-	struct tasklet_struct *list;
++	if (!atomic_dec_and_test(&t->count))
++		return;
++	if (test_and_clear_bit(TASKLET_STATE_PENDING, &t->state))
++		tasklet_schedule(t);
++}
+ 
+-	local_irq_disable();
+-	list = __this_cpu_read(tasklet_vec.head);
+-	__this_cpu_write(tasklet_vec.head, NULL);
+-	__this_cpu_write(tasklet_vec.tail, &__get_cpu_var(tasklet_vec).head);
+-	local_irq_enable();
++EXPORT_SYMBOL(tasklet_enable);
++
++void  tasklet_hi_enable(struct tasklet_struct *t)
++{
++	if (!atomic_dec_and_test(&t->count))
++		return;
++	if (test_and_clear_bit(TASKLET_STATE_PENDING, &t->state))
++		tasklet_hi_schedule(t);
++}
++
++EXPORT_SYMBOL(tasklet_hi_enable);
++
++static void
++__tasklet_action(struct softirq_action *a, struct tasklet_struct *list)
++{
++	int loops = 1000000;
+ 
+ 	while (list) {
+ 		struct tasklet_struct *t = list;
+ 
+ 		list = list->next;
+ 
+-		if (tasklet_trylock(t)) {
+-			if (!atomic_read(&t->count)) {
+-				if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
+-					BUG();
+-				t->func(t->data);
+-				tasklet_unlock(t);
+-				continue;
+-			}
+-			tasklet_unlock(t);
++		/*
++		 * Should always succeed - after a tasklet got on the
++		 * list (after getting the SCHED bit set from 0 to 1),
++		 * nothing but the tasklet softirq it got queued to can
++		 * lock it:
++		 */
++		if (!tasklet_trylock(t)) {
++			WARN_ON(1);
++			continue;
+ 		}
+ 
+-		local_irq_disable();
+ 		t->next = NULL;
+-		*__this_cpu_read(tasklet_vec.tail) = t;
+-		__this_cpu_write(tasklet_vec.tail, &(t->next));
+-		__raise_softirq_irqoff(TASKLET_SOFTIRQ);
+-		local_irq_enable();
++
++		/*
++		 * If we cannot handle the tasklet because it's disabled,
++		 * mark it as pending. tasklet_enable() will later
++		 * re-schedule the tasklet.
++		 */
++		if (unlikely(atomic_read(&t->count))) {
++out_disabled:
++			/* implicit unlock: */
++			wmb();
++			t->state = TASKLET_STATEF_PENDING;
++			continue;
++		}
++
++		/*
++		 * After this point on the tasklet might be rescheduled
++		 * on another CPU, but it can only be added to another
++		 * CPU's tasklet list if we unlock the tasklet (which we
++		 * don't do yet).
++		 */
++		if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
++			WARN_ON(1);
++
++again:
++		t->func(t->data);
++
++		/*
++		 * Try to unlock the tasklet. We must use cmpxchg, because
++		 * another CPU might have scheduled or disabled the tasklet.
++		 * We only allow the STATE_RUN -> 0 transition here.
++		 */
++		while (!tasklet_tryunlock(t)) {
++			/*
++			 * If it got disabled meanwhile, bail out:
++			 */
++			if (atomic_read(&t->count))
++				goto out_disabled;
++			/*
++			 * If it got scheduled meanwhile, re-execute
++			 * the tasklet function:
++			 */
++			if (test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
++				goto again;
++			if (!--loops) {
++				printk("hm, tasklet state: %08lx\n", t->state);
++				WARN_ON(1);
++				tasklet_unlock(t);
++				break;
++			}
++		}
+ 	}
+ }
+ 
++static void tasklet_action(struct softirq_action *a)
++{
++	struct tasklet_struct *list;
++
++	local_irq_disable();
++	list = __get_cpu_var(tasklet_vec).head;
++	__get_cpu_var(tasklet_vec).head = NULL;
++	__get_cpu_var(tasklet_vec).tail = &__get_cpu_var(tasklet_vec).head;
++	local_irq_enable();
++
++	__tasklet_action(a, list);
++}
++
+ static void tasklet_hi_action(struct softirq_action *a)
+ {
+ 	struct tasklet_struct *list;
+@@ -748,29 +845,7 @@ static void tasklet_hi_action(struct softirq_action *a)
+ 	__this_cpu_write(tasklet_hi_vec.tail, &__get_cpu_var(tasklet_hi_vec).head);
+ 	local_irq_enable();
+ 
+-	while (list) {
+-		struct tasklet_struct *t = list;
+-
+-		list = list->next;
+-
+-		if (tasklet_trylock(t)) {
+-			if (!atomic_read(&t->count)) {
+-				if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
+-					BUG();
+-				t->func(t->data);
+-				tasklet_unlock(t);
+-				continue;
+-			}
+-			tasklet_unlock(t);
+-		}
+-
+-		local_irq_disable();
+-		t->next = NULL;
+-		*__this_cpu_read(tasklet_hi_vec.tail) = t;
+-		__this_cpu_write(tasklet_hi_vec.tail, &(t->next));
+-		__raise_softirq_irqoff(HI_SOFTIRQ);
+-		local_irq_enable();
+-	}
++	__tasklet_action(a, list);
+ }
+ 
+ 
+@@ -793,7 +868,7 @@ void tasklet_kill(struct tasklet_struct *t)
+ 
+ 	while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
+ 		do {
+-			yield();
++			msleep(1);
+ 		} while (test_bit(TASKLET_STATE_SCHED, &t->state));
+ 	}
+ 	tasklet_unlock_wait(t);
+@@ -999,6 +1074,23 @@ void __init softirq_init(void)
+ 	open_softirq(HI_SOFTIRQ, tasklet_hi_action);
+ }
+ 
++#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
++void tasklet_unlock_wait(struct tasklet_struct *t)
++{
++	while (test_bit(TASKLET_STATE_RUN, &(t)->state)) {
++		/*
++		 * Hack for now to avoid this busy-loop:
++		 */
++#ifdef CONFIG_PREEMPT_RT_FULL
++		msleep(1);
++#else
++		barrier();
++#endif
++	}
++}
++EXPORT_SYMBOL(tasklet_unlock_wait);
++#endif
++
+ static int run_ksoftirqd(void * __bind_cpu)
+ {
+ 	ksoftirqd_set_sched_params();
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0189-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0189-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0189-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0189-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch)
@@ -0,0 +1,159 @@
+From 3d44f43031f26b72b0c992b4c9fa6086cfb7b10c Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 31 Jan 2012 13:01:27 +0100
+Subject: [PATCH 189/271] genirq: Allow disabling of softirq processing in irq
+ thread context
+
+The processing of softirqs in irq thread context is a performance gain
+for the non-rt workloads of a system, but it's counterproductive for
+interrupts which are explicitly related to the realtime
+workload. Allow such interrupts to prevent softirq processing in their
+thread context.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ include/linux/interrupt.h |    2 ++
+ include/linux/irq.h       |    5 ++++-
+ kernel/irq/manage.c       |   13 ++++++++++++-
+ kernel/irq/settings.h     |   12 ++++++++++++
+ kernel/softirq.c          |    7 +++++++
+ 5 files changed, 37 insertions(+), 2 deletions(-)
+
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index bb4b441..f70a65b 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -61,6 +61,7 @@
+  * IRQF_NO_THREAD - Interrupt cannot be threaded
+  * IRQF_EARLY_RESUME - Resume IRQ early during syscore instead of at device
+  *                resume time.
++ * IRQF_NO_SOFTIRQ_CALL - Do not process softirqs in the irq thread context (RT)
+  */
+ #define IRQF_DISABLED		0x00000020
+ #define IRQF_SAMPLE_RANDOM	0x00000040
+@@ -75,6 +76,7 @@
+ #define IRQF_FORCE_RESUME	0x00008000
+ #define IRQF_NO_THREAD		0x00010000
+ #define IRQF_EARLY_RESUME	0x00020000
++#define IRQF_NO_SOFTIRQ_CALL	0x00040000
+ 
+ #define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
+ 
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index bff29c5..3838b53 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -67,6 +67,7 @@ typedef	void (*irq_preflow_handler_t)(struct irq_data *data);
+  * IRQ_MOVE_PCNTXT		- Interrupt can be migrated from process context
+  * IRQ_NESTED_TRHEAD		- Interrupt nests into another thread
+  * IRQ_PER_CPU_DEVID		- Dev_id is a per-cpu variable
++ * IRQ_NO_SOFTIRQ_CALL		- No softirq processing in the irq thread context (RT)
+  */
+ enum {
+ 	IRQ_TYPE_NONE		= 0x00000000,
+@@ -90,12 +91,14 @@ enum {
+ 	IRQ_NESTED_THREAD	= (1 << 15),
+ 	IRQ_NOTHREAD		= (1 << 16),
+ 	IRQ_PER_CPU_DEVID	= (1 << 17),
++	IRQ_NO_SOFTIRQ_CALL	= (1 << 18),
+ };
+ 
+ #define IRQF_MODIFY_MASK	\
+ 	(IRQ_TYPE_SENSE_MASK | IRQ_NOPROBE | IRQ_NOREQUEST | \
+ 	 IRQ_NOAUTOEN | IRQ_MOVE_PCNTXT | IRQ_LEVEL | IRQ_NO_BALANCING | \
+-	 IRQ_PER_CPU | IRQ_NESTED_THREAD | IRQ_NOTHREAD | IRQ_PER_CPU_DEVID)
++	 IRQ_PER_CPU | IRQ_NESTED_THREAD | IRQ_NOTHREAD | IRQ_PER_CPU_DEVID | \
++	 IRQ_NO_SOFTIRQ_CALL)
+ 
+ #define IRQ_NO_BALANCING_MASK	(IRQ_PER_CPU | IRQ_NO_BALANCING)
+ 
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index b3e6228..87dc053 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -742,7 +742,15 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
+ 	local_bh_disable();
+ 	ret = action->thread_fn(action->irq, action->dev_id);
+ 	irq_finalize_oneshot(desc, action, false);
+-	local_bh_enable();
++	/*
++	 * Interrupts which have real time requirements can be set up
++	 * to avoid softirq processing in the thread handler. This is
++	 * safe as these interrupts do not raise soft interrupts.
++	 */
++	if (irq_settings_no_softirq_call(desc))
++		_local_bh_enable();
++	else
++		local_bh_enable();
+ 	return ret;
+ }
+ 
+@@ -1072,6 +1080,9 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
+ 			irqd_set(&desc->irq_data, IRQD_NO_BALANCING);
+ 		}
+ 
++		if (new->flags & IRQF_NO_SOFTIRQ_CALL)
++			irq_settings_set_no_softirq_call(desc);
++
+ 		/* Set default affinity mask once everything is setup */
+ 		setup_affinity(irq, desc, mask);
+ 
+diff --git a/kernel/irq/settings.h b/kernel/irq/settings.h
+index 1162f10..0d2c381 100644
+--- a/kernel/irq/settings.h
++++ b/kernel/irq/settings.h
+@@ -14,6 +14,7 @@ enum {
+ 	_IRQ_NO_BALANCING	= IRQ_NO_BALANCING,
+ 	_IRQ_NESTED_THREAD	= IRQ_NESTED_THREAD,
+ 	_IRQ_PER_CPU_DEVID	= IRQ_PER_CPU_DEVID,
++	_IRQ_NO_SOFTIRQ_CALL	= IRQ_NO_SOFTIRQ_CALL,
+ 	_IRQF_MODIFY_MASK	= IRQF_MODIFY_MASK,
+ };
+ 
+@@ -26,6 +27,7 @@ enum {
+ #define IRQ_NOAUTOEN		GOT_YOU_MORON
+ #define IRQ_NESTED_THREAD	GOT_YOU_MORON
+ #define IRQ_PER_CPU_DEVID	GOT_YOU_MORON
++#define IRQ_NO_SOFTIRQ_CALL	GOT_YOU_MORON
+ #undef IRQF_MODIFY_MASK
+ #define IRQF_MODIFY_MASK	GOT_YOU_MORON
+ 
+@@ -36,6 +38,16 @@ irq_settings_clr_and_set(struct irq_desc *desc, u32 clr, u32 set)
+ 	desc->status_use_accessors |= (set & _IRQF_MODIFY_MASK);
+ }
+ 
++static inline bool irq_settings_no_softirq_call(struct irq_desc *desc)
++{
++	return desc->status_use_accessors & _IRQ_NO_SOFTIRQ_CALL;
++}
++
++static inline void irq_settings_set_no_softirq_call(struct irq_desc *desc)
++{
++	desc->status_use_accessors |= _IRQ_NO_SOFTIRQ_CALL;
++}
++
+ static inline bool irq_settings_is_per_cpu(struct irq_desc *desc)
+ {
+ 	return desc->status_use_accessors & _IRQ_PER_CPU;
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 92b4ca3..bef08f3 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -425,6 +425,13 @@ void local_bh_enable_ip(unsigned long ip)
+ }
+ EXPORT_SYMBOL(local_bh_enable_ip);
+ 
++void _local_bh_enable(void)
++{
++	current->softirq_nestcnt--;
++	migrate_enable();
++}
++EXPORT_SYMBOL(_local_bh_enable);
++
+ /* For tracing */
+ int notrace __in_softirq(void)
+ {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0190-local-vars-migrate-disable.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0190-local-vars-migrate-disable.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0190-local-vars-migrate-disable.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0190-local-vars-migrate-disable.patch.patch)
@@ -0,0 +1,52 @@
+From 420804f601e4766e2a4c61617fb6aad1a36e64b1 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 28 Jun 2011 20:42:16 +0200
+Subject: [PATCH 190/271] local-vars-migrate-disable.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/percpu.h |   28 ++++++++++++++++++++++++----
+ 1 file changed, 24 insertions(+), 4 deletions(-)
+
+diff --git a/include/linux/percpu.h b/include/linux/percpu.h
+index 3941ea4..58163cd 100644
+--- a/include/linux/percpu.h
++++ b/include/linux/percpu.h
+@@ -48,10 +48,30 @@
+ 	preempt_enable();				\
+ } while (0)
+ 
+-#define get_local_var(var)	get_cpu_var(var)
+-#define put_local_var(var)	put_cpu_var(var)
+-#define get_local_ptr(var)	get_cpu_ptr(var)
+-#define put_local_ptr(var)	put_cpu_ptr(var)
++#ifndef CONFIG_PREEMPT_RT_FULL
++# define get_local_var(var)	get_cpu_var(var)
++# define put_local_var(var)	put_cpu_var(var)
++# define get_local_ptr(var)	get_cpu_ptr(var)
++# define put_local_ptr(var)	put_cpu_ptr(var)
++#else
++# define get_local_var(var) (*({			\
++	migrate_disable();				\
++	&__get_cpu_var(var); }))
++
++# define put_local_var(var) do {			\
++	(void)&(var);					\
++	migrate_enable();				\
++} while (0)
++
++# define get_local_ptr(var) ({				\
++	migrate_disable();				\
++	this_cpu_ptr(var); })
++
++# define put_local_ptr(var) do {			\
++	(void)(var);					\
++	migrate_enable();				\
++} while (0)
++#endif
+ 
+ /* minimum unit size, also is the maximum supported allocation size */
+ #define PCPU_MIN_UNIT_SIZE		PFN_ALIGN(32 << 10)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0191-md-raid5-Make-raid5_percpu-handling-RT-aware.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0191-md-raid5-Make-raid5_percpu-handling-RT-aware.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0191-md-raid5-Make-raid5_percpu-handling-RT-aware.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0191-md-raid5-Make-raid5_percpu-handling-RT-aware.patch)
@@ -0,0 +1,68 @@
+From e9092a457dc56054064e8e8a41619bfd154e7367 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 6 Apr 2010 16:51:31 +0200
+Subject: [PATCH 191/271] md: raid5: Make raid5_percpu handling RT aware
+
+__raid_run_ops() disables preemption with get_cpu() around the access
+to the raid5_percpu variables. That causes scheduling while atomic
+spews on RT.
+
+Serialize the access to the percpu data with a lock and keep the code
+preemptible.
+
+Reported-by: Udo van den Heuvel <udovdh at xs4all.nl>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Tested-by: Udo van den Heuvel <udovdh at xs4all.nl>
+---
+ drivers/md/raid5.c |    7 +++++--
+ drivers/md/raid5.h |    1 +
+ 2 files changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 858fdbb..c2401e8 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -1245,8 +1245,9 @@ static void __raid_run_ops(struct stripe_head *sh, unsigned long ops_request)
+ 	struct raid5_percpu *percpu;
+ 	unsigned long cpu;
+ 
+-	cpu = get_cpu();
++	cpu = get_cpu_light();
+ 	percpu = per_cpu_ptr(conf->percpu, cpu);
++	spin_lock(&percpu->lock);
+ 	if (test_bit(STRIPE_OP_BIOFILL, &ops_request)) {
+ 		ops_run_biofill(sh);
+ 		overlap_clear++;
+@@ -1298,7 +1299,8 @@ static void __raid_run_ops(struct stripe_head *sh, unsigned long ops_request)
+ 			if (test_and_clear_bit(R5_Overlap, &dev->flags))
+ 				wake_up(&sh->raid_conf->wait_for_overlap);
+ 		}
+-	put_cpu();
++	spin_unlock(&percpu->lock);
++	put_cpu_light();
+ }
+ 
+ #ifdef CONFIG_MULTICORE_RAID456
+@@ -4539,6 +4541,7 @@ static int raid5_alloc_percpu(struct r5conf *conf)
+ 			break;
+ 		}
+ 		per_cpu_ptr(conf->percpu, cpu)->scribble = scribble;
++		spin_lock_init(&per_cpu_ptr(conf->percpu, cpu)->lock);
+ 	}
+ #ifdef CONFIG_HOTPLUG_CPU
+ 	conf->cpu_notify.notifier_call = raid456_cpu_notify;
+diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
+index e10c553..010a969 100644
+--- a/drivers/md/raid5.h
++++ b/drivers/md/raid5.h
+@@ -405,6 +405,7 @@ struct r5conf {
+ 	int			recovery_disabled;
+ 	/* per cpu variables */
+ 	struct raid5_percpu {
++		spinlock_t	lock;	     /* Protection for -RT */
+ 		struct page	*spare_page; /* Used when checking P/Q in raid6 */
+ 		void		*scribble;   /* space for constructing buffer
+ 					      * lists and performing address
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0192-rtmutex-lock-killable.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0192-rtmutex-lock-killable.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0192-rtmutex-lock-killable.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0192-rtmutex-lock-killable.patch.patch)
@@ -0,0 +1,88 @@
+From f56e061a1c52b1faf6c383275e442969c1b5766e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 9 Jun 2011 11:43:52 +0200
+Subject: [PATCH 192/271] rtmutex-lock-killable.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rtmutex.h |    1 +
+ kernel/rtmutex.c        |   33 +++++++++++++++++++++++++++------
+ 2 files changed, 28 insertions(+), 6 deletions(-)
+
+diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
+index de17134..3561eb2 100644
+--- a/include/linux/rtmutex.h
++++ b/include/linux/rtmutex.h
+@@ -90,6 +90,7 @@ extern void rt_mutex_destroy(struct rt_mutex *lock);
+ extern void rt_mutex_lock(struct rt_mutex *lock);
+ extern int rt_mutex_lock_interruptible(struct rt_mutex *lock,
+ 						int detect_deadlock);
++extern int rt_mutex_lock_killable(struct rt_mutex *lock, int detect_deadlock);
+ extern int rt_mutex_timed_lock(struct rt_mutex *lock,
+ 					struct hrtimer_sleeper *timeout,
+ 					int detect_deadlock);
+diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
+index f9d8482..723fd3a 100644
+--- a/kernel/rtmutex.c
++++ b/kernel/rtmutex.c
+@@ -799,12 +799,12 @@ EXPORT_SYMBOL_GPL(rt_mutex_lock);
+ /**
+  * rt_mutex_lock_interruptible - lock a rt_mutex interruptible
+  *
+- * @lock: 		the rt_mutex to be locked
++ * @lock:		the rt_mutex to be locked
+  * @detect_deadlock:	deadlock detection on/off
+  *
+  * Returns:
+- *  0 		on success
+- * -EINTR 	when interrupted by a signal
++ *  0		on success
++ * -EINTR	when interrupted by a signal
+  * -EDEADLK	when the lock would deadlock (when deadlock detection is on)
+  */
+ int __sched rt_mutex_lock_interruptible(struct rt_mutex *lock,
+@@ -818,17 +818,38 @@ int __sched rt_mutex_lock_interruptible(struct rt_mutex *lock,
+ EXPORT_SYMBOL_GPL(rt_mutex_lock_interruptible);
+ 
+ /**
++ * rt_mutex_lock_killable - lock a rt_mutex killable
++ *
++ * @lock:		the rt_mutex to be locked
++ * @detect_deadlock:	deadlock detection on/off
++ *
++ * Returns:
++ *  0		on success
++ * -EINTR	when interrupted by a signal
++ * -EDEADLK	when the lock would deadlock (when deadlock detection is on)
++ */
++int __sched rt_mutex_lock_killable(struct rt_mutex *lock,
++				   int detect_deadlock)
++{
++	might_sleep();
++
++	return rt_mutex_fastlock(lock, TASK_KILLABLE,
++				 detect_deadlock, rt_mutex_slowlock);
++}
++EXPORT_SYMBOL_GPL(rt_mutex_lock_killable);
++
++/**
+  * rt_mutex_timed_lock - lock a rt_mutex interruptible
+  *			the timeout structure is provided
+  *			by the caller
+  *
+- * @lock: 		the rt_mutex to be locked
++ * @lock:		the rt_mutex to be locked
+  * @timeout:		timeout structure or NULL (no timeout)
+  * @detect_deadlock:	deadlock detection on/off
+  *
+  * Returns:
+- *  0 		on success
+- * -EINTR 	when interrupted by a signal
++ *  0		on success
++ * -EINTR	when interrupted by a signal
+  * -ETIMEDOUT	when the timeout expired
+  * -EDEADLK	when the lock would deadlock (when deadlock detection is on)
+  */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0193-rtmutex-futex-prepare-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0193-rtmutex-futex-prepare-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0193-rtmutex-futex-prepare-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0193-rtmutex-futex-prepare-rt.patch.patch)
@@ -0,0 +1,225 @@
+From 63729a815de553f61266c16c06b6b586bdf40eb4 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 10 Jun 2011 11:04:15 +0200
+Subject: [PATCH 193/271] rtmutex-futex-prepare-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/futex.c          |   77 ++++++++++++++++++++++++++++++++++++++---------
+ kernel/rtmutex.c        |   31 ++++++++++++++++---
+ kernel/rtmutex_common.h |    2 ++
+ 3 files changed, 91 insertions(+), 19 deletions(-)
+
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 866c9d5..840fcea 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -1423,6 +1423,16 @@ retry_private:
+ 				requeue_pi_wake_futex(this, &key2, hb2);
+ 				drop_count++;
+ 				continue;
++			} else if (ret == -EAGAIN) {
++				/*
++				 * Waiter was woken by timeout or
++				 * signal and has set pi_blocked_on to
++				 * PI_WAKEUP_INPROGRESS before we
++				 * tried to enqueue it on the rtmutex.
++				 */
++				this->pi_state = NULL;
++				free_pi_state(pi_state);
++				continue;
+ 			} else if (ret) {
+ 				/* -EDEADLK */
+ 				this->pi_state = NULL;
+@@ -2267,7 +2277,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ 	struct hrtimer_sleeper timeout, *to = NULL;
+ 	struct rt_mutex_waiter rt_waiter;
+ 	struct rt_mutex *pi_mutex = NULL;
+-	struct futex_hash_bucket *hb;
++	struct futex_hash_bucket *hb, *hb2;
+ 	union futex_key key2 = FUTEX_KEY_INIT;
+ 	struct futex_q q = futex_q_init;
+ 	int res, ret;
+@@ -2311,20 +2321,55 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ 	/* Queue the futex_q, drop the hb lock, wait for wakeup. */
+ 	futex_wait_queue_me(hb, &q, to);
+ 
+-	spin_lock(&hb->lock);
+-	ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
+-	spin_unlock(&hb->lock);
+-	if (ret)
+-		goto out_put_keys;
++	/*
++	 * On RT we must avoid races with requeue and trying to block
++	 * on two mutexes (hb->lock and uaddr2's rtmutex) by
++	 * serializing access to pi_blocked_on with pi_lock.
++	 */
++	raw_spin_lock_irq(&current->pi_lock);
++	if (current->pi_blocked_on) {
++		/*
++		 * We have been requeued or are in the process of
++		 * being requeued.
++		 */
++		raw_spin_unlock_irq(&current->pi_lock);
++	} else {
++		/*
++		 * Setting pi_blocked_on to PI_WAKEUP_INPROGRESS
++		 * prevents a concurrent requeue from moving us to the
++		 * uaddr2 rtmutex. After that we can safely acquire
++		 * (and possibly block on) hb->lock.
++		 */
++		current->pi_blocked_on = PI_WAKEUP_INPROGRESS;
++		raw_spin_unlock_irq(&current->pi_lock);
++
++		spin_lock(&hb->lock);
++
++		/*
++		 * Clean up pi_blocked_on. We might leak it otherwise
++		 * when we succeeded with the hb->lock in the fast
++		 * path.
++		 */
++		raw_spin_lock_irq(&current->pi_lock);
++		current->pi_blocked_on = NULL;
++		raw_spin_unlock_irq(&current->pi_lock);
++
++		ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
++		spin_unlock(&hb->lock);
++		if (ret)
++			goto out_put_keys;
++	}
+ 
+ 	/*
+-	 * In order for us to be here, we know our q.key == key2, and since
+-	 * we took the hb->lock above, we also know that futex_requeue() has
+-	 * completed and we no longer have to concern ourselves with a wakeup
+-	 * race with the atomic proxy lock acquisition by the requeue code. The
+-	 * futex_requeue dropped our key1 reference and incremented our key2
+-	 * reference count.
++	 * In order to be here, we have either been requeued, are in
++	 * the process of being requeued, or requeue successfully
++	 * acquired uaddr2 on our behalf.  If pi_blocked_on was
++	 * non-null above, we may be racing with a requeue.  Do not
++	 * rely on q->lock_ptr to be hb2->lock until after blocking on
++	 * hb->lock or hb2->lock. The futex_requeue dropped our key1
++	 * reference and incremented our key2 reference count.
+ 	 */
++	hb2 = hash_futex(&key2);
+ 
+ 	/* Check if the requeue code acquired the second futex for us. */
+ 	if (!q.rt_waiter) {
+@@ -2333,9 +2378,10 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ 		 * did a lock-steal - fix up the PI-state in that case.
+ 		 */
+ 		if (q.pi_state && (q.pi_state->owner != current)) {
+-			spin_lock(q.lock_ptr);
++			spin_lock(&hb2->lock);
++			BUG_ON(&hb2->lock != q.lock_ptr);
+ 			ret = fixup_pi_state_owner(uaddr2, &q, current);
+-			spin_unlock(q.lock_ptr);
++			spin_unlock(&hb2->lock);
+ 		}
+ 	} else {
+ 		/*
+@@ -2348,7 +2394,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ 		ret = rt_mutex_finish_proxy_lock(pi_mutex, to, &rt_waiter, 1);
+ 		debug_rt_mutex_free_waiter(&rt_waiter);
+ 
+-		spin_lock(q.lock_ptr);
++		spin_lock(&hb2->lock);
++		BUG_ON(&hb2->lock != q.lock_ptr);
+ 		/*
+ 		 * Fixup the pi_state owner and possibly acquire the lock if we
+ 		 * haven't already.
+diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
+index 723fd3a..13b3c92 100644
+--- a/kernel/rtmutex.c
++++ b/kernel/rtmutex.c
+@@ -67,6 +67,11 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock)
+ 		clear_rt_mutex_waiters(lock);
+ }
+ 
++static int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter)
++{
++	return waiter && waiter != PI_WAKEUP_INPROGRESS;
++}
++
+ /*
+  * We can speed up the acquire/release, if the architecture
+  * supports cmpxchg and if there's no debugging state to be set up
+@@ -196,7 +201,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
+ 	 * reached or the state of the chain has changed while we
+ 	 * dropped the locks.
+ 	 */
+-	if (!waiter)
++	if (!rt_mutex_real_waiter(waiter))
+ 		goto out_unlock_pi;
+ 
+ 	/*
+@@ -399,6 +404,23 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
+ 	int chain_walk = 0, res;
+ 
+ 	raw_spin_lock_irqsave(&task->pi_lock, flags);
++
++	/*
++	 * In the case of futex requeue PI, this will be a proxy
++	 * lock. The task will wake unaware that it is enqueued on
++	 * this lock. Avoid blocking on two locks and corrupting
++	 * pi_blocked_on via the PI_WAKEUP_INPROGRESS
++	 * flag. futex_wait_requeue_pi() sets this when it wakes up
++	 * before requeue (due to a signal or timeout). Do not enqueue
++	 * the task if PI_WAKEUP_INPROGRESS is set.
++	 */
++	if (task != current && task->pi_blocked_on == PI_WAKEUP_INPROGRESS) {
++		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
++		return -EAGAIN;
++	}
++
++	BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on));
++
+ 	__rt_mutex_adjust_prio(task);
+ 	waiter->task = task;
+ 	waiter->lock = lock;
+@@ -423,7 +445,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
+ 		plist_add(&waiter->pi_list_entry, &owner->pi_waiters);
+ 
+ 		__rt_mutex_adjust_prio(owner);
+-		if (owner->pi_blocked_on)
++		if (rt_mutex_real_waiter(owner->pi_blocked_on))
+ 			chain_walk = 1;
+ 		raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
+ 	}
+@@ -517,7 +539,7 @@ static void remove_waiter(struct rt_mutex *lock,
+ 		}
+ 		__rt_mutex_adjust_prio(owner);
+ 
+-		if (owner->pi_blocked_on)
++		if (rt_mutex_real_waiter(owner->pi_blocked_on))
+ 			chain_walk = 1;
+ 
+ 		raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
+@@ -551,7 +573,8 @@ void rt_mutex_adjust_pi(struct task_struct *task)
+ 	raw_spin_lock_irqsave(&task->pi_lock, flags);
+ 
+ 	waiter = task->pi_blocked_on;
+-	if (!waiter || waiter->list_entry.prio == task->prio) {
++	if (!rt_mutex_real_waiter(waiter) ||
++	    waiter->list_entry.prio == task->prio) {
+ 		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+ 		return;
+ 	}
+diff --git a/kernel/rtmutex_common.h b/kernel/rtmutex_common.h
+index 53a66c8..b43d832 100644
+--- a/kernel/rtmutex_common.h
++++ b/kernel/rtmutex_common.h
+@@ -103,6 +103,8 @@ static inline struct task_struct *rt_mutex_owner(struct rt_mutex *lock)
+ /*
+  * PI-futex support (proxy locking functions, etc.):
+  */
++#define PI_WAKEUP_INPROGRESS	((struct rt_mutex_waiter *) 1)
++
+ extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock);
+ extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
+ 				       struct task_struct *proxy_owner);
+-- 
+1.7.10
+

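The patch above introduces a sentinel value, PI_WAKEUP_INPROGRESS, stored in pi_blocked_on so that code walking the PI chain can tell a real waiter from an in-flight wakeup marker. A minimal userspace model of that test (the struct is only forward-declared here; this mirrors the patch but is not kernel code):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of rt_mutex_real_waiter() from the patch above.
 * The kernel casts the integer 1 to a waiter pointer to get a value
 * that can never be a valid allocation; a pi_blocked_on pointer is a
 * "real" waiter only if it is non-NULL and not that sentinel. */
struct rt_mutex_waiter;

#define PI_WAKEUP_INPROGRESS	((struct rt_mutex_waiter *) 1)

static int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter)
{
	return waiter && waiter != PI_WAKEUP_INPROGRESS;
}
```

Every pi_blocked_on check in the chain walk (rt_mutex_adjust_prio_chain, task_blocks_on_rt_mutex, remove_waiter) is converted to this predicate so the sentinel is never dereferenced.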
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0194-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0194-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0194-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0194-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch)
@@ -0,0 +1,119 @@
+From ad4305e2e682122728ecb257f751fcb0b3e1b836 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Tue, 10 Apr 2012 14:34:13 -0400
+Subject: [PATCH 194/271] futex: Fix bug on when a requeued RT task times out
+
+Requeue with timeout causes a bug with PREEMPT_RT_FULL.
+
+The bug comes from a timed out condition.
+
+	TASK 1				TASK 2
+	------				------
+    futex_wait_requeue_pi()
+	futex_wait_queue_me()
+	<timed out>
+
+					double_lock_hb();
+
+	raw_spin_lock(pi_lock);
+	if (current->pi_blocked_on) {
+	} else {
+	    current->pi_blocked_on = PI_WAKE_INPROGRESS;
+	    raw_spin_unlock(pi_lock);
+	    spin_lock(hb->lock); <-- blocked!
+
+					plist_for_each_entry_safe(this) {
+					    rt_mutex_start_proxy_lock();
+						task_blocks_on_rt_mutex();
+						BUG_ON(task->pi_blocked_on)!!!!
+
+The BUG_ON() actually has a check for PI_WAKE_INPROGRESS, but the
+problem is that, after TASK 1 sets PI_WAKE_INPROGRESS, it then tries to
+grab the hb->lock, which it fails to do. As the hb->lock is a mutex,
+the task will block and set "pi_blocked_on" to the hb->lock.
+
+When TASK 2 goes to requeue it, the check for PI_WAKE_INPROGRESS fails
+because TASK 1's pi_blocked_on is no longer set to that, but instead,
+set to the hb->lock.
+
+The fix:
+
+When calling rt_mutex_start_proxy_lock(), a check is made to see
+if the proxy task's pi_blocked_on is set. If so, exit out early.
+Otherwise set it to a new flag PI_REQUEUE_INPROGRESS, which notifies
+the proxy task that it is being requeued, and will handle things
+appropriately.
+
+Cc: stable-rt at vger.kernel.org
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/rtmutex.c        |   32 +++++++++++++++++++++++++++++++-
+ kernel/rtmutex_common.h |    1 +
+ 2 files changed, 32 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
+index 13b3c92..f8dcb7b 100644
+--- a/kernel/rtmutex.c
++++ b/kernel/rtmutex.c
+@@ -69,7 +69,8 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock)
+ 
+ static int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter)
+ {
+-	return waiter && waiter != PI_WAKEUP_INPROGRESS;
++	return waiter && waiter != PI_WAKEUP_INPROGRESS &&
++		waiter != PI_REQUEUE_INPROGRESS;
+ }
+ 
+ /*
+@@ -1010,6 +1011,35 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
+ 		return 1;
+ 	}
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++	/*
++	 * In PREEMPT_RT there's an added race.
++	 * If the task, that we are about to requeue, times out,
++	 * it can set the PI_WAKEUP_INPROGRESS. This tells the requeue
++	 * to skip this task. But right after the task sets
++	 * its pi_blocked_on to PI_WAKEUP_INPROGRESS it can then
++	 * block on the spin_lock(&hb->lock), which in RT is an rtmutex.
++	 * This will replace the PI_WAKEUP_INPROGRESS with the actual
++	 * lock that it blocks on. We *must not* place this task
++	 * on this proxy lock in that case.
++	 *
++	 * To prevent this race, we first take the task's pi_lock
++	 * and check if it has updated its pi_blocked_on. If it has,
++	 * we assume that it woke up and we return -EAGAIN.
++	 * Otherwise, we set the task's pi_blocked_on to
++	 * PI_REQUEUE_INPROGRESS, so that if the task is waking up
++	 * it will know that we are in the process of requeuing it.
++	 */
++	raw_spin_lock(&task->pi_lock);
++	if (task->pi_blocked_on) {
++		raw_spin_unlock(&task->pi_lock);
++		raw_spin_unlock(&lock->wait_lock);
++		return -EAGAIN;
++	}
++	task->pi_blocked_on = PI_REQUEUE_INPROGRESS;
++	raw_spin_unlock(&task->pi_lock);
++#endif
++
+ 	ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock);
+ 
+ 	if (ret && !rt_mutex_owner(lock)) {
+diff --git a/kernel/rtmutex_common.h b/kernel/rtmutex_common.h
+index b43d832..47290ec 100644
+--- a/kernel/rtmutex_common.h
++++ b/kernel/rtmutex_common.h
+@@ -104,6 +104,7 @@ static inline struct task_struct *rt_mutex_owner(struct rt_mutex *lock)
+  * PI-futex support (proxy locking functions, etc.):
+  */
+ #define PI_WAKEUP_INPROGRESS	((struct rt_mutex_waiter *) 1)
++#define PI_REQUEUE_INPROGRESS	((struct rt_mutex_waiter *) 2)
+ 
+ extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock);
+ extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
+-- 
+1.7.10
+

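Patch 0194 closes the race by making both sides claim pi_blocked_on under the task's pi_lock: whoever gets there first plants its marker (PI_WAKEUP_INPROGRESS from the waking task, PI_REQUEUE_INPROGRESS from the requeuer), and the loser backs off with -EAGAIN. A single-threaded userspace sketch of that handshake, with an invented task_model struct standing in for task_struct (the real code additionally holds task->pi_lock around the test-and-set):

```c
#include <assert.h>
#include <stddef.h>

struct rt_mutex_waiter;

#define PI_WAKEUP_INPROGRESS	((struct rt_mutex_waiter *) 1)
#define PI_REQUEUE_INPROGRESS	((struct rt_mutex_waiter *) 2)

/* Hypothetical stand-in for the relevant task_struct fields. */
struct task_model {
	struct rt_mutex_waiter *pi_blocked_on;
};

/* Returns 1 if this side won the race and planted its marker,
 * 0 if the other side got there first (kernel: return -EAGAIN). */
static int try_mark(struct task_model *t, struct rt_mutex_waiter *marker)
{
	if (t->pi_blocked_on != NULL)
		return 0;
	t->pi_blocked_on = marker;
	return 1;
}
```

In the kernel, the requeue side performs this check in rt_mutex_start_proxy_lock() under CONFIG_PREEMPT_RT_FULL, and rt_mutex_real_waiter() is extended to also reject PI_REQUEUE_INPROGRESS.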
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0195-rt-mutex-add-sleeping-spinlocks-support.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0195-rt-mutex-add-sleeping-spinlocks-support.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0195-rt-mutex-add-sleeping-spinlocks-support.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0195-rt-mutex-add-sleeping-spinlocks-support.patch.patch)
@@ -0,0 +1,625 @@
+From 0bc35e04a6aa0d5064dffb36c9781a6b71c02195 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 10 Jun 2011 11:21:25 +0200
+Subject: [PATCH 195/271] rt-mutex-add-sleeping-spinlocks-support.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rtmutex.h |   35 ++++-
+ kernel/futex.c          |    3 +-
+ kernel/rtmutex.c        |  382 ++++++++++++++++++++++++++++++++++++++++++++---
+ kernel/rtmutex_common.h |    9 ++
+ 4 files changed, 403 insertions(+), 26 deletions(-)
+
+diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
+index 3561eb2..928d93e 100644
+--- a/include/linux/rtmutex.h
++++ b/include/linux/rtmutex.h
+@@ -29,9 +29,10 @@ struct rt_mutex {
+ 	raw_spinlock_t		wait_lock;
+ 	struct plist_head	wait_list;
+ 	struct task_struct	*owner;
+-#ifdef CONFIG_DEBUG_RT_MUTEXES
+ 	int			save_state;
+-	const char 		*name, *file;
++#ifdef CONFIG_DEBUG_RT_MUTEXES
++	const char		*file;
++	const char		*name;
+ 	int			line;
+ 	void			*magic;
+ #endif
+@@ -56,19 +57,39 @@ struct hrtimer_sleeper;
+ #ifdef CONFIG_DEBUG_RT_MUTEXES
+ # define __DEBUG_RT_MUTEX_INITIALIZER(mutexname) \
+ 	, .name = #mutexname, .file = __FILE__, .line = __LINE__
+-# define rt_mutex_init(mutex)			__rt_mutex_init(mutex, __func__)
++
++# define rt_mutex_init(mutex)					\
++	do {							\
++		raw_spin_lock_init(&(mutex)->wait_lock);	\
++		__rt_mutex_init(mutex, #mutex);			\
++	} while (0)
++
+  extern void rt_mutex_debug_task_free(struct task_struct *tsk);
+ #else
+ # define __DEBUG_RT_MUTEX_INITIALIZER(mutexname)
+-# define rt_mutex_init(mutex)			__rt_mutex_init(mutex, NULL)
++
++# define rt_mutex_init(mutex)					\
++	do {							\
++		raw_spin_lock_init(&(mutex)->wait_lock);	\
++		__rt_mutex_init(mutex, #mutex);			\
++	} while (0)
++
+ # define rt_mutex_debug_task_free(t)			do { } while (0)
+ #endif
+ 
+-#define __RT_MUTEX_INITIALIZER(mutexname) \
+-	{ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
++#define __RT_MUTEX_INITIALIZER_PLAIN(mutexname) \
++	.wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
+ 	, .wait_list = PLIST_HEAD_INIT(mutexname.wait_list) \
+ 	, .owner = NULL \
+-	__DEBUG_RT_MUTEX_INITIALIZER(mutexname)}
++	__DEBUG_RT_MUTEX_INITIALIZER(mutexname)
++
++
++#define __RT_MUTEX_INITIALIZER(mutexname) \
++	{ __RT_MUTEX_INITIALIZER_PLAIN(mutexname) }
++
++#define __RT_MUTEX_INITIALIZER_SAVE_STATE(mutexname) \
++	{ __RT_MUTEX_INITIALIZER_PLAIN(mutexname)    \
++	  , .save_state = 1 }
+ 
+ #define DEFINE_RT_MUTEX(mutexname) \
+ 	struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname)
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 840fcea..2771a63 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -2299,8 +2299,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ 	 * The waiter is allocated on our stack, manipulated by the requeue
+ 	 * code while we sleep on uaddr.
+ 	 */
+-	debug_rt_mutex_init_waiter(&rt_waiter);
+-	rt_waiter.task = NULL;
++	rt_mutex_init_waiter(&rt_waiter, false);
+ 
+ 	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, VERIFY_WRITE);
+ 	if (unlikely(ret != 0))
+diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
+index f8dcb7b..a7723d2 100644
+--- a/kernel/rtmutex.c
++++ b/kernel/rtmutex.c
+@@ -8,6 +8,12 @@
+  *  Copyright (C) 2005 Kihon Technologies Inc., Steven Rostedt
+  *  Copyright (C) 2006 Esben Nielsen
+  *
++ * Adaptive Spinlocks:
++ *  Copyright (C) 2008 Novell, Inc., Gregory Haskins, Sven Dietrich,
++ *                                   and Peter Morreale,
++ * Adaptive Spinlocks simplification:
++ *  Copyright (C) 2008 Red Hat, Inc., Steven Rostedt <srostedt at redhat.com>
++ *
+  *  See Documentation/rt-mutex-design.txt for details.
+  */
+ #include <linux/spinlock.h>
+@@ -96,6 +102,12 @@ static inline void mark_rt_mutex_waiters(struct rt_mutex *lock)
+ }
+ #endif
+ 
++static inline void init_lists(struct rt_mutex *lock)
++{
++	if (unlikely(!lock->wait_list.node_list.prev))
++		plist_head_init(&lock->wait_list);
++}
++
+ /*
+  * Calculate task priority from the waiter list priority
+  *
+@@ -142,6 +154,14 @@ static void rt_mutex_adjust_prio(struct task_struct *task)
+ 	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+ }
+ 
++static void rt_mutex_wake_waiter(struct rt_mutex_waiter *waiter)
++{
++	if (waiter->savestate)
++		wake_up_lock_sleeper(waiter->task);
++	else
++		wake_up_process(waiter->task);
++}
++
+ /*
+  * Max number of times we'll walk the boosting chain:
+  */
+@@ -253,13 +273,15 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
+ 	/* Release the task */
+ 	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+ 	if (!rt_mutex_owner(lock)) {
++		struct rt_mutex_waiter *lock_top_waiter;
++
+ 		/*
+ 		 * If the requeue above changed the top waiter, then we need
+ 		 * to wake the new top waiter up to try to get the lock.
+ 		 */
+-
+-		if (top_waiter != rt_mutex_top_waiter(lock))
+-			wake_up_process(rt_mutex_top_waiter(lock)->task);
++		lock_top_waiter = rt_mutex_top_waiter(lock);
++		if (top_waiter != lock_top_waiter)
++			rt_mutex_wake_waiter(lock_top_waiter);
+ 		raw_spin_unlock(&lock->wait_lock);
+ 		goto out_put_task;
+ 	}
+@@ -304,6 +326,25 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
+ 	return ret;
+ }
+ 
++
++#define STEAL_NORMAL  0
++#define STEAL_LATERAL 1
++
++/*
++ * Note that RT tasks are excluded from lateral-steals to prevent the
++ * introduction of an unbounded latency
++ */
++static inline int lock_is_stealable(struct task_struct *task,
++				    struct task_struct *pendowner, int mode)
++{
++    if (mode == STEAL_NORMAL || rt_task(task)) {
++	    if (task->prio >= pendowner->prio)
++		    return 0;
++    } else if (task->prio > pendowner->prio)
++	    return 0;
++    return 1;
++}
++
+ /*
+  * Try to take an rt-mutex
+  *
+@@ -313,8 +354,9 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
+  * @task:   the task which wants to acquire the lock
+  * @waiter: the waiter that is queued to the lock's wait list. (could be NULL)
+  */
+-static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
+-		struct rt_mutex_waiter *waiter)
++static int
++__try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
++		       struct rt_mutex_waiter *waiter, int mode)
+ {
+ 	/*
+ 	 * We have to be careful here if the atomic speedups are
+@@ -347,12 +389,14 @@ static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
+ 	 * 3) it is top waiter
+ 	 */
+ 	if (rt_mutex_has_waiters(lock)) {
+-		if (task->prio >= rt_mutex_top_waiter(lock)->list_entry.prio) {
+-			if (!waiter || waiter != rt_mutex_top_waiter(lock))
+-				return 0;
+-		}
++		struct task_struct *pown = rt_mutex_top_waiter(lock)->task;
++
++		if (task != pown && !lock_is_stealable(task, pown, mode))
++			return 0;
+ 	}
+ 
++	/* We got the lock. */
++
+ 	if (waiter || rt_mutex_has_waiters(lock)) {
+ 		unsigned long flags;
+ 		struct rt_mutex_waiter *top;
+@@ -377,7 +421,6 @@ static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
+ 		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+ 	}
+ 
+-	/* We got the lock. */
+ 	debug_rt_mutex_lock(lock);
+ 
+ 	rt_mutex_set_owner(lock, task);
+@@ -387,6 +430,13 @@ static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
+ 	return 1;
+ }
+ 
++static inline int
++try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
++		     struct rt_mutex_waiter *waiter)
++{
++	return __try_to_take_rt_mutex(lock, task, waiter, STEAL_NORMAL);
++}
++
+ /*
+  * Task blocks on lock.
+  *
+@@ -501,7 +551,7 @@ static void wakeup_next_waiter(struct rt_mutex *lock)
+ 
+ 	raw_spin_unlock_irqrestore(&current->pi_lock, flags);
+ 
+-	wake_up_process(waiter->task);
++	rt_mutex_wake_waiter(waiter);
+ }
+ 
+ /*
+@@ -580,18 +630,315 @@ void rt_mutex_adjust_pi(struct task_struct *task)
+ 		return;
+ 	}
+ 
+-	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+-
+ 	/* gets dropped in rt_mutex_adjust_prio_chain()! */
+ 	get_task_struct(task);
++	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+ 	rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task);
+ }
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++/*
++ * preemptible spin_lock functions:
++ */
++static inline void rt_spin_lock_fastlock(struct rt_mutex *lock,
++					 void  (*slowfn)(struct rt_mutex *lock))
++{
++	might_sleep();
++
++	if (likely(rt_mutex_cmpxchg(lock, NULL, current)))
++		rt_mutex_deadlock_account_lock(lock, current);
++	else
++		slowfn(lock);
++}
++
++static inline void rt_spin_lock_fastunlock(struct rt_mutex *lock,
++					   void  (*slowfn)(struct rt_mutex *lock))
++{
++	if (likely(rt_mutex_cmpxchg(lock, current, NULL)))
++		rt_mutex_deadlock_account_unlock(current);
++	else
++		slowfn(lock);
++}
++
++#ifdef CONFIG_SMP
++/*
++ * Note that owner is a speculative pointer and dereferencing relies
++ * on rcu_read_lock() and the check against the lock owner.
++ */
++static int adaptive_wait(struct rt_mutex *lock,
++			 struct task_struct *owner)
++{
++	int res = 0;
++
++	rcu_read_lock();
++	for (;;) {
++		if (owner != rt_mutex_owner(lock))
++			break;
++		/*
++		 * Ensure that owner->on_cpu is dereferenced _after_
++		 * checking the above to be valid.
++		 */
++		barrier();
++		if (!owner->on_cpu) {
++			res = 1;
++			break;
++		}
++		cpu_relax();
++	}
++	rcu_read_unlock();
++	return res;
++}
++#else
++static int adaptive_wait(struct rt_mutex *lock,
++			 struct task_struct *orig_owner)
++{
++	return 1;
++}
++#endif
++
++# define pi_lock(lock)			raw_spin_lock_irq(lock)
++# define pi_unlock(lock)		raw_spin_unlock_irq(lock)
++
++/*
++ * Slow path lock function spin_lock style: this variant is very
++ * careful not to miss any non-lock wakeups.
++ *
++ * We store the current state under p->pi_lock in p->saved_state and
++ * the try_to_wake_up() code handles this accordingly.
++ */
++static void  noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock)
++{
++	struct task_struct *lock_owner, *self = current;
++	struct rt_mutex_waiter waiter, *top_waiter;
++	int ret;
++
++	rt_mutex_init_waiter(&waiter, true);
++
++	raw_spin_lock(&lock->wait_lock);
++	init_lists(lock);
++
++	if (__try_to_take_rt_mutex(lock, self, NULL, STEAL_LATERAL)) {
++		raw_spin_unlock(&lock->wait_lock);
++		return;
++	}
++
++	BUG_ON(rt_mutex_owner(lock) == self);
++
++	/*
++	 * We save whatever state the task is in and we'll restore it
++	 * after acquiring the lock taking real wakeups into account
++	 * as well. We are serialized via pi_lock against wakeups. See
++	 * try_to_wake_up().
++	 */
++	pi_lock(&self->pi_lock);
++	self->saved_state = self->state;
++	__set_current_state(TASK_UNINTERRUPTIBLE);
++	pi_unlock(&self->pi_lock);
++
++	ret = task_blocks_on_rt_mutex(lock, &waiter, self, 0);
++	BUG_ON(ret);
++
++	for (;;) {
++		/* Try to acquire the lock again. */
++		if (__try_to_take_rt_mutex(lock, self, &waiter, STEAL_LATERAL))
++			break;
++
++		top_waiter = rt_mutex_top_waiter(lock);
++		lock_owner = rt_mutex_owner(lock);
++
++		raw_spin_unlock(&lock->wait_lock);
++
++		debug_rt_mutex_print_deadlock(&waiter);
++
++		if (top_waiter != &waiter || adaptive_wait(lock, lock_owner))
++			schedule_rt_mutex(lock);
++
++		raw_spin_lock(&lock->wait_lock);
++
++		pi_lock(&self->pi_lock);
++		__set_current_state(TASK_UNINTERRUPTIBLE);
++		pi_unlock(&self->pi_lock);
++	}
++
++	/*
++	 * Restore the task state to current->saved_state. We set it
++	 * to the original state above and the try_to_wake_up() code
++	 * has possibly updated it when a real (non-rtmutex) wakeup
++	 * happened while we were blocked. Clear saved_state so
++	 * try_to_wakeup() does not get confused.
++	 */
++	pi_lock(&self->pi_lock);
++	__set_current_state(self->saved_state);
++	self->saved_state = TASK_RUNNING;
++	pi_unlock(&self->pi_lock);
++
++	/*
++	 * try_to_take_rt_mutex() sets the waiter bit
++	 * unconditionally. We might have to fix that up:
++	 */
++	fixup_rt_mutex_waiters(lock);
++
++	BUG_ON(rt_mutex_has_waiters(lock) && &waiter == rt_mutex_top_waiter(lock));
++	BUG_ON(!plist_node_empty(&waiter.list_entry));
++
++	raw_spin_unlock(&lock->wait_lock);
++
++	debug_rt_mutex_free_waiter(&waiter);
++}
++
++/*
++ * Slow path to release a rt_mutex spin_lock style
++ */
++static void  noinline __sched rt_spin_lock_slowunlock(struct rt_mutex *lock)
++{
++	raw_spin_lock(&lock->wait_lock);
++
++	debug_rt_mutex_unlock(lock);
++
++	rt_mutex_deadlock_account_unlock(current);
++
++	if (!rt_mutex_has_waiters(lock)) {
++		lock->owner = NULL;
++		raw_spin_unlock(&lock->wait_lock);
++		return;
++	}
++
++	wakeup_next_waiter(lock);
++
++	raw_spin_unlock(&lock->wait_lock);
++
++	/* Undo pi boosting when necessary */
++	rt_mutex_adjust_prio(current);
++}
++
++void __lockfunc rt_spin_lock(spinlock_t *lock)
++{
++	rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock);
++	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++}
++EXPORT_SYMBOL(rt_spin_lock);
++
++void __lockfunc __rt_spin_lock(struct rt_mutex *lock)
++{
++	rt_spin_lock_fastlock(lock, rt_spin_lock_slowlock);
++}
++EXPORT_SYMBOL(__rt_spin_lock);
++
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass)
++{
++	rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock);
++	spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
++}
++EXPORT_SYMBOL(rt_spin_lock_nested);
++#endif
++
++void __lockfunc rt_spin_unlock(spinlock_t *lock)
++{
++	/* NOTE: we always pass in '1' for nested, for simplicity */
++	spin_release(&lock->dep_map, 1, _RET_IP_);
++	rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock);
++}
++EXPORT_SYMBOL(rt_spin_unlock);
++
++void __lockfunc __rt_spin_unlock(struct rt_mutex *lock)
++{
++	rt_spin_lock_fastunlock(lock, rt_spin_lock_slowunlock);
++}
++EXPORT_SYMBOL(__rt_spin_unlock);
++
++/*
++ * Wait for the lock to get unlocked: instead of polling for an unlock
++ * (like raw spinlocks do), we lock and unlock, to force the kernel to
++ * schedule if there's contention:
++ */
++void __lockfunc rt_spin_unlock_wait(spinlock_t *lock)
++{
++	spin_lock(lock);
++	spin_unlock(lock);
++}
++EXPORT_SYMBOL(rt_spin_unlock_wait);
++
++int __lockfunc rt_spin_trylock(spinlock_t *lock)
++{
++	int ret;
++
++	migrate_disable();
++	ret = rt_mutex_trylock(&lock->lock);
++	if (ret)
++		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
++	else
++		migrate_enable();
++
++	return ret;
++}
++EXPORT_SYMBOL(rt_spin_trylock);
++
++int __lockfunc rt_spin_trylock_bh(spinlock_t *lock)
++{
++	int ret;
++
++	local_bh_disable();
++	ret = rt_mutex_trylock(&lock->lock);
++	if (ret) {
++		migrate_disable();
++		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
++	} else
++		local_bh_enable();
++	return ret;
++}
++EXPORT_SYMBOL(rt_spin_trylock_bh);
++
++int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
++{
++	int ret;
++
++	*flags = 0;
++	migrate_disable();
++	ret = rt_mutex_trylock(&lock->lock);
++	if (ret)
++		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
++	else
++		migrate_enable();
++	return ret;
++}
++EXPORT_SYMBOL(rt_spin_trylock_irqsave);
++
++int atomic_dec_and_spin_lock(atomic_t *atomic, spinlock_t *lock)
++{
++	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
++	if (atomic_add_unless(atomic, -1, 1))
++		return 0;
++	migrate_disable();
++	rt_spin_lock(lock);
++	if (atomic_dec_and_test(atomic))
++		return 1;
++	rt_spin_unlock(lock);
++	migrate_enable();
++	return 0;
++}
++EXPORT_SYMBOL(atomic_dec_and_spin_lock);
++
++void
++__rt_spin_lock_init(spinlock_t *lock, char *name, struct lock_class_key *key)
++{
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++	/*
++	 * Make sure we are not reinitializing a held lock:
++	 */
++	debug_check_no_locks_freed((void *)lock, sizeof(*lock));
++	lockdep_init_map(&lock->dep_map, name, key, 0);
++#endif
++}
++EXPORT_SYMBOL(__rt_spin_lock_init);
++
++#endif /* PREEMPT_RT_FULL */
++
+ /**
+  * __rt_mutex_slowlock() - Perform the wait-wake-try-to-take loop
+  * @lock:		 the rt_mutex to take
+  * @state:		 the state the task should block in (TASK_INTERRUPTIBLE
+- * 			 or TASK_UNINTERRUPTIBLE)
++ *			 or TASK_UNINTERRUPTIBLE)
+  * @timeout:		 the pre-initialized and started timer, or NULL for none
+  * @waiter:		 the pre-initialized rt_mutex_waiter
+  *
+@@ -655,9 +1002,10 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state,
+ 	struct rt_mutex_waiter waiter;
+ 	int ret = 0;
+ 
+-	debug_rt_mutex_init_waiter(&waiter);
++	rt_mutex_init_waiter(&waiter, false);
+ 
+ 	raw_spin_lock(&lock->wait_lock);
++	init_lists(lock);
+ 
+ 	/* Try to acquire the lock again: */
+ 	if (try_to_take_rt_mutex(lock, current, NULL)) {
+@@ -710,6 +1058,7 @@ rt_mutex_slowtrylock(struct rt_mutex *lock)
+ 	int ret = 0;
+ 
+ 	raw_spin_lock(&lock->wait_lock);
++	init_lists(lock);
+ 
+ 	if (likely(rt_mutex_owner(lock) != current)) {
+ 
+@@ -942,7 +1291,6 @@ EXPORT_SYMBOL_GPL(rt_mutex_destroy);
+ void __rt_mutex_init(struct rt_mutex *lock, const char *name)
+ {
+ 	lock->owner = NULL;
+-	raw_spin_lock_init(&lock->wait_lock);
+ 	plist_head_init(&lock->wait_list);
+ 
+ 	debug_rt_mutex_init(lock, name);
+@@ -962,7 +1310,7 @@ EXPORT_SYMBOL_GPL(__rt_mutex_init);
+ void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
+ 				struct task_struct *proxy_owner)
+ {
+-	__rt_mutex_init(lock, NULL);
++	rt_mutex_init(lock);
+ 	debug_rt_mutex_proxy_lock(lock, proxy_owner);
+ 	rt_mutex_set_owner(lock, proxy_owner);
+ 	rt_mutex_deadlock_account_lock(lock, proxy_owner);
+diff --git a/kernel/rtmutex_common.h b/kernel/rtmutex_common.h
+index 47290ec..6ec3dc1 100644
+--- a/kernel/rtmutex_common.h
++++ b/kernel/rtmutex_common.h
+@@ -49,6 +49,7 @@ struct rt_mutex_waiter {
+ 	struct plist_node	pi_list_entry;
+ 	struct task_struct	*task;
+ 	struct rt_mutex		*lock;
++	bool			savestate;
+ #ifdef CONFIG_DEBUG_RT_MUTEXES
+ 	unsigned long		ip;
+ 	struct pid		*deadlock_task_pid;
+@@ -126,4 +127,12 @@ extern int rt_mutex_finish_proxy_lock(struct rt_mutex *lock,
+ # include "rtmutex.h"
+ #endif
+ 
++static inline void
++rt_mutex_init_waiter(struct rt_mutex_waiter *waiter, bool savestate)
++{
++	debug_rt_mutex_init_waiter(waiter);
++	waiter->task = NULL;
++	waiter->savestate = savestate;
++}
++
+ #endif
+-- 
+1.7.10
+

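The sleeping-spinlocks patch above adds "lateral stealing": in STEAL_LATERAL mode a task of equal priority may take the lock ahead of the top waiter, but RT tasks are excluded so their latency stays bounded. A userspace model of lock_is_stealable(), with task pointers replaced by plain prio/is_rt parameters for illustration (lower numeric prio means higher priority, matching the kernel's plist ordering):

```c
#include <assert.h>

#define STEAL_NORMAL	0
#define STEAL_LATERAL	1

/* Model of lock_is_stealable() from the patch above. In STEAL_NORMAL
 * mode (and always for RT tasks) a strictly better priority is
 * required; in STEAL_LATERAL mode an equal priority also suffices. */
static int lock_is_stealable(int task_prio, int task_is_rt,
			     int pendowner_prio, int mode)
{
	if (mode == STEAL_NORMAL || task_is_rt) {
		if (task_prio >= pendowner_prio)
			return 0;
	} else if (task_prio > pendowner_prio) {
		return 0;
	}
	return 1;
}
```

The slowpath of the RT spinlock then calls __try_to_take_rt_mutex(..., STEAL_LATERAL), while ordinary rt_mutex acquisition keeps STEAL_NORMAL semantics.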
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0196-spinlock-types-separate-raw.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0196-spinlock-types-separate-raw.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0196-spinlock-types-separate-raw.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0196-spinlock-types-separate-raw.patch.patch)
@@ -0,0 +1,220 @@
+From 25ab9c9e6f692045af9662683a54d6f8d58eb9ec Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 29 Jun 2011 19:34:01 +0200
+Subject: [PATCH 196/271] spinlock-types-separate-raw.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rwlock_types.h        |    4 ++
+ include/linux/spinlock_types.h      |   74 +----------------------------------
+ include/linux/spinlock_types_nort.h |   33 ++++++++++++++++
+ include/linux/spinlock_types_raw.h  |   56 ++++++++++++++++++++++++++
+ 4 files changed, 95 insertions(+), 72 deletions(-)
+ create mode 100644 include/linux/spinlock_types_nort.h
+ create mode 100644 include/linux/spinlock_types_raw.h
+
+diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h
+index cc0072e..5317cd9 100644
+--- a/include/linux/rwlock_types.h
++++ b/include/linux/rwlock_types.h
+@@ -1,6 +1,10 @@
+ #ifndef __LINUX_RWLOCK_TYPES_H
+ #define __LINUX_RWLOCK_TYPES_H
+ 
++#if !defined(__LINUX_SPINLOCK_TYPES_H)
++# error "Do not include directly, include spinlock_types.h"
++#endif
++
+ /*
+  * include/linux/rwlock_types.h - generic rwlock type definitions
+  *				  and initializers
+diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h
+index 73548eb..5c8664d 100644
+--- a/include/linux/spinlock_types.h
++++ b/include/linux/spinlock_types.h
+@@ -9,79 +9,9 @@
+  * Released under the General Public License (GPL).
+  */
+ 
+-#if defined(CONFIG_SMP)
+-# include <asm/spinlock_types.h>
+-#else
+-# include <linux/spinlock_types_up.h>
+-#endif
++#include <linux/spinlock_types_raw.h>
+ 
+-#include <linux/lockdep.h>
+-
+-typedef struct raw_spinlock {
+-	arch_spinlock_t raw_lock;
+-#ifdef CONFIG_GENERIC_LOCKBREAK
+-	unsigned int break_lock;
+-#endif
+-#ifdef CONFIG_DEBUG_SPINLOCK
+-	unsigned int magic, owner_cpu;
+-	void *owner;
+-#endif
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-	struct lockdep_map dep_map;
+-#endif
+-} raw_spinlock_t;
+-
+-#define SPINLOCK_MAGIC		0xdead4ead
+-
+-#define SPINLOCK_OWNER_INIT	((void *)-1L)
+-
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define SPIN_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
+-#else
+-# define SPIN_DEP_MAP_INIT(lockname)
+-#endif
+-
+-#ifdef CONFIG_DEBUG_SPINLOCK
+-# define SPIN_DEBUG_INIT(lockname)		\
+-	.magic = SPINLOCK_MAGIC,		\
+-	.owner_cpu = -1,			\
+-	.owner = SPINLOCK_OWNER_INIT,
+-#else
+-# define SPIN_DEBUG_INIT(lockname)
+-#endif
+-
+-#define __RAW_SPIN_LOCK_INITIALIZER(lockname)	\
+-	{					\
+-	.raw_lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
+-	SPIN_DEBUG_INIT(lockname)		\
+-	SPIN_DEP_MAP_INIT(lockname) }
+-
+-#define __RAW_SPIN_LOCK_UNLOCKED(lockname)	\
+-	(raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
+-
+-#define DEFINE_RAW_SPINLOCK(x)	raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x)
+-
+-typedef struct spinlock {
+-	union {
+-		struct raw_spinlock rlock;
+-
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
+-		struct {
+-			u8 __padding[LOCK_PADSIZE];
+-			struct lockdep_map dep_map;
+-		};
+-#endif
+-	};
+-} spinlock_t;
+-
+-#define __SPIN_LOCK_INITIALIZER(lockname) \
+-	{ { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } }
+-
+-#define __SPIN_LOCK_UNLOCKED(lockname) \
+-	(spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname)
+-
+-#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
++#include <linux/spinlock_types_nort.h>
+ 
+ #include <linux/rwlock_types.h>
+ 
+diff --git a/include/linux/spinlock_types_nort.h b/include/linux/spinlock_types_nort.h
+new file mode 100644
+index 0000000..f1dac1f
+--- /dev/null
++++ b/include/linux/spinlock_types_nort.h
+@@ -0,0 +1,33 @@
++#ifndef __LINUX_SPINLOCK_TYPES_NORT_H
++#define __LINUX_SPINLOCK_TYPES_NORT_H
++
++#ifndef __LINUX_SPINLOCK_TYPES_H
++#error "Do not include directly. Include spinlock_types.h instead"
++#endif
++
++/*
++ * The non RT version maps spinlocks to raw_spinlocks
++ */
++typedef struct spinlock {
++	union {
++		struct raw_spinlock rlock;
++
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
++		struct {
++			u8 __padding[LOCK_PADSIZE];
++			struct lockdep_map dep_map;
++		};
++#endif
++	};
++} spinlock_t;
++
++#define __SPIN_LOCK_INITIALIZER(lockname) \
++	{ { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } }
++
++#define __SPIN_LOCK_UNLOCKED(lockname) \
++	(spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname)
++
++#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
++
++#endif
+diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h
+new file mode 100644
+index 0000000..edffc4d
+--- /dev/null
++++ b/include/linux/spinlock_types_raw.h
+@@ -0,0 +1,56 @@
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
++#define __LINUX_SPINLOCK_TYPES_RAW_H
++
++#if defined(CONFIG_SMP)
++# include <asm/spinlock_types.h>
++#else
++# include <linux/spinlock_types_up.h>
++#endif
++
++#include <linux/lockdep.h>
++
++typedef struct raw_spinlock {
++	arch_spinlock_t raw_lock;
++#ifdef CONFIG_GENERIC_LOCKBREAK
++	unsigned int break_lock;
++#endif
++#ifdef CONFIG_DEBUG_SPINLOCK
++	unsigned int magic, owner_cpu;
++	void *owner;
++#endif
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++	struct lockdep_map dep_map;
++#endif
++} raw_spinlock_t;
++
++#define SPINLOCK_MAGIC		0xdead4ead
++
++#define SPINLOCK_OWNER_INIT	((void *)-1L)
++
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define SPIN_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
++#else
++# define SPIN_DEP_MAP_INIT(lockname)
++#endif
++
++#ifdef CONFIG_DEBUG_SPINLOCK
++# define SPIN_DEBUG_INIT(lockname)		\
++	.magic = SPINLOCK_MAGIC,		\
++	.owner_cpu = -1,			\
++	.owner = SPINLOCK_OWNER_INIT,
++#else
++# define SPIN_DEBUG_INIT(lockname)
++#endif
++
++#define __RAW_SPIN_LOCK_INITIALIZER(lockname)	\
++	{					\
++	.raw_lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
++	SPIN_DEBUG_INIT(lockname)		\
++	SPIN_DEP_MAP_INIT(lockname) }
++
++#define __RAW_SPIN_LOCK_UNLOCKED(lockname)	\
++	(raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
++
++#define DEFINE_RAW_SPINLOCK(x)	raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x)
++
++#endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0197-rtmutex-avoid-include-hell.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0197-rtmutex-avoid-include-hell.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0197-rtmutex-avoid-include-hell.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0197-rtmutex-avoid-include-hell.patch.patch)
@@ -0,0 +1,26 @@
+From c8fccd9fbad72f51dfa88bf8241f1c0b9b3f1873 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 29 Jun 2011 20:06:39 +0200
+Subject: [PATCH 197/271] rtmutex-avoid-include-hell.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rtmutex.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
+index 928d93e..5ebd0bb 100644
+--- a/include/linux/rtmutex.h
++++ b/include/linux/rtmutex.h
+@@ -14,7 +14,7 @@
+ 
+ #include <linux/linkage.h>
+ #include <linux/plist.h>
+-#include <linux/spinlock_types.h>
++#include <linux/spinlock_types_raw.h>
+ 
+ extern int max_lock_depth; /* for sysctl */
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0198-rt-add-rt-spinlocks.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0198-rt-add-rt-spinlocks.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0198-rt-add-rt-spinlocks.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0198-rt-add-rt-spinlocks.patch.patch)
@@ -0,0 +1,132 @@
+From 9e5e2b3a9eb2fbb4938e70703bed6af3a4cea7f3 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 29 Jun 2011 19:43:35 +0200
+Subject: [PATCH 198/271] rt-add-rt-spinlocks.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rwlock_types_rt.h   |   33 +++++++++++++++++++++++++
+ include/linux/spinlock_types.h    |   11 ++++++---
+ include/linux/spinlock_types_rt.h |   49 +++++++++++++++++++++++++++++++++++++
+ 3 files changed, 90 insertions(+), 3 deletions(-)
+ create mode 100644 include/linux/rwlock_types_rt.h
+ create mode 100644 include/linux/spinlock_types_rt.h
+
+diff --git a/include/linux/rwlock_types_rt.h b/include/linux/rwlock_types_rt.h
+new file mode 100644
+index 0000000..b138321
+--- /dev/null
++++ b/include/linux/rwlock_types_rt.h
+@@ -0,0 +1,33 @@
++#ifndef __LINUX_RWLOCK_TYPES_RT_H
++#define __LINUX_RWLOCK_TYPES_RT_H
++
++#ifndef __LINUX_SPINLOCK_TYPES_H
++#error "Do not include directly. Include spinlock_types.h instead"
++#endif
++
++/*
++ * rwlocks - rtmutex which allows single reader recursion
++ */
++typedef struct {
++	struct rt_mutex		lock;
++	int			read_depth;
++	unsigned int		break_lock;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++	struct lockdep_map	dep_map;
++#endif
++} rwlock_t;
++
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define RW_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
++#else
++# define RW_DEP_MAP_INIT(lockname)
++#endif
++
++#define __RW_LOCK_UNLOCKED(name) \
++	{ .lock = __RT_MUTEX_INITIALIZER_SAVE_STATE(name.lock),	\
++	  RW_DEP_MAP_INIT(name) }
++
++#define DEFINE_RWLOCK(name) \
++	rwlock_t name __cacheline_aligned_in_smp = __RW_LOCK_UNLOCKED(name)
++
++#endif
+diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h
+index 5c8664d..10bac71 100644
+--- a/include/linux/spinlock_types.h
++++ b/include/linux/spinlock_types.h
+@@ -11,8 +11,13 @@
+ 
+ #include <linux/spinlock_types_raw.h>
+ 
+-#include <linux/spinlock_types_nort.h>
+-
+-#include <linux/rwlock_types.h>
++#ifndef CONFIG_PREEMPT_RT_FULL
++# include <linux/spinlock_types_nort.h>
++# include <linux/rwlock_types.h>
++#else
++# include <linux/rtmutex.h>
++# include <linux/spinlock_types_rt.h>
++# include <linux/rwlock_types_rt.h>
++#endif
+ 
+ #endif /* __LINUX_SPINLOCK_TYPES_H */
+diff --git a/include/linux/spinlock_types_rt.h b/include/linux/spinlock_types_rt.h
+new file mode 100644
+index 0000000..1fe8fc0
+--- /dev/null
++++ b/include/linux/spinlock_types_rt.h
+@@ -0,0 +1,49 @@
++#ifndef __LINUX_SPINLOCK_TYPES_RT_H
++#define __LINUX_SPINLOCK_TYPES_RT_H
++
++#ifndef __LINUX_SPINLOCK_TYPES_H
++#error "Do not include directly. Include spinlock_types.h instead"
++#endif
++
++/*
++ * PREEMPT_RT: spinlocks - an RT mutex plus lock-break field:
++ */
++typedef struct spinlock {
++	struct rt_mutex		lock;
++	unsigned int		break_lock;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++	struct lockdep_map	dep_map;
++#endif
++} spinlock_t;
++
++#ifdef CONFIG_DEBUG_RT_MUTEXES
++# define __RT_SPIN_INITIALIZER(name) \
++	{ \
++	.wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \
++	.save_state = 1, \
++	.file = __FILE__, \
++	.line = __LINE__ , \
++	}
++#else
++# define __RT_SPIN_INITIALIZER(name) \
++	{								\
++	.wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock),		\
++	.save_state = 1, \
++	}
++#endif
++
++/*
++.wait_list = PLIST_HEAD_INIT_RAW((name).lock.wait_list, (name).lock.wait_lock)
++*/
++
++#define __SPIN_LOCK_UNLOCKED(name)			\
++	{ .lock = __RT_SPIN_INITIALIZER(name.lock),		\
++	  SPIN_DEP_MAP_INIT(name) }
++
++#define __DEFINE_SPINLOCK(name) \
++	spinlock_t name = __SPIN_LOCK_UNLOCKED(name)
++
++#define DEFINE_SPINLOCK(name) \
++	spinlock_t name __cacheline_aligned_in_smp = __SPIN_LOCK_UNLOCKED(name)
++
++#endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0199-rt-add-rt-to-mutex-headers.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0199-rt-add-rt-to-mutex-headers.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0199-rt-add-rt-to-mutex-headers.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0199-rt-add-rt-to-mutex-headers.patch.patch)
@@ -0,0 +1,151 @@
+From 00470c04c494c3d79901a903d52e352e318ce35e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 29 Jun 2011 20:56:22 +0200
+Subject: [PATCH 199/271] rt-add-rt-to-mutex-headers.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/mutex.h    |   21 ++++++++----
+ include/linux/mutex_rt.h |   84 ++++++++++++++++++++++++++++++++++++++++++++++
+ 2 files changed, 98 insertions(+), 7 deletions(-)
+ create mode 100644 include/linux/mutex_rt.h
+
+diff --git a/include/linux/mutex.h b/include/linux/mutex.h
+index 9121595..bdf1da2 100644
+--- a/include/linux/mutex.h
++++ b/include/linux/mutex.h
+@@ -17,6 +17,17 @@
+ 
+ #include <linux/atomic.h>
+ 
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
++		, .dep_map = { .name = #lockname }
++#else
++# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
++#endif
++
++#ifdef CONFIG_PREEMPT_RT_FULL
++# include <linux/mutex_rt.h>
++#else
++
+ /*
+  * Simple, straightforward mutexes with strict semantics:
+  *
+@@ -95,13 +106,6 @@ do {							\
+ static inline void mutex_destroy(struct mutex *lock) {}
+ #endif
+ 
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
+-		, .dep_map = { .name = #lockname }
+-#else
+-# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
+-#endif
+-
+ #define __MUTEX_INITIALIZER(lockname) \
+ 		{ .count = ATOMIC_INIT(1) \
+ 		, .wait_lock = __SPIN_LOCK_UNLOCKED(lockname.wait_lock) \
+@@ -167,6 +171,9 @@ extern int __must_check mutex_lock_killable(struct mutex *lock);
+  */
+ extern int mutex_trylock(struct mutex *lock);
+ extern void mutex_unlock(struct mutex *lock);
++
++#endif /* !PREEMPT_RT_FULL */
++
+ extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
+ 
+ #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
+diff --git a/include/linux/mutex_rt.h b/include/linux/mutex_rt.h
+new file mode 100644
+index 0000000..c38a44b
+--- /dev/null
++++ b/include/linux/mutex_rt.h
+@@ -0,0 +1,84 @@
++#ifndef __LINUX_MUTEX_RT_H
++#define __LINUX_MUTEX_RT_H
++
++#ifndef __LINUX_MUTEX_H
++#error "Please include mutex.h"
++#endif
++
++#include <linux/rtmutex.h>
++
++/* FIXME: Just for __lockfunc */
++#include <linux/spinlock.h>
++
++struct mutex {
++	struct rt_mutex		lock;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++	struct lockdep_map	dep_map;
++#endif
++};
++
++#define __MUTEX_INITIALIZER(mutexname)					\
++	{								\
++		.lock = __RT_MUTEX_INITIALIZER(mutexname.lock)		\
++		__DEP_MAP_MUTEX_INITIALIZER(mutexname)			\
++	}
++
++#define DEFINE_MUTEX(mutexname)						\
++	struct mutex mutexname = __MUTEX_INITIALIZER(mutexname)
++
++extern void __mutex_do_init(struct mutex *lock, const char *name, struct lock_class_key *key);
++extern void __lockfunc _mutex_lock(struct mutex *lock);
++extern int __lockfunc _mutex_lock_interruptible(struct mutex *lock);
++extern int __lockfunc _mutex_lock_killable(struct mutex *lock);
++extern void __lockfunc _mutex_lock_nested(struct mutex *lock, int subclass);
++extern void __lockfunc _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
++extern int __lockfunc _mutex_lock_interruptible_nested(struct mutex *lock, int subclass);
++extern int __lockfunc _mutex_lock_killable_nested(struct mutex *lock, int subclass);
++extern int __lockfunc _mutex_trylock(struct mutex *lock);
++extern void __lockfunc _mutex_unlock(struct mutex *lock);
++
++#define mutex_is_locked(l)		rt_mutex_is_locked(&(l)->lock)
++#define mutex_lock(l)			_mutex_lock(l)
++#define mutex_lock_interruptible(l)	_mutex_lock_interruptible(l)
++#define mutex_lock_killable(l)		_mutex_lock_killable(l)
++#define mutex_trylock(l)		_mutex_trylock(l)
++#define mutex_unlock(l)			_mutex_unlock(l)
++#define mutex_destroy(l)		rt_mutex_destroy(&(l)->lock)
++
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define mutex_lock_nested(l, s)	_mutex_lock_nested(l, s)
++# define mutex_lock_interruptible_nested(l, s) \
++					_mutex_lock_interruptible_nested(l, s)
++# define mutex_lock_killable_nested(l, s) \
++					_mutex_lock_killable_nested(l, s)
++
++# define mutex_lock_nest_lock(lock, nest_lock)				\
++do {									\
++	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);		\
++	_mutex_lock_nest_lock(lock, &(nest_lock)->dep_map);		\
++} while (0)
++
++#else
++# define mutex_lock_nested(l, s)	_mutex_lock(l)
++# define mutex_lock_interruptible_nested(l, s) \
++					_mutex_lock_interruptible(l)
++# define mutex_lock_killable_nested(l, s) \
++					_mutex_lock_killable(l)
++# define mutex_lock_nest_lock(lock, nest_lock) mutex_lock(lock)
++#endif
++
++# define mutex_init(mutex)				\
++do {							\
++	static struct lock_class_key __key;		\
++							\
++	rt_mutex_init(&(mutex)->lock);			\
++	__mutex_do_init((mutex), #mutex, &__key);	\
++} while (0)
++
++# define __mutex_init(mutex, name, key)			\
++do {							\
++	rt_mutex_init(&(mutex)->lock);			\
++	__mutex_do_init((mutex), name, key);		\
++} while (0)
++
++#endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0200-rwsem-add-rt-variant.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0200-rwsem-add-rt-variant.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0200-rwsem-add-rt-variant.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0200-rwsem-add-rt-variant.patch.patch)
@@ -0,0 +1,165 @@
+From 96ecdf14553a29759ac88843c31443d0b810ea92 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 29 Jun 2011 21:02:53 +0200
+Subject: [PATCH 200/271] rwsem-add-rt-variant.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rwsem.h    |    6 +++
+ include/linux/rwsem_rt.h |  105 ++++++++++++++++++++++++++++++++++++++++++++++
+ lib/Makefile             |    3 ++
+ 3 files changed, 114 insertions(+)
+ create mode 100644 include/linux/rwsem_rt.h
+
+diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
+index 63d4065..209be4b 100644
+--- a/include/linux/rwsem.h
++++ b/include/linux/rwsem.h
+@@ -17,6 +17,10 @@
+ #include <asm/system.h>
+ #include <linux/atomic.h>
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++#include <linux/rwsem_rt.h>
++#else /* PREEMPT_RT_FULL */
++
+ struct rw_semaphore;
+ 
+ #ifdef CONFIG_RWSEM_GENERIC_SPINLOCK
+@@ -131,4 +135,6 @@ extern void down_write_nested(struct rw_semaphore *sem, int subclass);
+ # define down_write_nested(sem, subclass)	down_write(sem)
+ #endif
+ 
++#endif /* !PREEMPT_RT_FULL */
++
+ #endif /* _LINUX_RWSEM_H */
+diff --git a/include/linux/rwsem_rt.h b/include/linux/rwsem_rt.h
+new file mode 100644
+index 0000000..802c690
+--- /dev/null
++++ b/include/linux/rwsem_rt.h
+@@ -0,0 +1,105 @@
++#ifndef _LINUX_RWSEM_RT_H
++#define _LINUX_RWSEM_RT_H
++
++#ifndef _LINUX_RWSEM_H
++#error "Include rwsem.h"
++#endif
++
++/*
++ * RW-semaphores are a spinlock plus a reader-depth count.
++ *
++ * Note that the semantics are different from the usual
++ * Linux rw-sems, in PREEMPT_RT mode we do not allow
++ * multiple readers to hold the lock at once, we only allow
++ * a read-lock owner to read-lock recursively. This is
++ * better for latency, makes the implementation inherently
++ * fair and makes it simpler as well.
++ */
++
++#include <linux/rtmutex.h>
++
++struct rw_semaphore {
++	struct rt_mutex		lock;
++	int			read_depth;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++	struct lockdep_map	dep_map;
++#endif
++};
++
++#define __RWSEM_INITIALIZER(name) \
++	{ .lock = __RT_MUTEX_INITIALIZER(name.lock), \
++	  RW_DEP_MAP_INIT(name) }
++
++#define DECLARE_RWSEM(lockname) \
++	struct rw_semaphore lockname = __RWSEM_INITIALIZER(lockname)
++
++extern void  __rt_rwsem_init(struct rw_semaphore *rwsem, char *name,
++				     struct lock_class_key *key);
++
++# define rt_init_rwsem(sem)				\
++do {							\
++	static struct lock_class_key __key;		\
++							\
++	rt_mutex_init(&(sem)->lock);			\
++	__rt_rwsem_init((sem), #sem, &__key);		\
++} while (0)
++
++extern void  rt_down_write(struct rw_semaphore *rwsem);
++extern void rt_down_read_nested(struct rw_semaphore *rwsem, int subclass);
++extern void rt_down_write_nested(struct rw_semaphore *rwsem, int subclass);
++extern void  rt_down_read(struct rw_semaphore *rwsem);
++extern int  rt_down_write_trylock(struct rw_semaphore *rwsem);
++extern int  rt_down_read_trylock(struct rw_semaphore *rwsem);
++extern void  rt_up_read(struct rw_semaphore *rwsem);
++extern void  rt_up_write(struct rw_semaphore *rwsem);
++extern void  rt_downgrade_write(struct rw_semaphore *rwsem);
++
++#define init_rwsem(sem)		rt_init_rwsem(sem)
++#define rwsem_is_locked(s)	rt_mutex_is_locked(&(s)->lock)
++
++static inline void down_read(struct rw_semaphore *sem)
++{
++	rt_down_read(sem);
++}
++
++static inline int down_read_trylock(struct rw_semaphore *sem)
++{
++	return rt_down_read_trylock(sem);
++}
++
++static inline void down_write(struct rw_semaphore *sem)
++{
++	rt_down_write(sem);
++}
++
++static inline int down_write_trylock(struct rw_semaphore *sem)
++{
++	return rt_down_write_trylock(sem);
++}
++
++static inline void up_read(struct rw_semaphore *sem)
++{
++	rt_up_read(sem);
++}
++
++static inline void up_write(struct rw_semaphore *sem)
++{
++	rt_up_write(sem);
++}
++
++static inline void downgrade_write(struct rw_semaphore *sem)
++{
++	rt_downgrade_write(sem);
++}
++
++static inline void down_read_nested(struct rw_semaphore *sem, int subclass)
++{
++	return rt_down_read_nested(sem, subclass);
++}
++
++static inline void down_write_nested(struct rw_semaphore *sem, int subclass)
++{
++	rt_down_write_nested(sem, subclass);
++}
++
++#endif
+diff --git a/lib/Makefile b/lib/Makefile
+index a4da283..5026c91 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -37,8 +37,11 @@ obj-$(CONFIG_HAS_IOMEM) += iomap_copy.o devres.o
+ obj-$(CONFIG_CHECK_SIGNATURE) += check_signature.o
+ obj-$(CONFIG_DEBUG_LOCKING_API_SELFTESTS) += locking-selftest.o
+ obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock_debug.o
++
++ifneq ($(CONFIG_PREEMPT_RT_FULL),y)
+ lib-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
+ lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
++endif
+ 
+ CFLAGS_hweight.o = $(subst $(quote),,$(CONFIG_ARCH_HWEIGHT_CFLAGS))
+ obj-$(CONFIG_GENERIC_HWEIGHT) += hweight.o
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0201-rt-Add-the-preempt-rt-lock-replacement-APIs.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0201-rt-Add-the-preempt-rt-lock-replacement-APIs.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0201-rt-Add-the-preempt-rt-lock-replacement-APIs.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0201-rt-Add-the-preempt-rt-lock-replacement-APIs.patch)
@@ -0,0 +1,917 @@
+From ddeb99a8987f7f3197d563a5cfe82f1b7053269b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 26 Jul 2009 19:39:56 +0200
+Subject: [PATCH 201/271] rt: Add the preempt-rt lock replacement APIs
+
+Map spinlocks, rwlocks, rw_semaphores and semaphores to the rt_mutex
+based locking functions for preempt-rt.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rwlock_rt.h        |  123 +++++++++++
+ include/linux/spinlock.h         |   12 +-
+ include/linux/spinlock_api_smp.h |    4 +-
+ include/linux/spinlock_rt.h      |  156 ++++++++++++++
+ kernel/Makefile                  |    9 +-
+ kernel/rt.c                      |  442 ++++++++++++++++++++++++++++++++++++++
+ kernel/spinlock.c                |    7 +
+ lib/spinlock_debug.c             |    5 +
+ 8 files changed, 754 insertions(+), 4 deletions(-)
+ create mode 100644 include/linux/rwlock_rt.h
+ create mode 100644 include/linux/spinlock_rt.h
+ create mode 100644 kernel/rt.c
+
+diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
+new file mode 100644
+index 0000000..853ee36
+--- /dev/null
++++ b/include/linux/rwlock_rt.h
+@@ -0,0 +1,123 @@
++#ifndef __LINUX_RWLOCK_RT_H
++#define __LINUX_RWLOCK_RT_H
++
++#ifndef __LINUX_SPINLOCK_H
++#error Do not include directly. Use spinlock.h
++#endif
++
++#define rwlock_init(rwl)				\
++do {							\
++	static struct lock_class_key __key;		\
++							\
++	rt_mutex_init(&(rwl)->lock);			\
++	__rt_rwlock_init(rwl, #rwl, &__key);		\
++} while (0)
++
++extern void __lockfunc rt_write_lock(rwlock_t *rwlock);
++extern void __lockfunc rt_read_lock(rwlock_t *rwlock);
++extern int __lockfunc rt_write_trylock(rwlock_t *rwlock);
++extern int __lockfunc rt_write_trylock_irqsave(rwlock_t *trylock, unsigned long *flags);
++extern int __lockfunc rt_read_trylock(rwlock_t *rwlock);
++extern void __lockfunc rt_write_unlock(rwlock_t *rwlock);
++extern void __lockfunc rt_read_unlock(rwlock_t *rwlock);
++extern unsigned long __lockfunc rt_write_lock_irqsave(rwlock_t *rwlock);
++extern unsigned long __lockfunc rt_read_lock_irqsave(rwlock_t *rwlock);
++extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key *key);
++
++#define read_trylock(lock)	__cond_lock(lock, rt_read_trylock(lock))
++#define write_trylock(lock)	__cond_lock(lock, rt_write_trylock(lock))
++
++#define write_trylock_irqsave(lock, flags)	\
++	__cond_lock(lock, rt_write_trylock_irqsave(lock, &flags))
++
++#define read_lock_irqsave(lock, flags)			\
++	do {						\
++		typecheck(unsigned long, flags);	\
++		migrate_disable();			\
++		flags = rt_read_lock_irqsave(lock);	\
++	} while (0)
++
++#define write_lock_irqsave(lock, flags)			\
++	do {						\
++		typecheck(unsigned long, flags);	\
++		migrate_disable();			\
++		flags = rt_write_lock_irqsave(lock);	\
++	} while (0)
++
++#define read_lock(lock)					\
++	do {						\
++		migrate_disable();			\
++		rt_read_lock(lock);			\
++	} while (0)
++
++#define read_lock_bh(lock)				\
++	do {						\
++		local_bh_disable();			\
++		migrate_disable();			\
++		rt_read_lock(lock);			\
++	} while (0)
++
++#define read_lock_irq(lock)	read_lock(lock)
++
++#define write_lock(lock)				\
++	do {						\
++		migrate_disable();			\
++		rt_write_lock(lock);			\
++	} while (0)
++
++#define write_lock_bh(lock)				\
++	do {						\
++		local_bh_disable();			\
++		migrate_disable();			\
++		rt_write_lock(lock);			\
++	} while (0)
++
++#define write_lock_irq(lock)	write_lock(lock)
++
++#define read_unlock(lock)				\
++	do {						\
++		rt_read_unlock(lock);			\
++		migrate_enable();			\
++	} while (0)
++
++#define read_unlock_bh(lock)				\
++	do {						\
++		rt_read_unlock(lock);			\
++		migrate_enable();			\
++		local_bh_enable();			\
++	} while (0)
++
++#define read_unlock_irq(lock)	read_unlock(lock)
++
++#define write_unlock(lock)				\
++	do {						\
++		rt_write_unlock(lock);			\
++		migrate_enable();			\
++	} while (0)
++
++#define write_unlock_bh(lock)				\
++	do {						\
++		rt_write_unlock(lock);			\
++		migrate_enable();			\
++		local_bh_enable();			\
++	} while (0)
++
++#define write_unlock_irq(lock)	write_unlock(lock)
++
++#define read_unlock_irqrestore(lock, flags)		\
++	do {						\
++		typecheck(unsigned long, flags);	\
++		(void) flags;				\
++		rt_read_unlock(lock);			\
++		migrate_enable();			\
++	} while (0)
++
++#define write_unlock_irqrestore(lock, flags) \
++	do {						\
++		typecheck(unsigned long, flags);	\
++		(void) flags;				\
++		rt_write_unlock(lock);			\
++		migrate_enable();			\
++	} while (0)
++
++#endif
+diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
+index 7df6c17..5fe7e40 100644
+--- a/include/linux/spinlock.h
++++ b/include/linux/spinlock.h
+@@ -254,7 +254,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
+ #define raw_spin_can_lock(lock)	(!raw_spin_is_locked(lock))
+ 
+ /* Include rwlock functions */
+-#include <linux/rwlock.h>
++#ifdef CONFIG_PREEMPT_RT_FULL
++# include <linux/rwlock_rt.h>
++#else
++# include <linux/rwlock.h>
++#endif
+ 
+ /*
+  * Pull the _spin_*()/_read_*()/_write_*() functions/declarations:
+@@ -265,6 +269,10 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
+ # include <linux/spinlock_api_up.h>
+ #endif
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++# include <linux/spinlock_rt.h>
++#else /* PREEMPT_RT_FULL */
++
+ /*
+  * Map the spin_lock functions to the raw variants for PREEMPT_RT=n
+  */
+@@ -397,4 +405,6 @@ extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
+ #define atomic_dec_and_lock(atomic, lock) \
+ 		__cond_lock(lock, _atomic_dec_and_lock(atomic, lock))
+ 
++#endif /* !PREEMPT_RT_FULL */
++
+ #endif /* __LINUX_SPINLOCK_H */
+diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
+index e253ccd..2a5ba05 100644
+--- a/include/linux/spinlock_api_smp.h
++++ b/include/linux/spinlock_api_smp.h
+@@ -191,6 +191,8 @@ static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock)
+ 	return 0;
+ }
+ 
+-#include <linux/rwlock_api_smp.h>
++#ifndef CONFIG_PREEMPT_RT_FULL
++# include <linux/rwlock_api_smp.h>
++#endif
+ 
+ #endif /* __LINUX_SPINLOCK_API_SMP_H */
+diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
+new file mode 100644
+index 0000000..205ca95
+--- /dev/null
++++ b/include/linux/spinlock_rt.h
+@@ -0,0 +1,156 @@
++#ifndef __LINUX_SPINLOCK_RT_H
++#define __LINUX_SPINLOCK_RT_H
++
++#ifndef __LINUX_SPINLOCK_H
++#error Do not include directly. Use spinlock.h
++#endif
++
++extern void
++__rt_spin_lock_init(spinlock_t *lock, char *name, struct lock_class_key *key);
++
++#define spin_lock_init(slock)				\
++do {							\
++	static struct lock_class_key __key;		\
++							\
++	rt_mutex_init(&(slock)->lock);			\
++	__rt_spin_lock_init(slock, #slock, &__key);	\
++} while (0)
++
++extern void __lockfunc rt_spin_lock(spinlock_t *lock);
++extern unsigned long __lockfunc rt_spin_lock_trace_flags(spinlock_t *lock);
++extern void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass);
++extern void __lockfunc rt_spin_unlock(spinlock_t *lock);
++extern void __lockfunc rt_spin_unlock_wait(spinlock_t *lock);
++extern int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags);
++extern int __lockfunc rt_spin_trylock_bh(spinlock_t *lock);
++extern int __lockfunc rt_spin_trylock(spinlock_t *lock);
++extern int atomic_dec_and_spin_lock(atomic_t *atomic, spinlock_t *lock);
++
++/*
++ * lockdep-less calls, for derived types like rwlock:
++ * (for trylock they can use rt_mutex_trylock() directly.
++ */
++extern void __lockfunc __rt_spin_lock(struct rt_mutex *lock);
++extern void __lockfunc __rt_spin_unlock(struct rt_mutex *lock);
++
++#define spin_lock_local(lock)			rt_spin_lock(lock)
++#define spin_unlock_local(lock)			rt_spin_unlock(lock)
++
++#define spin_lock(lock)				\
++	do {					\
++		migrate_disable();		\
++		rt_spin_lock(lock);		\
++	} while (0)
++
++#define spin_lock_bh(lock)			\
++	do {					\
++		local_bh_disable();		\
++		migrate_disable();		\
++		rt_spin_lock(lock);		\
++	} while (0)
++
++#define spin_lock_irq(lock)		spin_lock(lock)
++
++#define spin_trylock(lock)		__cond_lock(lock, rt_spin_trylock(lock))
++
++#ifdef CONFIG_LOCKDEP
++# define spin_lock_nested(lock, subclass)		\
++	do {						\
++		migrate_disable();			\
++		rt_spin_lock_nested(lock, subclass);	\
++	} while (0)
++
++# define spin_lock_irqsave_nested(lock, flags, subclass) \
++	do {						 \
++		typecheck(unsigned long, flags);	 \
++		flags = 0;				 \
++		migrate_disable();			 \
++		rt_spin_lock_nested(lock, subclass);	 \
++	} while (0)
++#else
++# define spin_lock_nested(lock, subclass)	spin_lock(lock)
++
++# define spin_lock_irqsave_nested(lock, flags, subclass) \
++	do {						 \
++		typecheck(unsigned long, flags);	 \
++		flags = 0;				 \
++		spin_lock(lock);			 \
++	} while (0)
++#endif
++
++#define spin_lock_irqsave(lock, flags)			 \
++	do {						 \
++		typecheck(unsigned long, flags);	 \
++		flags = 0;				 \
++		spin_lock(lock);			 \
++	} while (0)
++
++static inline unsigned long spin_lock_trace_flags(spinlock_t *lock)
++{
++	unsigned long flags = 0;
++#ifdef CONFIG_TRACE_IRQFLAGS
++	flags = rt_spin_lock_trace_flags(lock);
++#else
++	spin_lock(lock); /* lock_local */
++#endif
++	return flags;
++}
++
++/* FIXME: we need rt_spin_lock_nest_lock */
++#define spin_lock_nest_lock(lock, nest_lock) spin_lock_nested(lock, 0)
++
++#define spin_unlock(lock)				\
++	do {						\
++		rt_spin_unlock(lock);			\
++		migrate_enable();			\
++	} while (0)
++
++#define spin_unlock_bh(lock)				\
++	do {						\
++		rt_spin_unlock(lock);			\
++		migrate_enable();			\
++		local_bh_enable();			\
++	} while (0)
++
++#define spin_unlock_irq(lock)		spin_unlock(lock)
++
++#define spin_unlock_irqrestore(lock, flags)		\
++	do {						\
++		typecheck(unsigned long, flags);	\
++		(void) flags;				\
++		spin_unlock(lock);			\
++	} while (0)
++
++#define spin_trylock_bh(lock)	__cond_lock(lock, rt_spin_trylock_bh(lock))
++#define spin_trylock_irq(lock)	spin_trylock(lock)
++
++#define spin_trylock_irqsave(lock, flags)	\
++	rt_spin_trylock_irqsave(lock, &(flags))
++
++#define spin_unlock_wait(lock)		rt_spin_unlock_wait(lock)
++
++#ifdef CONFIG_GENERIC_LOCKBREAK
++# define spin_is_contended(lock)	((lock)->break_lock)
++#else
++# define spin_is_contended(lock)	(((void)(lock), 0))
++#endif
++
++static inline int spin_can_lock(spinlock_t *lock)
++{
++	return !rt_mutex_is_locked(&lock->lock);
++}
++
++static inline int spin_is_locked(spinlock_t *lock)
++{
++	return rt_mutex_is_locked(&lock->lock);
++}
++
++static inline void assert_spin_locked(spinlock_t *lock)
++{
++	BUG_ON(!spin_is_locked(lock));
++}
++
++#define atomic_dec_and_lock(atomic, lock) \
++	atomic_dec_and_spin_lock(atomic, lock)
++
++#endif
+diff --git a/kernel/Makefile b/kernel/Makefile
+index e898c5b..c961d3a 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -7,8 +7,8 @@ obj-y     = sched.o fork.o exec_domain.o panic.o printk.o \
+ 	    sysctl.o sysctl_binary.o capability.o ptrace.o timer.o user.o \
+ 	    signal.o sys.o kmod.o workqueue.o pid.o \
+ 	    rcupdate.o extable.o params.o posix-timers.o \
+-	    kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o mutex.o \
+-	    hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
++	    kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o \
++	    hrtimer.o nsproxy.o srcu.o semaphore.o \
+ 	    notifier.o ksysfs.o sched_clock.o cred.o \
+ 	    async.o range.o
+ obj-y += groups.o
+@@ -29,7 +29,11 @@ obj-$(CONFIG_PROFILING) += profile.o
+ obj-$(CONFIG_SYSCTL_SYSCALL_CHECK) += sysctl_check.o
+ obj-$(CONFIG_STACKTRACE) += stacktrace.o
+ obj-y += time/
++ifneq ($(CONFIG_PREEMPT_RT_FULL),y)
++obj-y += mutex.o
+ obj-$(CONFIG_DEBUG_MUTEXES) += mutex-debug.o
++obj-y += rwsem.o
++endif
+ obj-$(CONFIG_LOCKDEP) += lockdep.o
+ ifeq ($(CONFIG_PROC_FS),y)
+ obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
+@@ -41,6 +45,7 @@ endif
+ obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
+ obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
+ obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
++obj-$(CONFIG_PREEMPT_RT_FULL) += rt.o
+ obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
+ obj-$(CONFIG_SMP) += smp.o
+ ifneq ($(CONFIG_SMP),y)
+diff --git a/kernel/rt.c b/kernel/rt.c
+new file mode 100644
+index 0000000..092d6b3
+--- /dev/null
++++ b/kernel/rt.c
+@@ -0,0 +1,442 @@
++/*
++ * kernel/rt.c
++ *
++ * Real-Time Preemption Support
++ *
++ * started by Ingo Molnar:
++ *
++ *  Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo at redhat.com>
++ *  Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx at timesys.com>
++ *
++ * historic credit for proving that Linux spinlocks can be implemented via
++ * RT-aware mutexes goes to many people: The Pmutex project (Dirk Grambow
++ * and others) who prototyped it on 2.4 and did lots of comparative
++ * research and analysis; TimeSys, for proving that you can implement a
++ * fully preemptible kernel via the use of IRQ threading and mutexes;
++ * Bill Huey for persuasively arguing on lkml that the mutex model is the
++ * right one; and to MontaVista, who ported pmutexes to 2.6.
++ *
++ * This code is a from-scratch implementation and is not based on pmutexes,
++ * but the idea of converting spinlocks to mutexes is used here too.
++ *
++ * lock debugging, locking tree, deadlock detection:
++ *
++ *  Copyright (C) 2004, LynuxWorks, Inc., Igor Manyilov, Bill Huey
++ *  Released under the General Public License (GPL).
++ *
++ * Includes portions of the generic R/W semaphore implementation from:
++ *
++ *  Copyright (c) 2001   David Howells (dhowells at redhat.com).
++ *  - Derived partially from idea by Andrea Arcangeli <andrea at suse.de>
++ *  - Derived also from comments by Linus
++ *
++ * Pending ownership of locks and ownership stealing:
++ *
++ *  Copyright (C) 2005, Kihon Technologies Inc., Steven Rostedt
++ *
++ *   (also by Steven Rostedt)
++ *    - Converted single pi_lock to individual task locks.
++ *
++ * By Esben Nielsen:
++ *    Doing priority inheritance with help of the scheduler.
++ *
++ *  Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx at timesys.com>
++ *  - major rework based on Esben Nielsens initial patch
++ *  - replaced thread_info references by task_struct refs
++ *  - removed task->pending_owner dependency
++ *  - BKL drop/reacquire for semaphore style locks to avoid deadlocks
++ *    in the scheduler return path as discussed with Steven Rostedt
++ *
++ *  Copyright (C) 2006, Kihon Technologies Inc.
++ *    Steven Rostedt <rostedt at goodmis.org>
++ *  - debugged and patched Thomas Gleixner's rework.
++ *  - added back the cmpxchg to the rework.
++ *  - turned atomic require back on for SMP.
++ */
++
++#include <linux/spinlock.h>
++#include <linux/rtmutex.h>
++#include <linux/sched.h>
++#include <linux/delay.h>
++#include <linux/module.h>
++#include <linux/kallsyms.h>
++#include <linux/syscalls.h>
++#include <linux/interrupt.h>
++#include <linux/plist.h>
++#include <linux/fs.h>
++#include <linux/futex.h>
++#include <linux/hrtimer.h>
++
++#include "rtmutex_common.h"
++
++/*
++ * struct mutex functions
++ */
++void __mutex_do_init(struct mutex *mutex, const char *name,
++		     struct lock_class_key *key)
++{
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++	/*
++	 * Make sure we are not reinitializing a held lock:
++	 */
++	debug_check_no_locks_freed((void *)mutex, sizeof(*mutex));
++	lockdep_init_map(&mutex->dep_map, name, key, 0);
++#endif
++	mutex->lock.save_state = 0;
++}
++EXPORT_SYMBOL(__mutex_do_init);
++
++void __lockfunc _mutex_lock(struct mutex *lock)
++{
++	mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++	rt_mutex_lock(&lock->lock);
++}
++EXPORT_SYMBOL(_mutex_lock);
++
++int __lockfunc _mutex_lock_interruptible(struct mutex *lock)
++{
++	int ret;
++
++	mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++	ret = rt_mutex_lock_interruptible(&lock->lock, 0);
++	if (ret)
++		mutex_release(&lock->dep_map, 1, _RET_IP_);
++	return ret;
++}
++EXPORT_SYMBOL(_mutex_lock_interruptible);
++
++int __lockfunc _mutex_lock_killable(struct mutex *lock)
++{
++	int ret;
++
++	mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++	ret = rt_mutex_lock_killable(&lock->lock, 0);
++	if (ret)
++		mutex_release(&lock->dep_map, 1, _RET_IP_);
++	return ret;
++}
++EXPORT_SYMBOL(_mutex_lock_killable);
++
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++void __lockfunc _mutex_lock_nested(struct mutex *lock, int subclass)
++{
++	mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
++	rt_mutex_lock(&lock->lock);
++}
++EXPORT_SYMBOL(_mutex_lock_nested);
++
++void __lockfunc _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
++{
++	mutex_acquire_nest(&lock->dep_map, 0, 0, nest, _RET_IP_);
++	rt_mutex_lock(&lock->lock);
++}
++EXPORT_SYMBOL(_mutex_lock_nest_lock);
++
++int __lockfunc _mutex_lock_interruptible_nested(struct mutex *lock, int subclass)
++{
++	int ret;
++
++	mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
++	ret = rt_mutex_lock_interruptible(&lock->lock, 0);
++	if (ret)
++		mutex_release(&lock->dep_map, 1, _RET_IP_);
++	return ret;
++}
++EXPORT_SYMBOL(_mutex_lock_interruptible_nested);
++
++int __lockfunc _mutex_lock_killable_nested(struct mutex *lock, int subclass)
++{
++	int ret;
++
++	mutex_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
++	ret = rt_mutex_lock_killable(&lock->lock, 0);
++	if (ret)
++		mutex_release(&lock->dep_map, 1, _RET_IP_);
++	return ret;
++}
++EXPORT_SYMBOL(_mutex_lock_killable_nested);
++#endif
++
++int __lockfunc _mutex_trylock(struct mutex *lock)
++{
++	int ret = rt_mutex_trylock(&lock->lock);
++
++	if (ret)
++		mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
++
++	return ret;
++}
++EXPORT_SYMBOL(_mutex_trylock);
++
++void __lockfunc _mutex_unlock(struct mutex *lock)
++{
++	mutex_release(&lock->dep_map, 1, _RET_IP_);
++	rt_mutex_unlock(&lock->lock);
++}
++EXPORT_SYMBOL(_mutex_unlock);
++
++/*
++ * rwlock_t functions
++ */
++int __lockfunc rt_write_trylock(rwlock_t *rwlock)
++{
++	int ret = rt_mutex_trylock(&rwlock->lock);
++
++	migrate_disable();
++	if (ret)
++		rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
++	else
++		migrate_enable();
++
++	return ret;
++}
++EXPORT_SYMBOL(rt_write_trylock);
++
++int __lockfunc rt_write_trylock_irqsave(rwlock_t *rwlock, unsigned long *flags)
++{
++	int ret;
++
++	*flags = 0;
++	migrate_disable();
++	ret = rt_write_trylock(rwlock);
++	if (!ret)
++		migrate_enable();
++	return ret;
++}
++EXPORT_SYMBOL(rt_write_trylock_irqsave);
++
++int __lockfunc rt_read_trylock(rwlock_t *rwlock)
++{
++	struct rt_mutex *lock = &rwlock->lock;
++	int ret = 1;
++
++	/*
++	 * recursive read locks succeed when current owns the lock,
++	 * but not when read_depth == 0 which means that the lock is
++	 * write locked.
++	 */
++	migrate_disable();
++	if (rt_mutex_owner(lock) != current)
++		ret = rt_mutex_trylock(lock);
++	else if (!rwlock->read_depth)
++		ret = 0;
++
++	if (ret) {
++		rwlock->read_depth++;
++		rwlock_acquire_read(&rwlock->dep_map, 0, 1, _RET_IP_);
++	} else
++		migrate_enable();
++
++	return ret;
++}
++EXPORT_SYMBOL(rt_read_trylock);
++
++void __lockfunc rt_write_lock(rwlock_t *rwlock)
++{
++	rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
++	__rt_spin_lock(&rwlock->lock);
++}
++EXPORT_SYMBOL(rt_write_lock);
++
++void __lockfunc rt_read_lock(rwlock_t *rwlock)
++{
++	struct rt_mutex *lock = &rwlock->lock;
++
++	rwlock_acquire_read(&rwlock->dep_map, 0, 0, _RET_IP_);
++
++	/*
++	 * recursive read locks succeed when current owns the lock
++	 */
++	if (rt_mutex_owner(lock) != current)
++		__rt_spin_lock(lock);
++	rwlock->read_depth++;
++}
++
++EXPORT_SYMBOL(rt_read_lock);
++
++void __lockfunc rt_write_unlock(rwlock_t *rwlock)
++{
++	/* NOTE: we always pass in '1' for nested, for simplicity */
++	rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
++	__rt_spin_unlock(&rwlock->lock);
++}
++EXPORT_SYMBOL(rt_write_unlock);
++
++void __lockfunc rt_read_unlock(rwlock_t *rwlock)
++{
++	rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
++
++	/* Release the lock only when read_depth is down to 0 */
++	if (--rwlock->read_depth == 0)
++		__rt_spin_unlock(&rwlock->lock);
++}
++EXPORT_SYMBOL(rt_read_unlock);
++
++unsigned long __lockfunc rt_write_lock_irqsave(rwlock_t *rwlock)
++{
++	rt_write_lock(rwlock);
++
++	return 0;
++}
++EXPORT_SYMBOL(rt_write_lock_irqsave);
++
++unsigned long __lockfunc rt_read_lock_irqsave(rwlock_t *rwlock)
++{
++	rt_read_lock(rwlock);
++
++	return 0;
++}
++EXPORT_SYMBOL(rt_read_lock_irqsave);
++
++void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key *key)
++{
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++	/*
++	 * Make sure we are not reinitializing a held lock:
++	 */
++	debug_check_no_locks_freed((void *)rwlock, sizeof(*rwlock));
++	lockdep_init_map(&rwlock->dep_map, name, key, 0);
++#endif
++	rwlock->lock.save_state = 1;
++	rwlock->read_depth = 0;
++}
++EXPORT_SYMBOL(__rt_rwlock_init);
++
++/*
++ * rw_semaphores
++ */
++
++void  rt_up_write(struct rw_semaphore *rwsem)
++{
++	rwsem_release(&rwsem->dep_map, 1, _RET_IP_);
++	rt_mutex_unlock(&rwsem->lock);
++}
++EXPORT_SYMBOL(rt_up_write);
++
++void  rt_up_read(struct rw_semaphore *rwsem)
++{
++	rwsem_release(&rwsem->dep_map, 1, _RET_IP_);
++	if (--rwsem->read_depth == 0)
++		rt_mutex_unlock(&rwsem->lock);
++}
++EXPORT_SYMBOL(rt_up_read);
++
++/*
++ * downgrade a write lock into a read lock
++ * - just wake up any readers at the front of the queue
++ */
++void  rt_downgrade_write(struct rw_semaphore *rwsem)
++{
++	BUG_ON(rt_mutex_owner(&rwsem->lock) != current);
++	rwsem->read_depth = 1;
++}
++EXPORT_SYMBOL(rt_downgrade_write);
++
++int  rt_down_write_trylock(struct rw_semaphore *rwsem)
++{
++	int ret = rt_mutex_trylock(&rwsem->lock);
++
++	if (ret)
++		rwsem_acquire(&rwsem->dep_map, 0, 1, _RET_IP_);
++	return ret;
++}
++EXPORT_SYMBOL(rt_down_write_trylock);
++
++void  rt_down_write(struct rw_semaphore *rwsem)
++{
++	rwsem_acquire(&rwsem->dep_map, 0, 0, _RET_IP_);
++	rt_mutex_lock(&rwsem->lock);
++}
++EXPORT_SYMBOL(rt_down_write);
++
++void  rt_down_write_nested(struct rw_semaphore *rwsem, int subclass)
++{
++	rwsem_acquire(&rwsem->dep_map, subclass, 0, _RET_IP_);
++	rt_mutex_lock(&rwsem->lock);
++}
++EXPORT_SYMBOL(rt_down_write_nested);
++
++int  rt_down_read_trylock(struct rw_semaphore *rwsem)
++{
++	struct rt_mutex *lock = &rwsem->lock;
++	int ret = 1;
++
++	/*
++	 * recursive read locks succeed when current owns the rwsem,
++	 * but not when read_depth == 0 which means that the rwsem is
++	 * write locked.
++	 */
++	if (rt_mutex_owner(lock) != current)
++		ret = rt_mutex_trylock(&rwsem->lock);
++	else if (!rwsem->read_depth)
++		ret = 0;
++
++	if (ret) {
++		rwsem->read_depth++;
++		rwsem_acquire(&rwsem->dep_map, 0, 1, _RET_IP_);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(rt_down_read_trylock);
++
++static void __rt_down_read(struct rw_semaphore *rwsem, int subclass)
++{
++	struct rt_mutex *lock = &rwsem->lock;
++
++	rwsem_acquire_read(&rwsem->dep_map, subclass, 0, _RET_IP_);
++
++	if (rt_mutex_owner(lock) != current)
++		rt_mutex_lock(&rwsem->lock);
++	rwsem->read_depth++;
++}
++
++void  rt_down_read(struct rw_semaphore *rwsem)
++{
++	__rt_down_read(rwsem, 0);
++}
++EXPORT_SYMBOL(rt_down_read);
++
++void  rt_down_read_nested(struct rw_semaphore *rwsem, int subclass)
++{
++	__rt_down_read(rwsem, subclass);
++}
++EXPORT_SYMBOL(rt_down_read_nested);
++
++void  __rt_rwsem_init(struct rw_semaphore *rwsem, char *name,
++			      struct lock_class_key *key)
++{
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++	/*
++	 * Make sure we are not reinitializing a held lock:
++	 */
++	debug_check_no_locks_freed((void *)rwsem, sizeof(*rwsem));
++	lockdep_init_map(&rwsem->dep_map, name, key, 0);
++#endif
++	rwsem->read_depth = 0;
++	rwsem->lock.save_state = 0;
++}
++EXPORT_SYMBOL(__rt_rwsem_init);
++
++/**
++ * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0
++ * @cnt: the atomic which we are to dec
++ * @lock: the mutex to return holding if we dec to 0
++ *
++ * return true and hold lock if we dec to 0, return false otherwise
++ */
++int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock)
++{
++	/* dec if we can't possibly hit 0 */
++	if (atomic_add_unless(cnt, -1, 1))
++		return 0;
++	/* we might hit 0, so take the lock */
++	mutex_lock(lock);
++	if (!atomic_dec_and_test(cnt)) {
++		/* when we actually did the dec, we didn't hit 0 */
++		mutex_unlock(lock);
++		return 0;
++	}
++	/* we hit 0, and we hold the lock */
++	return 1;
++}
++EXPORT_SYMBOL(atomic_dec_and_mutex_lock);
+diff --git a/kernel/spinlock.c b/kernel/spinlock.c
+index 84c7d96..47fd3cf 100644
+--- a/kernel/spinlock.c
++++ b/kernel/spinlock.c
+@@ -110,8 +110,11 @@ void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock)		\
+  *         __[spin|read|write]_lock_bh()
+  */
+ BUILD_LOCK_OPS(spin, raw_spinlock);
++
++#ifndef CONFIG_PREEMPT_RT_FULL
+ BUILD_LOCK_OPS(read, rwlock);
+ BUILD_LOCK_OPS(write, rwlock);
++#endif
+ 
+ #endif
+ 
+@@ -195,6 +198,8 @@ void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock)
+ EXPORT_SYMBOL(_raw_spin_unlock_bh);
+ #endif
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
++
+ #ifndef CONFIG_INLINE_READ_TRYLOCK
+ int __lockfunc _raw_read_trylock(rwlock_t *lock)
+ {
+@@ -339,6 +344,8 @@ void __lockfunc _raw_write_unlock_bh(rwlock_t *lock)
+ EXPORT_SYMBOL(_raw_write_unlock_bh);
+ #endif
+ 
++#endif /* !PREEMPT_RT_FULL */
++
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+ 
+ void __lockfunc _raw_spin_lock_nested(raw_spinlock_t *lock, int subclass)
+diff --git a/lib/spinlock_debug.c b/lib/spinlock_debug.c
+index 5f3eacd..f824704 100644
+--- a/lib/spinlock_debug.c
++++ b/lib/spinlock_debug.c
+@@ -31,6 +31,7 @@ void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
+ 
+ EXPORT_SYMBOL(__raw_spin_lock_init);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ void __rwlock_init(rwlock_t *lock, const char *name,
+ 		   struct lock_class_key *key)
+ {
+@@ -48,6 +49,7 @@ void __rwlock_init(rwlock_t *lock, const char *name,
+ }
+ 
+ EXPORT_SYMBOL(__rwlock_init);
++#endif
+ 
+ static void spin_dump(raw_spinlock_t *lock, const char *msg)
+ {
+@@ -155,6 +157,7 @@ void do_raw_spin_unlock(raw_spinlock_t *lock)
+ 	arch_spin_unlock(&lock->raw_lock);
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ static void rwlock_bug(rwlock_t *lock, const char *msg)
+ {
+ 	if (!debug_locks_off())
+@@ -296,3 +299,5 @@ void do_raw_write_unlock(rwlock_t *lock)
+ 	debug_write_unlock(lock);
+ 	arch_write_unlock(&lock->raw_lock);
+ }
++
++#endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0202-rwlocks-Fix-section-mismatch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0202-rwlocks-Fix-section-mismatch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0202-rwlocks-Fix-section-mismatch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0202-rwlocks-Fix-section-mismatch.patch)
@@ -0,0 +1,68 @@
+From a405034a460d2d0348e311cf4a96cec19e438b97 Mon Sep 17 00:00:00 2001
+From: John Kacur <jkacur at redhat.com>
+Date: Mon, 19 Sep 2011 11:09:27 +0200
+Subject: [PATCH 202/271] rwlocks: Fix section mismatch
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+This fixes the following build error for the preempt-rt kernel.
+
+make kernel/fork.o
+  CC      kernel/fork.o
+kernel/fork.c:90: error: section of ‘tasklist_lock’ conflicts with previous declaration
+make[2]: *** [kernel/fork.o] Error 1
+make[1]: *** [kernel/fork.o] Error 2
+
+The rt kernel cache aligns the RWLOCK in DEFINE_RWLOCK by default.
+The non-rt kernels explicitly cache align only the tasklist_lock in
+kernel/fork.c
+That can create a build conflict. This fixes the build problem by making the
+non-rt kernels cache align RWLOCKs by default. The side effect is that
+the other RWLOCKs are also cache aligned for non-rt.
+
+This is a short term solution for rt only.
+The longer term solution would be to push the cache aligned DEFINE_RWLOCK
+to mainline. If there are objections, then we could create a
+DEFINE_RWLOCK_CACHE_ALIGNED or something of that nature.
+
+Comments? Objections?
+
+Signed-off-by: John Kacur <jkacur at redhat.com>
+Cc: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/r/alpine.LFD.2.00.1109191104010.23118@localhost6.localdomain6
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rwlock_types.h |    3 ++-
+ kernel/fork.c                |    2 +-
+ 2 files changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h
+index 5317cd9..d0da966 100644
+--- a/include/linux/rwlock_types.h
++++ b/include/linux/rwlock_types.h
+@@ -47,6 +47,7 @@ typedef struct {
+ 				RW_DEP_MAP_INIT(lockname) }
+ #endif
+ 
+-#define DEFINE_RWLOCK(x)	rwlock_t x = __RW_LOCK_UNLOCKED(x)
++#define DEFINE_RWLOCK(name) \
++	rwlock_t name __cacheline_aligned_in_smp = __RW_LOCK_UNLOCKED(name)
+ 
+ #endif /* __LINUX_RWLOCK_TYPES_H */
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 8aeb811..fbc352e 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -88,7 +88,7 @@ int max_threads;		/* tunable limit on nr_threads */
+ 
+ DEFINE_PER_CPU(unsigned long, process_counts) = 0;
+ 
+-__cacheline_aligned DEFINE_RWLOCK(tasklist_lock);  /* outer */
++DEFINE_RWLOCK(tasklist_lock);  /* outer */
+ 
+ #ifdef CONFIG_PROVE_RCU
+ int lockdep_tasklist_lock_is_held(void)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0203-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0203-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0203-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0203-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch)
@@ -0,0 +1,81 @@
+From 1d23b0348fbc31941809798a980f366ef947b3d7 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 22:08:38 +0200
+Subject: [PATCH 203/271] timer-handle-idle-trylock-in-get-next-timer-irq.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/spinlock_rt.h |   12 +++++++++++-
+ kernel/rtmutex.c            |    7 +------
+ kernel/timer.c              |    7 ++++---
+ 3 files changed, 16 insertions(+), 10 deletions(-)
+
+diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
+index 205ca95..3b555b4 100644
+--- a/include/linux/spinlock_rt.h
++++ b/include/linux/spinlock_rt.h
+@@ -51,7 +51,17 @@ extern void __lockfunc __rt_spin_unlock(struct rt_mutex *lock);
+ 
+ #define spin_lock_irq(lock)		spin_lock(lock)
+ 
+-#define spin_trylock(lock)		__cond_lock(lock, rt_spin_trylock(lock))
++#define spin_do_trylock(lock)		__cond_lock(lock, rt_spin_trylock(lock))
++
++#define spin_trylock(lock)			\
++({						\
++	int __locked;				\
++	migrate_disable();			\
++	__locked = spin_do_trylock(lock);	\
++	if (!__locked)				\
++		migrate_enable();		\
++	__locked;				\
++})
+ 
+ #ifdef CONFIG_LOCKDEP
+ # define spin_lock_nested(lock, subclass)		\
+diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
+index a7723d2..9850dc0 100644
+--- a/kernel/rtmutex.c
++++ b/kernel/rtmutex.c
+@@ -861,15 +861,10 @@ EXPORT_SYMBOL(rt_spin_unlock_wait);
+ 
+ int __lockfunc rt_spin_trylock(spinlock_t *lock)
+ {
+-	int ret;
++	int ret = rt_mutex_trylock(&lock->lock);
+ 
+-	migrate_disable();
+-	ret = rt_mutex_trylock(&lock->lock);
+ 	if (ret)
+ 		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
+-	else
+-		migrate_enable();
+-
+ 	return ret;
+ }
+ EXPORT_SYMBOL(rt_spin_trylock);
+diff --git a/kernel/timer.c b/kernel/timer.c
+index 8a9ca7d..7ea787b 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -1323,13 +1323,14 @@ unsigned long get_next_timer_interrupt(unsigned long now)
+ 	/*
+ 	 * On PREEMPT_RT we cannot sleep here. If the trylock does not
+ 	 * succeed then we return the worst-case 'expires in 1 tick'
+-	 * value:
++	 * value. We use the rt functions here directly to avoid a
++	 * migrate_disable() call.
+ 	 */
+-	if (spin_trylock(&base->lock)) {
++	if (spin_do_trylock(&base->lock)) {
+ 		if (time_before_eq(base->next_timer, base->timer_jiffies))
+ 			base->next_timer = __next_timer_interrupt(base);
+ 		expires = base->next_timer;
+-		spin_unlock(&base->lock);
++		rt_spin_unlock(&base->lock);
+ 	} else {
+ 		expires = now + 1;
+ 	}
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0204-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0204-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0204-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0204-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch)
@@ -0,0 +1,32 @@
+From 4864a89f1d69a916e5c9d09825a8999711317913 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:30 -0500
+Subject: [PATCH 204/271] RCU: Force PREEMPT_RCU for PREEMPT-RT
+
+PREEMPT_RT relies on PREEMPT_RCU - only allow RCU to be configured
+interactively in the !PREEMPT_RT case.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Link: http://lkml.kernel.org/n/tip-j1y0phicu6s6pu8guku2vca0@git.kernel.org
+---
+ init/Kconfig |    1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/init/Kconfig b/init/Kconfig
+index 720c182..dbc82d0 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -731,7 +731,6 @@ config RT_GROUP_SCHED
+ 	bool "Group scheduling for SCHED_RR/FIFO"
+ 	depends on EXPERIMENTAL
+ 	depends on CGROUP_SCHED
+-	depends on !PREEMPT_RT_FULL
+ 	default n
+ 	help
+ 	  This feature lets you explicitly allocate real CPU bandwidth
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0205-rcu-Frob-softirq-test.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0205-rcu-Frob-softirq-test.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0205-rcu-Frob-softirq-test.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0205-rcu-Frob-softirq-test.patch)
@@ -0,0 +1,172 @@
+From 600cb11178735715304dbc05dc42a9f4a38e6294 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Sat, 13 Aug 2011 00:23:17 +0200
+Subject: [PATCH 205/271] rcu: Frob softirq test
+
+With RT_FULL we get the below wreckage:
+
+[  126.060484] =======================================================
+[  126.060486] [ INFO: possible circular locking dependency detected ]
+[  126.060489] 3.0.1-rt10+ #30
+[  126.060490] -------------------------------------------------------
+[  126.060492] irq/24-eth0/1235 is trying to acquire lock:
+[  126.060495]  (&(lock)->wait_lock#2){+.+...}, at: [<ffffffff81501c81>] rt_mutex_slowunlock+0x16/0x55
+[  126.060503]
+[  126.060504] but task is already holding lock:
+[  126.060506]  (&p->pi_lock){-...-.}, at: [<ffffffff81074fdc>] try_to_wake_up+0x35/0x429
+[  126.060511]
+[  126.060511] which lock already depends on the new lock.
+[  126.060513]
+[  126.060514]
+[  126.060514] the existing dependency chain (in reverse order) is:
+[  126.060516]
+[  126.060516] -> #1 (&p->pi_lock){-...-.}:
+[  126.060519]        [<ffffffff810afe9e>] lock_acquire+0x145/0x18a
+[  126.060524]        [<ffffffff8150291e>] _raw_spin_lock_irqsave+0x4b/0x85
+[  126.060527]        [<ffffffff810b5aa4>] task_blocks_on_rt_mutex+0x36/0x20f
+[  126.060531]        [<ffffffff815019bb>] rt_mutex_slowlock+0xd1/0x15a
+[  126.060534]        [<ffffffff81501ae3>] rt_mutex_lock+0x2d/0x2f
+[  126.060537]        [<ffffffff810d9020>] rcu_boost+0xad/0xde
+[  126.060541]        [<ffffffff810d90ce>] rcu_boost_kthread+0x7d/0x9b
+[  126.060544]        [<ffffffff8109a760>] kthread+0x99/0xa1
+[  126.060547]        [<ffffffff81509b14>] kernel_thread_helper+0x4/0x10
+[  126.060551]
+[  126.060552] -> #0 (&(lock)->wait_lock#2){+.+...}:
+[  126.060555]        [<ffffffff810af1b8>] __lock_acquire+0x1157/0x1816
+[  126.060558]        [<ffffffff810afe9e>] lock_acquire+0x145/0x18a
+[  126.060561]        [<ffffffff8150279e>] _raw_spin_lock+0x40/0x73
+[  126.060564]        [<ffffffff81501c81>] rt_mutex_slowunlock+0x16/0x55
+[  126.060566]        [<ffffffff81501ce7>] rt_mutex_unlock+0x27/0x29
+[  126.060569]        [<ffffffff810d9f86>] rcu_read_unlock_special+0x17e/0x1c4
+[  126.060573]        [<ffffffff810da014>] __rcu_read_unlock+0x48/0x89
+[  126.060576]        [<ffffffff8106847a>] select_task_rq_rt+0xc7/0xd5
+[  126.060580]        [<ffffffff8107511c>] try_to_wake_up+0x175/0x429
+[  126.060583]        [<ffffffff81075425>] wake_up_process+0x15/0x17
+[  126.060585]        [<ffffffff81080a51>] wakeup_softirqd+0x24/0x26
+[  126.060590]        [<ffffffff81081df9>] irq_exit+0x49/0x55
+[  126.060593]        [<ffffffff8150a3bd>] smp_apic_timer_interrupt+0x8a/0x98
+[  126.060597]        [<ffffffff81509793>] apic_timer_interrupt+0x13/0x20
+[  126.060600]        [<ffffffff810d5952>] irq_forced_thread_fn+0x1b/0x44
+[  126.060603]        [<ffffffff810d582c>] irq_thread+0xde/0x1af
+[  126.060606]        [<ffffffff8109a760>] kthread+0x99/0xa1
+[  126.060608]        [<ffffffff81509b14>] kernel_thread_helper+0x4/0x10
+[  126.060611]
+[  126.060612] other info that might help us debug this:
+[  126.060614]
+[  126.060615]  Possible unsafe locking scenario:
+[  126.060616]
+[  126.060617]        CPU0                    CPU1
+[  126.060619]        ----                    ----
+[  126.060620]   lock(&p->pi_lock);
+[  126.060623]                                lock(&(lock)->wait_lock);
+[  126.060625]                                lock(&p->pi_lock);
+[  126.060627]   lock(&(lock)->wait_lock);
+[  126.060629]
+[  126.060629]  *** DEADLOCK ***
+[  126.060630]
+[  126.060632] 1 lock held by irq/24-eth0/1235:
+[  126.060633]  #0:  (&p->pi_lock){-...-.}, at: [<ffffffff81074fdc>] try_to_wake_up+0x35/0x429
+[  126.060638]
+[  126.060638] stack backtrace:
+[  126.060641] Pid: 1235, comm: irq/24-eth0 Not tainted 3.0.1-rt10+ #30
+[  126.060643] Call Trace:
+[  126.060644]  <IRQ>  [<ffffffff810acbde>] print_circular_bug+0x289/0x29a
+[  126.060651]  [<ffffffff810af1b8>] __lock_acquire+0x1157/0x1816
+[  126.060655]  [<ffffffff810ab3aa>] ? trace_hardirqs_off_caller+0x1f/0x99
+[  126.060658]  [<ffffffff81501c81>] ? rt_mutex_slowunlock+0x16/0x55
+[  126.060661]  [<ffffffff810afe9e>] lock_acquire+0x145/0x18a
+[  126.060664]  [<ffffffff81501c81>] ? rt_mutex_slowunlock+0x16/0x55
+[  126.060668]  [<ffffffff8150279e>] _raw_spin_lock+0x40/0x73
+[  126.060671]  [<ffffffff81501c81>] ? rt_mutex_slowunlock+0x16/0x55
+[  126.060674]  [<ffffffff810d9655>] ? rcu_report_qs_rsp+0x87/0x8c
+[  126.060677]  [<ffffffff81501c81>] rt_mutex_slowunlock+0x16/0x55
+[  126.060680]  [<ffffffff810d9ea3>] ? rcu_read_unlock_special+0x9b/0x1c4
+[  126.060683]  [<ffffffff81501ce7>] rt_mutex_unlock+0x27/0x29
+[  126.060687]  [<ffffffff810d9f86>] rcu_read_unlock_special+0x17e/0x1c4
+[  126.060690]  [<ffffffff810da014>] __rcu_read_unlock+0x48/0x89
+[  126.060693]  [<ffffffff8106847a>] select_task_rq_rt+0xc7/0xd5
+[  126.060696]  [<ffffffff810683da>] ? select_task_rq_rt+0x27/0xd5
+[  126.060701]  [<ffffffff810a852a>] ? clockevents_program_event+0x8e/0x90
+[  126.060704]  [<ffffffff8107511c>] try_to_wake_up+0x175/0x429
+[  126.060708]  [<ffffffff810a95dc>] ? tick_program_event+0x1f/0x21
+[  126.060711]  [<ffffffff81075425>] wake_up_process+0x15/0x17
+[  126.060715]  [<ffffffff81080a51>] wakeup_softirqd+0x24/0x26
+[  126.060718]  [<ffffffff81081df9>] irq_exit+0x49/0x55
+[  126.060721]  [<ffffffff8150a3bd>] smp_apic_timer_interrupt+0x8a/0x98
+[  126.060724]  [<ffffffff81509793>] apic_timer_interrupt+0x13/0x20
+[  126.060726]  <EOI>  [<ffffffff81072855>] ? migrate_disable+0x75/0x12d
+[  126.060733]  [<ffffffff81080a61>] ? local_bh_disable+0xe/0x1f
+[  126.060736]  [<ffffffff81080a70>] ? local_bh_disable+0x1d/0x1f
+[  126.060739]  [<ffffffff810d5952>] irq_forced_thread_fn+0x1b/0x44
+[  126.060742]  [<ffffffff81502ac0>] ? _raw_spin_unlock_irq+0x3b/0x59
+[  126.060745]  [<ffffffff810d582c>] irq_thread+0xde/0x1af
+[  126.060748]  [<ffffffff810d5937>] ? irq_thread_fn+0x3a/0x3a
+[  126.060751]  [<ffffffff810d574e>] ? irq_finalize_oneshot+0xd1/0xd1
+[  126.060754]  [<ffffffff810d574e>] ? irq_finalize_oneshot+0xd1/0xd1
+[  126.060757]  [<ffffffff8109a760>] kthread+0x99/0xa1
+[  126.060761]  [<ffffffff81509b14>] kernel_thread_helper+0x4/0x10
+[  126.060764]  [<ffffffff81069ed7>] ? finish_task_switch+0x87/0x10a
+[  126.060768]  [<ffffffff81502ec4>] ? retint_restore_args+0xe/0xe
+[  126.060771]  [<ffffffff8109a6c7>] ? __init_kthread_worker+0x8c/0x8c
+[  126.060774]  [<ffffffff81509b10>] ? gs_change+0xb/0xb
+
+Because irq_exit() does:
+
+void irq_exit(void)
+{
+	account_system_vtime(current);
+	trace_hardirq_exit();
+	sub_preempt_count(IRQ_EXIT_OFFSET);
+	if (!in_interrupt() && local_softirq_pending())
+		invoke_softirq();
+
+	...
+}
+
+This triggers a wakeup, which uses RCU. Now, if the interrupted task
+has t->rcu_read_unlock_special set, the RCU usage from the wakeup will
+end up in rcu_read_unlock_special(). rcu_read_unlock_special() will
+test for in_irq(), which fails because we just decremented
+preempt_count by IRQ_EXIT_OFFSET, and for in_serving_softirq(), which
+for PREEMPT_RT_FULL reads:
+
+int in_serving_softirq(void)
+{
+	int res;
+
+	preempt_disable();
+	res = __get_cpu_var(local_softirq_runner) == current;
+	preempt_enable();
+	return res;
+}
+
+Which will thus also fail, resulting in the above wreckage.
+
+The 'somewhat' ugly solution is to open-code the preempt_count() test
+in rcu_read_unlock_special().
+
+Also, we're not at all sure how ->rcu_read_unlock_special gets set
+here... so this is very likely a bandaid and more thought is required.
+
+Cc: Paul E. McKenney <paulmck at linux.vnet.ibm.com>
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+---
+ kernel/rcutree_plugin.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
+index 73cab33..2e63942 100644
+--- a/kernel/rcutree_plugin.h
++++ b/kernel/rcutree_plugin.h
+@@ -336,7 +336,7 @@ static noinline void rcu_read_unlock_special(struct task_struct *t)
+ 	}
+ 
+ 	/* Hardware IRQ handlers cannot block. */
+-	if (in_irq() || in_serving_softirq()) {
++	if (preempt_count() & (HARDIRQ_MASK | SOFTIRQ_OFFSET)) {
+ 		local_irq_restore(flags);
+ 		return;
+ 	}
+-- 
+1.7.10
+
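
The open-coded test in the patch above can be modeled in user space. The
following is a minimal sketch, assuming mainline's usual preempt_count bit
layout (the widths below mirror a common 3.x configuration and are
config-dependent, so treat the exact numbers as illustrative):

```c
/* User-space model of the preempt_count layout the patch relies on.
 * PREEMPT occupies bits 0-7, SOFTIRQ bits 8-15, HARDIRQ bits 16-25
 * (illustrative; the real values live in include/linux/hardirq.h). */
#define PREEMPT_BITS   8
#define SOFTIRQ_BITS   8
#define HARDIRQ_BITS   10

#define PREEMPT_SHIFT  0
#define SOFTIRQ_SHIFT  (PREEMPT_SHIFT + PREEMPT_BITS)   /* 8 */
#define HARDIRQ_SHIFT  (SOFTIRQ_SHIFT + SOFTIRQ_BITS)   /* 16 */

#define __IRQ_MASK(x)  ((1UL << (x)) - 1)
#define SOFTIRQ_MASK   (__IRQ_MASK(SOFTIRQ_BITS) << SOFTIRQ_SHIFT)
#define HARDIRQ_MASK   (__IRQ_MASK(HARDIRQ_BITS) << HARDIRQ_SHIFT)
#define SOFTIRQ_OFFSET (1UL << SOFTIRQ_SHIFT)

/* The replacement test from the patch: true in hard-IRQ context or
 * while a softirq is actually being served (SOFTIRQ_OFFSET set), but
 * NOT when softirqs are merely disabled (higher softirq counts). */
static int cannot_block(unsigned long preempt_count)
{
	return (preempt_count & (HARDIRQ_MASK | SOFTIRQ_OFFSET)) != 0;
}
```

The point of reading the counter directly is that it sidesteps RT's
reimplementation of in_serving_softirq(), which compares against the
per-CPU softirq-runner task and so gives the wrong answer in the
interrupt-exit window described above.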

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0206-rcu-Merge-RCU-bh-into-RCU-preempt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0206-rcu-Merge-RCU-bh-into-RCU-preempt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0206-rcu-Merge-RCU-bh-into-RCU-preempt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0206-rcu-Merge-RCU-bh-into-RCU-preempt.patch)
@@ -0,0 +1,264 @@
+From 686ef7c0afa9d00c0d2b3e92640a03466a4eb262 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 5 Oct 2011 11:59:38 -0700
+Subject: [PATCH 206/271] rcu: Merge RCU-bh into RCU-preempt
+
+The Linux kernel has long RCU-bh read-side critical sections that
+intolerably increase scheduling latency under mainline's RCU-bh rules,
+which include RCU-bh read-side critical sections being non-preemptible.
+This patch therefore arranges for RCU-bh to be implemented in terms of
+RCU-preempt for CONFIG_PREEMPT_RT_FULL=y.
+
+This has the downside of defeating the purpose of RCU-bh, namely,
+handling the case where the system is subjected to a network-based
+denial-of-service attack that keeps at least one CPU doing full-time
+softirq processing.  This issue will be fixed by a later commit.
+
+The current commit will need some work to make it appropriate for
+mainline use, for example, it needs to be extended to cover Tiny RCU.
+
+[ paulmck: Added a useful changelog ]
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Signed-off-by: Paul E. McKenney <paulmck at linux.vnet.ibm.com>
+Link: http://lkml.kernel.org/r/20111005185938.GA20403@linux.vnet.ibm.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rcupdate.h |   25 +++++++++++++++++++++++++
+ include/linux/rcutree.h  |   18 ++++++++++++++++--
+ kernel/rcupdate.c        |    2 ++
+ kernel/rcutree.c         |   10 ++++++++++
+ 4 files changed, 53 insertions(+), 2 deletions(-)
+
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index a0082e2..7c31d86 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -92,6 +92,9 @@ extern void call_rcu(struct rcu_head *head,
+ 
+ #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++#define call_rcu_bh	call_rcu
++#else
+ /**
+  * call_rcu_bh() - Queue an RCU for invocation after a quicker grace period.
+  * @head: structure to be used for queueing the RCU updates.
+@@ -112,6 +115,7 @@ extern void call_rcu(struct rcu_head *head,
+  */
+ extern void call_rcu_bh(struct rcu_head *head,
+ 			void (*func)(struct rcu_head *head));
++#endif
+ 
+ /**
+  * call_rcu_sched() - Queue an RCU for invocation after sched grace period.
+@@ -181,7 +185,13 @@ static inline int rcu_preempt_depth(void)
+ 
+ /* Internal to kernel */
+ extern void rcu_sched_qs(int cpu);
++
++#ifndef CONFIG_PREEMPT_RT_FULL
+ extern void rcu_bh_qs(int cpu);
++#else
++static inline void rcu_bh_qs(int cpu) { }
++#endif
++
+ extern void rcu_check_callbacks(int cpu, int user);
+ struct notifier_block;
+ 
+@@ -281,7 +291,14 @@ static inline int rcu_read_lock_held(void)
+  * rcu_read_lock_bh_held() is defined out of line to avoid #include-file
+  * hell.
+  */
++#ifdef CONFIG_PREEMPT_RT_FULL
++static inline int rcu_read_lock_bh_held(void)
++{
++	return rcu_read_lock_held();
++}
++#else
+ extern int rcu_read_lock_bh_held(void);
++#endif
+ 
+ /**
+  * rcu_read_lock_sched_held() - might we be in RCU-sched read-side critical section?
+@@ -684,8 +701,12 @@ static inline void rcu_read_unlock(void)
+ static inline void rcu_read_lock_bh(void)
+ {
+ 	local_bh_disable();
++#ifdef CONFIG_PREEMPT_RT_FULL
++	rcu_read_lock();
++#else
+ 	__acquire(RCU_BH);
+ 	rcu_read_acquire_bh();
++#endif
+ }
+ 
+ /*
+@@ -695,8 +716,12 @@ static inline void rcu_read_lock_bh(void)
+  */
+ static inline void rcu_read_unlock_bh(void)
+ {
++#ifdef CONFIG_PREEMPT_RT_FULL
++	rcu_read_unlock();
++#else
+ 	rcu_read_release_bh();
+ 	__release(RCU_BH);
++#endif
+ 	local_bh_enable();
+ }
+ 
+diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
+index 6745846..800b840 100644
+--- a/include/linux/rcutree.h
++++ b/include/linux/rcutree.h
+@@ -57,7 +57,11 @@ static inline void exit_rcu(void)
+ 
+ #endif /* #else #ifdef CONFIG_TREE_PREEMPT_RCU */
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ extern void synchronize_rcu_bh(void);
++#else
++# define synchronize_rcu_bh()	synchronize_rcu()
++#endif
+ extern void synchronize_sched_expedited(void);
+ extern void synchronize_rcu_expedited(void);
+ 
+@@ -67,19 +71,29 @@ static inline void synchronize_rcu_bh_expedited(void)
+ }
+ 
+ extern void rcu_barrier(void);
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define rcu_barrier_bh		rcu_barrier
++#else
+ extern void rcu_barrier_bh(void);
++#endif
+ extern void rcu_barrier_sched(void);
+ 
+ extern unsigned long rcutorture_testseq;
+ extern unsigned long rcutorture_vernum;
+ extern long rcu_batches_completed(void);
+-extern long rcu_batches_completed_bh(void);
+ extern long rcu_batches_completed_sched(void);
+ 
+ extern void rcu_force_quiescent_state(void);
+-extern void rcu_bh_force_quiescent_state(void);
+ extern void rcu_sched_force_quiescent_state(void);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
++extern void rcu_bh_force_quiescent_state(void);
++extern long rcu_batches_completed_bh(void);
++#else
++# define rcu_bh_force_quiescent_state	rcu_force_quiescent_state
++# define rcu_batches_completed_bh	rcu_batches_completed
++#endif
++
+ /* A context switch is a grace period for RCU-sched and RCU-bh. */
+ static inline int rcu_blocking_is_gp(void)
+ {
+diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c
+index c5b98e5..24dcc71 100644
+--- a/kernel/rcupdate.c
++++ b/kernel/rcupdate.c
+@@ -77,6 +77,7 @@ int debug_lockdep_rcu_enabled(void)
+ }
+ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /**
+  * rcu_read_lock_bh_held() - might we be in RCU-bh read-side critical section?
+  *
+@@ -96,6 +97,7 @@ int rcu_read_lock_bh_held(void)
+ 	return in_softirq() || irqs_disabled();
+ }
+ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
++#endif
+ 
+ #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
+ 
+diff --git a/kernel/rcutree.c b/kernel/rcutree.c
+index 8ef8675..3afb0fd 100644
+--- a/kernel/rcutree.c
++++ b/kernel/rcutree.c
+@@ -170,6 +170,7 @@ void rcu_sched_qs(int cpu)
+ 	rdp->passed_quiesce = 1;
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ void rcu_bh_qs(int cpu)
+ {
+ 	struct rcu_data *rdp = &per_cpu(rcu_bh_data, cpu);
+@@ -180,6 +181,7 @@ void rcu_bh_qs(int cpu)
+ 		trace_rcu_grace_period("rcu_bh", rdp->gpnum, "cpuqs");
+ 	rdp->passed_quiesce = 1;
+ }
++#endif
+ 
+ /*
+  * Note a context switch.  This is a quiescent state for RCU-sched,
+@@ -225,6 +227,7 @@ long rcu_batches_completed_sched(void)
+ }
+ EXPORT_SYMBOL_GPL(rcu_batches_completed_sched);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+  * Return the number of RCU BH batches processed thus far for debug & stats.
+  */
+@@ -242,6 +245,7 @@ void rcu_bh_force_quiescent_state(void)
+ 	force_quiescent_state(&rcu_bh_state, 0);
+ }
+ EXPORT_SYMBOL_GPL(rcu_bh_force_quiescent_state);
++#endif
+ 
+ /*
+  * Record the number of times rcutorture tests have been initiated and
+@@ -1667,6 +1671,7 @@ void call_rcu_sched(struct rcu_head *head, void (*func)(struct rcu_head *rcu))
+ }
+ EXPORT_SYMBOL_GPL(call_rcu_sched);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+  * Queue an RCU for invocation after a quicker grace period.
+  */
+@@ -1675,6 +1680,7 @@ void call_rcu_bh(struct rcu_head *head, void (*func)(struct rcu_head *rcu))
+ 	__call_rcu(head, func, &rcu_bh_state);
+ }
+ EXPORT_SYMBOL_GPL(call_rcu_bh);
++#endif
+ 
+ /**
+  * synchronize_sched - wait until an rcu-sched grace period has elapsed.
+@@ -1707,6 +1713,7 @@ void synchronize_sched(void)
+ }
+ EXPORT_SYMBOL_GPL(synchronize_sched);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /**
+  * synchronize_rcu_bh - wait until an rcu_bh grace period has elapsed.
+  *
+@@ -1723,6 +1730,7 @@ void synchronize_rcu_bh(void)
+ 	wait_rcu_gp(call_rcu_bh);
+ }
+ EXPORT_SYMBOL_GPL(synchronize_rcu_bh);
++#endif
+ 
+ /*
+  * Check to see if there is any immediate RCU-related work to be done
+@@ -1877,6 +1885,7 @@ static void _rcu_barrier(struct rcu_state *rsp,
+ 	mutex_unlock(&rcu_barrier_mutex);
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /**
+  * rcu_barrier_bh - Wait until all in-flight call_rcu_bh() callbacks complete.
+  */
+@@ -1885,6 +1894,7 @@ void rcu_barrier_bh(void)
+ 	_rcu_barrier(&rcu_bh_state, call_rcu_bh);
+ }
+ EXPORT_SYMBOL_GPL(rcu_barrier_bh);
++#endif
+ 
+ /**
+  * rcu_barrier_sched - Wait for in-flight call_rcu_sched() callbacks.
+-- 
+1.7.10
+
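
The mapping in the patch above is mostly mechanical: under
CONFIG_PREEMPT_RT_FULL each RCU-bh entry point becomes an alias for its
RCU-preempt counterpart and the separate bh implementation is compiled
out. A toy user-space model of the aliasing (all names here are
invented, not the kernel's):

```c
/* Toy model of the RT aliasing: with the option enabled, the _bh name
 * is an object-like macro for the preempt flavor, so both names reach
 * the same code. */
#define TOY_PREEMPT_RT_FULL 1

static int grace_periods;

static void toy_synchronize_rcu(void)
{
	grace_periods++;	/* stand-in for waiting out a grace period */
}

#ifdef TOY_PREEMPT_RT_FULL
# define toy_synchronize_rcu_bh toy_synchronize_rcu
#else
static void toy_synchronize_rcu_bh(void)
{
	/* the separate, non-preemptible bh flavor would live here */
}
#endif
```

This is why the patch can delete the bh-specific declarations wholesale:
once the alias is in place, every caller of the _bh API transparently
exercises the preempt implementation.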

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0207-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0207-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0207-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0207-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch)
@@ -0,0 +1,43 @@
+From a9fcfdd233ec38ca43f008e909d18c6114dbc250 Mon Sep 17 00:00:00 2001
+From: John Kacur <jkacur at redhat.com>
+Date: Mon, 14 Nov 2011 02:44:42 +0100
+Subject: [PATCH 207/271] rcu: Fix macro substitution for synchronize_rcu_bh()
+ on RT
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+kernel/rcutorture.c:492: error: ‘synchronize_rcu_bh’ undeclared here (not in a function)
+
+synchronize_rcu_bh() is not just called as a normal function, but can
+also be referenced as a function pointer. When CONFIG_PREEMPT_RT_FULL
+is enabled, synchronize_rcu_bh() is defined as synchronize_rcu(), but
+needs to be defined without the parenthesis because the compiler will
+complain when synchronize_rcu_bh is referenced as a function pointer
+and not a function.
+
+Signed-off-by: John Kacur <jkacur at redhat.com>
+Cc: Paul McKenney <paulmck at linux.vnet.ibm.com>
+Cc: stable-rt at vger.kernel.org
+Link: http://lkml.kernel.org/r/1321235083-21756-1-git-send-email-jkacur@redhat.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rcutree.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
+index 800b840..6e503a3 100644
+--- a/include/linux/rcutree.h
++++ b/include/linux/rcutree.h
+@@ -60,7 +60,7 @@ static inline void exit_rcu(void)
+ #ifndef CONFIG_PREEMPT_RT_FULL
+ extern void synchronize_rcu_bh(void);
+ #else
+-# define synchronize_rcu_bh()	synchronize_rcu()
++# define synchronize_rcu_bh	synchronize_rcu
+ #endif
+ extern void synchronize_sched_expedited(void);
+ extern void synchronize_rcu_expedited(void);
+-- 
+1.7.10
+
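
The one-character-class change above is easy to miss: a function-like
macro only expands where the name is followed by an argument list, so a
bare reference taken as a function pointer is left unexpanded and fails
to compile, exactly the rcutorture error quoted in the changelog. A
user-space illustration of the working object-like form (toy names):

```c
/* With `#define f() g()`, the bare token `f` is not macro-replaced and
 * `&f` names an undeclared symbol. The object-like `#define f g` used
 * by the fix substitutes everywhere, including pointer contexts. */
static int grace_periods;

static void toy_synchronize_rcu(void)
{
	grace_periods++;
}

/* The fixed spelling from the patch, modeled here: */
#define toy_synchronize_rcu_bh toy_synchronize_rcu

/* A caller that consumes the API as a function pointer, the way
 * kernel/rcutorture.c does through its ops tables: */
static void run_sync(void (*sync)(void))
{
	sync();
}
```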

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0208-rcu-more-fallout.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0208-rcu-more-fallout.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0208-rcu-more-fallout.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0208-rcu-more-fallout.patch.patch)
@@ -0,0 +1,30 @@
+From 8f0c27b98405d1c30f048a50a37086249e1e7020 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 14 Nov 2011 10:57:54 +0100
+Subject: [PATCH 208/271] rcu-more-fallout.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/rcutiny.c |    2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/kernel/rcutiny.c b/kernel/rcutiny.c
+index 636af6d..6689097 100644
+--- a/kernel/rcutiny.c
++++ b/kernel/rcutiny.c
+@@ -243,6 +243,7 @@ void call_rcu_sched(struct rcu_head *head, void (*func)(struct rcu_head *rcu))
+ }
+ EXPORT_SYMBOL_GPL(call_rcu_sched);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+  * Post an RCU bottom-half callback to be invoked after any subsequent
+  * quiescent state.
+@@ -252,3 +253,4 @@ void call_rcu_bh(struct rcu_head *head, void (*func)(struct rcu_head *rcu))
+ 	__call_rcu(head, func, &rcu_bh_ctrlblk);
+ }
+ EXPORT_SYMBOL_GPL(call_rcu_bh);
++#endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0209-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0209-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0209-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0209-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch)
@@ -0,0 +1,169 @@
+From 20f740e1cc883044915bf61344a482191d4c3f1c Mon Sep 17 00:00:00 2001
+From: "Paul E. McKenney" <paulmck at linux.vnet.ibm.com>
+Date: Wed, 5 Oct 2011 11:45:18 -0700
+Subject: [PATCH 209/271] rcu: Make ksoftirqd do RCU quiescent states
+
+Implementing RCU-bh in terms of RCU-preempt makes the system vulnerable
+to network-based denial-of-service attacks.  This patch therefore
+makes __do_softirq() invoke rcu_bh_qs(), but only when __do_softirq()
+is running in ksoftirqd context.  A wrapper layer is interposed so that
+other calls to __do_softirq() avoid invoking rcu_bh_qs().  The underlying
+function __do_softirq_common() does the actual work.
+
+The reason that rcu_bh_qs() is bad in these non-ksoftirqd contexts is
+that there might be a local_bh_enable() inside an RCU-preempt read-side
+critical section.  This local_bh_enable() can invoke __do_softirq()
+directly, so if __do_softirq() were to invoke rcu_bh_qs() (which just
+calls rcu_preempt_qs() in the PREEMPT_RT_FULL case), there would be
+an illegal RCU-preempt quiescent state in the middle of an RCU-preempt
+read-side critical section.  Therefore, quiescent states can only happen
+in cases where __do_softirq() is invoked directly from ksoftirqd.
+
+Signed-off-by: Paul E. McKenney <paulmck at linux.vnet.ibm.com>
+Link: http://lkml.kernel.org/r/20111005184518.GA21601@linux.vnet.ibm.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/rcupdate.h |    6 ------
+ kernel/rcutree.c         |    7 ++++++-
+ kernel/rcutree.h         |    1 +
+ kernel/rcutree_plugin.h  |    2 +-
+ kernel/softirq.c         |   20 +++++++++++++-------
+ 5 files changed, 21 insertions(+), 15 deletions(-)
+
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 7c31d86..0e6fb5c 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -185,13 +185,7 @@ static inline int rcu_preempt_depth(void)
+ 
+ /* Internal to kernel */
+ extern void rcu_sched_qs(int cpu);
+-
+-#ifndef CONFIG_PREEMPT_RT_FULL
+ extern void rcu_bh_qs(int cpu);
+-#else
+-static inline void rcu_bh_qs(int cpu) { }
+-#endif
+-
+ extern void rcu_check_callbacks(int cpu, int user);
+ struct notifier_block;
+ 
+diff --git a/kernel/rcutree.c b/kernel/rcutree.c
+index 3afb0fd..3118218 100644
+--- a/kernel/rcutree.c
++++ b/kernel/rcutree.c
+@@ -170,7 +170,12 @@ void rcu_sched_qs(int cpu)
+ 	rdp->passed_quiesce = 1;
+ }
+ 
+-#ifndef CONFIG_PREEMPT_RT_FULL
++#ifdef CONFIG_PREEMPT_RT_FULL
++void rcu_bh_qs(int cpu)
++{
++	rcu_preempt_qs(cpu);
++}
++#else
+ void rcu_bh_qs(int cpu)
+ {
+ 	struct rcu_data *rdp = &per_cpu(rcu_bh_data, cpu);
+diff --git a/kernel/rcutree.h b/kernel/rcutree.h
+index dca495d..b522273 100644
+--- a/kernel/rcutree.h
++++ b/kernel/rcutree.h
+@@ -430,6 +430,7 @@ DECLARE_PER_CPU(char, rcu_cpu_has_work);
+ /* Forward declarations for rcutree_plugin.h */
+ static void rcu_bootup_announce(void);
+ long rcu_batches_completed(void);
++static void rcu_preempt_qs(int cpu);
+ static void rcu_preempt_note_context_switch(int cpu);
+ static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
+ #ifdef CONFIG_HOTPLUG_CPU
+diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
+index 2e63942..936441d 100644
+--- a/kernel/rcutree_plugin.h
++++ b/kernel/rcutree_plugin.h
+@@ -1933,7 +1933,7 @@ EXPORT_SYMBOL_GPL(synchronize_sched_expedited);
+ 
+ #endif /* #else #ifndef CONFIG_SMP */
+ 
+-#if !defined(CONFIG_RCU_FAST_NO_HZ)
++#if 1 /* !defined(CONFIG_RCU_FAST_NO_HZ) */
+ 
+ /*
+  * Check to see if any future RCU-related work will need to be done
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index bef08f3..ca00a68 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -139,7 +139,7 @@ static void wakeup_softirqd(void)
+ 		wake_up_process(tsk);
+ }
+ 
+-static void handle_pending_softirqs(u32 pending, int cpu)
++static void handle_pending_softirqs(u32 pending, int cpu, int need_rcu_bh_qs)
+ {
+ 	struct softirq_action *h = softirq_vec;
+ 	unsigned int prev_count = preempt_count();
+@@ -162,7 +162,8 @@ static void handle_pending_softirqs(u32 pending, int cpu)
+ 			       prev_count, (unsigned int) preempt_count());
+ 			preempt_count() = prev_count;
+ 		}
+-		rcu_bh_qs(cpu);
++		if (need_rcu_bh_qs)
++			rcu_bh_qs(cpu);
+ 	}
+ 	local_irq_disable();
+ }
+@@ -314,7 +315,7 @@ restart:
+ 	/* Reset the pending bitmask before enabling irqs */
+ 	set_softirq_pending(0);
+ 
+-	handle_pending_softirqs(pending, cpu);
++	handle_pending_softirqs(pending, cpu, 1);
+ 
+ 	pending = local_softirq_pending();
+ 	if (pending && --max_restart)
+@@ -384,7 +385,12 @@ static inline void ksoftirqd_clr_sched_params(void) { }
+ static DEFINE_LOCAL_IRQ_LOCK(local_softirq_lock);
+ static DEFINE_PER_CPU(struct task_struct *, local_softirq_runner);
+ 
+-static void __do_softirq(void);
++static void __do_softirq_common(int need_rcu_bh_qs);
++
++void __do_softirq(void)
++{
++	__do_softirq_common(0);
++}
+ 
+ void __init softirq_early_init(void)
+ {
+@@ -455,7 +461,7 @@ EXPORT_SYMBOL(in_serving_softirq);
+  * Called with bh and local interrupts disabled. For full RT cpu must
+  * be pinned.
+  */
+-static void __do_softirq(void)
++static void __do_softirq_common(int need_rcu_bh_qs)
+ {
+ 	u32 pending = local_softirq_pending();
+ 	int cpu = smp_processor_id();
+@@ -469,7 +475,7 @@ static void __do_softirq(void)
+ 
+ 	lockdep_softirq_enter();
+ 
+-	handle_pending_softirqs(pending, cpu);
++	handle_pending_softirqs(pending, cpu, need_rcu_bh_qs);
+ 
+ 	pending = local_softirq_pending();
+ 	if (pending)
+@@ -508,7 +514,7 @@ static int __thread_do_softirq(int cpu)
+ 	 * schedule!
+ 	 */
+ 	if (local_softirq_pending())
+-		__do_softirq();
++		__do_softirq_common(cpu >= 0);
+ 	local_unlock(local_softirq_lock);
+ 	unpin_current_cpu();
+ 	preempt_disable();
+-- 
+1.7.10
+
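
The split into __do_softirq() and __do_softirq_common() above follows a
common layering pattern: one worker parameterized by calling context,
plus thin entry points that encode the context. A sketch of the shape
(names are illustrative, not the kernel's):

```c
#include <stdbool.h>

static int qs_reported;

static void report_qs(void)
{
	qs_reported++;		/* stand-in for rcu_bh_qs() */
}

/* Common worker: runs the pending softirq handlers and reports a
 * quiescent state only when the caller says it is safe to do so. */
static void do_softirq_common(bool need_rcu_bh_qs)
{
	/* ... handle pending softirqs here ... */
	if (need_rcu_bh_qs)
		report_qs();
}

/* Entry used from local_bh_enable() and similar paths: these may run
 * inside an RCU-preempt read-side section, so never report a QS. */
static void do_softirq_inline(void)
{
	do_softirq_common(false);
}

/* Entry used from the ksoftirqd thread, where a QS is legal. */
static void do_softirq_ksoftirqd(void)
{
	do_softirq_common(true);
}
```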

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0210-rt-rcutree-Move-misplaced-prototype.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0210-rt-rcutree-Move-misplaced-prototype.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0210-rt-rcutree-Move-misplaced-prototype.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0210-rt-rcutree-Move-misplaced-prototype.patch)
@@ -0,0 +1,50 @@
+From 554489274951704818f5e11a1246ce15085f539e Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Wed, 14 Dec 2011 12:51:28 +0100
+Subject: [PATCH 210/271] rt/rcutree: Move misplaced prototype
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Fix this warning on x86 defconfig:
+
+  kernel/rcutree.h:433:13: warning: ‘rcu_preempt_qs’ declared ‘static’ but never defined [-Wunused-function]
+
+The #ifdefs and prototypes here are a maze, move it closer to the
+usage site that needs it.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/rcutree.c |    2 ++
+ kernel/rcutree.h |    1 -
+ 2 files changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/rcutree.c b/kernel/rcutree.c
+index 3118218..8c26a49 100644
+--- a/kernel/rcutree.c
++++ b/kernel/rcutree.c
+@@ -171,6 +171,8 @@ void rcu_sched_qs(int cpu)
+ }
+ 
+ #ifdef CONFIG_PREEMPT_RT_FULL
++static void rcu_preempt_qs(int cpu);
++
+ void rcu_bh_qs(int cpu)
+ {
+ 	rcu_preempt_qs(cpu);
+diff --git a/kernel/rcutree.h b/kernel/rcutree.h
+index b522273..dca495d 100644
+--- a/kernel/rcutree.h
++++ b/kernel/rcutree.h
+@@ -430,7 +430,6 @@ DECLARE_PER_CPU(char, rcu_cpu_has_work);
+ /* Forward declarations for rcutree_plugin.h */
+ static void rcu_bootup_announce(void);
+ long rcu_batches_completed(void);
+-static void rcu_preempt_qs(int cpu);
+ static void rcu_preempt_note_context_switch(int cpu);
+ static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp);
+ #ifdef CONFIG_HOTPLUG_CPU
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0211-lglocks-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0211-lglocks-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0211-lglocks-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0211-lglocks-rt.patch.patch)
@@ -0,0 +1,128 @@
+From 38a27afbe2af41fb63f6546d24a89594830c00ce Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 15 Jun 2011 11:02:21 +0200
+Subject: [PATCH 211/271] lglocks-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/lglock.h |   99 ++++++++++++++++++++++++++++++++++++++++++++++++
+ 1 file changed, 99 insertions(+)
+
+diff --git a/include/linux/lglock.h b/include/linux/lglock.h
+index 87f402c..52b289f 100644
+--- a/include/linux/lglock.h
++++ b/include/linux/lglock.h
+@@ -71,6 +71,8 @@
+  extern void name##_global_lock_online(void);				\
+  extern void name##_global_unlock_online(void);				\
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
++
+ #define DEFINE_LGLOCK(name)						\
+ 									\
+  DEFINE_SPINLOCK(name##_cpu_lock);					\
+@@ -197,4 +199,101 @@
+ 	preempt_enable();						\
+  }									\
+  EXPORT_SYMBOL(name##_global_unlock);
++
++#else /* !PREEMPT_RT_FULL */
++#define DEFINE_LGLOCK(name)						\
++									\
++ DEFINE_PER_CPU(struct rt_mutex, name##_lock);					\
++ DEFINE_LGLOCK_LOCKDEP(name);						\
++									\
++ void name##_lock_init(void) {						\
++	int i;								\
++	LOCKDEP_INIT_MAP(&name##_lock_dep_map, #name, &name##_lock_key, 0); \
++	for_each_possible_cpu(i) {					\
++		struct rt_mutex *lock;					\
++		lock = &per_cpu(name##_lock, i);			\
++		rt_mutex_init(lock);					\
++	}								\
++ }									\
++ EXPORT_SYMBOL(name##_lock_init);					\
++									\
++ void name##_local_lock(void) {						\
++	struct rt_mutex *lock;						\
++	migrate_disable();						\
++	rwlock_acquire_read(&name##_lock_dep_map, 0, 0, _THIS_IP_);	\
++	lock = &__get_cpu_var(name##_lock);				\
++	__rt_spin_lock(lock);						\
++ }									\
++ EXPORT_SYMBOL(name##_local_lock);					\
++									\
++ void name##_local_unlock(void) {					\
++	struct rt_mutex *lock;						\
++	rwlock_release(&name##_lock_dep_map, 1, _THIS_IP_);		\
++	lock = &__get_cpu_var(name##_lock);				\
++	__rt_spin_unlock(lock);						\
++	migrate_enable();						\
++ }									\
++ EXPORT_SYMBOL(name##_local_unlock);					\
++									\
++ void name##_local_lock_cpu(int cpu) {					\
++	struct rt_mutex *lock;						\
++	rwlock_acquire_read(&name##_lock_dep_map, 0, 0, _THIS_IP_);	\
++	lock = &per_cpu(name##_lock, cpu);				\
++	__rt_spin_lock(lock);						\
++ }									\
++ EXPORT_SYMBOL(name##_local_lock_cpu);					\
++									\
++ void name##_local_unlock_cpu(int cpu) {				\
++	struct rt_mutex *lock;						\
++	rwlock_release(&name##_lock_dep_map, 1, _THIS_IP_);		\
++	lock = &per_cpu(name##_lock, cpu);				\
++	__rt_spin_unlock(lock);						\
++ }									\
++ EXPORT_SYMBOL(name##_local_unlock_cpu);				\
++									\
++ void name##_global_lock_online(void) {					\
++	int i;								\
++	rwlock_acquire(&name##_lock_dep_map, 0, 0, _RET_IP_);		\
++	for_each_online_cpu(i) {					\
++		struct rt_mutex *lock;					\
++		lock = &per_cpu(name##_lock, i);			\
++		__rt_spin_lock(lock);					\
++	}								\
++ }									\
++ EXPORT_SYMBOL(name##_global_lock_online);				\
++									\
++ void name##_global_unlock_online(void) {				\
++	int i;								\
++	rwlock_release(&name##_lock_dep_map, 1, _RET_IP_);		\
++	for_each_online_cpu(i) {					\
++		struct rt_mutex *lock;					\
++		lock = &per_cpu(name##_lock, i);			\
++		__rt_spin_unlock(lock);					\
++	}								\
++ }									\
++ EXPORT_SYMBOL(name##_global_unlock_online);				\
++									\
++ void name##_global_lock(void) {					\
++	int i;								\
++	rwlock_acquire(&name##_lock_dep_map, 0, 0, _RET_IP_);		\
++	for_each_possible_cpu(i) {					\
++		struct rt_mutex *lock;					\
++		lock = &per_cpu(name##_lock, i);			\
++		__rt_spin_lock(lock);					\
++	}								\
++ }									\
++ EXPORT_SYMBOL(name##_global_lock);					\
++									\
++ void name##_global_unlock(void) {					\
++	int i;								\
++	rwlock_release(&name##_lock_dep_map, 1, _RET_IP_);		\
++	for_each_possible_cpu(i) {					\
++		struct rt_mutex *lock;					\
++		lock = &per_cpu(name##_lock, i);			\
++		__rt_spin_unlock(lock);					\
++	}								\
++ }									\
++ EXPORT_SYMBOL(name##_global_unlock);
++#endif /* PREEMPT_RT_FULL */
++
+ #endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0212-serial-8250-Clean-up-the-locking-for-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0212-serial-8250-Clean-up-the-locking-for-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0212-serial-8250-Clean-up-the-locking-for-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0212-serial-8250-Clean-up-the-locking-for-rt.patch)
@@ -0,0 +1,47 @@
+From f9d8256725aaf4f9bc4ce99ece1e8b8d2c35ebe8 Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:01 -0500
+Subject: [PATCH 212/271] serial: 8250: Clean up the locking for -rt
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/tty/serial/8250.c |   15 +++++----------
+ 1 file changed, 5 insertions(+), 10 deletions(-)
+
+diff --git a/drivers/tty/serial/8250.c b/drivers/tty/serial/8250.c
+index 70585b6..e6d9dc1 100644
+--- a/drivers/tty/serial/8250.c
++++ b/drivers/tty/serial/8250.c
+@@ -2847,14 +2847,10 @@ serial8250_console_write(struct console *co, const char *s, unsigned int count)
+ 
+ 	touch_nmi_watchdog();
+ 
+-	local_irq_save(flags);
+-	if (up->port.sysrq) {
+-		/* serial8250_handle_port() already took the lock */
+-		locked = 0;
+-	} else if (oops_in_progress) {
+-		locked = spin_trylock(&up->port.lock);
+-	} else
+-		spin_lock(&up->port.lock);
++	if (up->port.sysrq || oops_in_progress)
++		locked = spin_trylock_irqsave(&up->port.lock, flags);
++	else
++		spin_lock_irqsave(&up->port.lock, flags);
+ 
+ 	/*
+ 	 *	First save the IER then disable the interrupts
+@@ -2886,8 +2882,7 @@ serial8250_console_write(struct console *co, const char *s, unsigned int count)
+ 		check_modem_status(up);
+ 
+ 	if (locked)
+-		spin_unlock(&up->port.lock);
+-	local_irq_restore(flags);
++		spin_unlock_irqrestore(&up->port.lock, flags);
+ }
+ 
+ static int __init serial8250_console_setup(struct console *co, char *options)
+-- 
+1.7.10
+
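
The rewrite above folds the separate local_irq_save() + spin_lock()
pair into a single spin_lock_irqsave(), while keeping the rule that an
oops-time console write only try-locks so it cannot deadlock on a lock
the crashed context already holds. The control flow can be sketched
with a toy lock (every name below is invented for illustration):

```c
#include <stdbool.h>

struct toy_port { int locked; };

/* Toy single-threaded stand-ins for the spin_*_irqsave() family. */
static bool trylock_irqsave(struct toy_port *p, unsigned long *flags)
{
	*flags = 1;			/* stand-in for saved IRQ state */
	if (p->locked)
		return false;		/* held elsewhere: give up, no deadlock */
	p->locked = 1;
	return true;
}

static void lock_irqsave(struct toy_port *p, unsigned long *flags)
{
	*flags = 1;
	p->locked = 1;			/* a real lock would spin here */
}

static void unlock_irqrestore(struct toy_port *p, unsigned long flags)
{
	(void)flags;
	p->locked = 0;
}

/* The console-write shape after the cleanup: returns whether the lock
 * was actually taken (and hence released). */
static int console_write(struct toy_port *p, bool oops_in_progress)
{
	unsigned long flags;
	int locked = 1;

	if (oops_in_progress)
		locked = trylock_irqsave(p, &flags);
	else
		lock_irqsave(p, &flags);

	/* ... emit characters to the UART ... */

	if (locked)
		unlock_irqrestore(p, flags);
	return locked;
}
```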

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0213-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0213-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0213-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0213-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch)
@@ -0,0 +1,53 @@
+From 4cd86d50f81b22c35c3ab73a6e5ed3e14d16c9cb Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Fri, 3 Jul 2009 08:30:01 -0500
+Subject: [PATCH 213/271] serial: 8250: Call flush_to_ldisc when the irq is
+ threaded
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+---
+ drivers/tty/serial/8250.c |    2 ++
+ drivers/tty/tty_buffer.c  |    4 ++++
+ 2 files changed, 6 insertions(+)
+
+diff --git a/drivers/tty/serial/8250.c b/drivers/tty/serial/8250.c
+index e6d9dc1..b245819 100644
+--- a/drivers/tty/serial/8250.c
++++ b/drivers/tty/serial/8250.c
+@@ -1631,12 +1631,14 @@ static irqreturn_t serial8250_interrupt(int irq, void *dev_id)
+ 
+ 		l = l->next;
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 		if (l == i->head && pass_counter++ > PASS_LIMIT) {
+ 			/* If we hit this, we're dead. */
+ 			printk_ratelimited(KERN_ERR
+ 				"serial8250: too much work for irq%d\n", irq);
+ 			break;
+ 		}
++#endif
+ 	} while (l != end);
+ 
+ 	spin_unlock(&i->lock);
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 6c9b7cd..a56c223 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -493,10 +493,14 @@ void tty_flip_buffer_push(struct tty_struct *tty)
+ 		tty->buf.tail->commit = tty->buf.tail->used;
+ 	spin_unlock_irqrestore(&tty->buf.lock, flags);
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 	if (tty->low_latency)
+ 		flush_to_ldisc(&tty->buf.work);
+ 	else
+ 		schedule_work(&tty->buf.work);
++#else
++	flush_to_ldisc(&tty->buf.work);
++#endif
+ }
+ EXPORT_SYMBOL(tty_flip_buffer_push);
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0214-drivers-tty-fix-omap-lock-crap.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0214-drivers-tty-fix-omap-lock-crap.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0214-drivers-tty-fix-omap-lock-crap.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0214-drivers-tty-fix-omap-lock-crap.patch.patch)
@@ -0,0 +1,43 @@
+From 6e07b71316fcd6f69a2e9230d6ba993af50a3a44 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 28 Jul 2011 13:32:57 +0200
+Subject: [PATCH 214/271] drivers-tty-fix-omap-lock-crap.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/tty/serial/omap-serial.c |    8 +++-----
+ 1 file changed, 3 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
+index 5e713d3..93cdb92 100644
+--- a/drivers/tty/serial/omap-serial.c
++++ b/drivers/tty/serial/omap-serial.c
+@@ -946,13 +946,12 @@ serial_omap_console_write(struct console *co, const char *s,
+ 	unsigned int ier;
+ 	int locked = 1;
+ 
+-	local_irq_save(flags);
+ 	if (up->port.sysrq)
+ 		locked = 0;
+ 	else if (oops_in_progress)
+-		locked = spin_trylock(&up->port.lock);
++		locked = spin_trylock_irqsave(&up->port.lock, flags);
+ 	else
+-		spin_lock(&up->port.lock);
++		spin_lock_irqsave(&up->port.lock, flags);
+ 
+ 	/*
+ 	 * First save the IER then disable the interrupts
+@@ -979,8 +978,7 @@ serial_omap_console_write(struct console *co, const char *s,
+ 		check_modem_status(up);
+ 
+ 	if (locked)
+-		spin_unlock(&up->port.lock);
+-	local_irq_restore(flags);
++		spin_unlock_irqrestore(&up->port.lock, flags);
+ }
+ 
+ static int __init
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0215-rt-Improve-the-serial-console-PASS_LIMIT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0215-rt-Improve-the-serial-console-PASS_LIMIT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0215-rt-Improve-the-serial-console-PASS_LIMIT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0215-rt-Improve-the-serial-console-PASS_LIMIT.patch)
@@ -0,0 +1,61 @@
+From ad90e83698ceff79059fc79c9120a6cd26d050df Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo at elte.hu>
+Date: Wed, 14 Dec 2011 13:05:54 +0100
+Subject: [PATCH 215/271] rt: Improve the serial console PASS_LIMIT
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Beyond the warning:
+
+ drivers/tty/serial/8250.c:1613:6: warning: unused variable ‘pass_counter’ [-Wunused-variable]
+
+the solution of just looping infinitely was ugly - up it to 1 million to
+give it a chance to continue in some really ugly situation.
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/tty/serial/8250.c |   13 ++++++++++---
+ 1 file changed, 10 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/tty/serial/8250.c b/drivers/tty/serial/8250.c
+index b245819..a3d3404 100644
+--- a/drivers/tty/serial/8250.c
++++ b/drivers/tty/serial/8250.c
+@@ -81,7 +81,16 @@ static unsigned int skip_txen_test; /* force skip of txen test at init time */
+ #define DEBUG_INTR(fmt...)	do { } while (0)
+ #endif
+ 
+-#define PASS_LIMIT	512
++/*
++ * On -rt we can have a more delays, and legitimately
++ * so - so don't drop work spuriously and spam the
++ * syslog:
++ */
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define PASS_LIMIT	1000000
++#else
++# define PASS_LIMIT	512
++#endif
+ 
+ #define BOTH_EMPTY 	(UART_LSR_TEMT | UART_LSR_THRE)
+ 
+@@ -1631,14 +1640,12 @@ static irqreturn_t serial8250_interrupt(int irq, void *dev_id)
+ 
+ 		l = l->next;
+ 
+-#ifndef CONFIG_PREEMPT_RT_FULL
+ 		if (l == i->head && pass_counter++ > PASS_LIMIT) {
+ 			/* If we hit this, we're dead. */
+ 			printk_ratelimited(KERN_ERR
+ 				"serial8250: too much work for irq%d\n", irq);
+ 			break;
+ 		}
+-#endif
+ 	} while (l != end);
+ 
+ 	spin_unlock(&i->lock);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0216-fs-namespace-preemption-fix.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0216-fs-namespace-preemption-fix.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0216-fs-namespace-preemption-fix.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0216-fs-namespace-preemption-fix.patch)
@@ -0,0 +1,48 @@
+From 818524b2f0a8fcdfcd31d57db28473bed012408e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 19 Jul 2009 08:44:27 -0500
+Subject: [PATCH 216/271] fs: namespace preemption fix
+
+On RT we cannot loop with preemption disabled here as
+mnt_make_readonly() might have been preempted. We can safely enable
+preemption while waiting for MNT_WRITE_HOLD to be cleared. Safe on !RT
+as well.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ fs/namespace.c |   10 +++++++---
+ 1 file changed, 7 insertions(+), 3 deletions(-)
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index ca4913a..644dbde 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -341,8 +341,14 @@ int mnt_want_write(struct vfsmount *mnt)
+ 	 * incremented count after it has set MNT_WRITE_HOLD.
+ 	 */
+ 	smp_mb();
+-	while (mnt->mnt_flags & MNT_WRITE_HOLD)
++	/*
++	 * No need to keep preemption disabled accross the spin loop.
++	 */
++	while (mnt->mnt_flags & MNT_WRITE_HOLD) {
++		preempt_enable();
+ 		cpu_relax();
++		preempt_disable();
++	}
+ 	/*
+ 	 * After the slowpath clears MNT_WRITE_HOLD, mnt_is_readonly will
+ 	 * be set to match its requirements. So we must not load that until
+@@ -352,9 +358,7 @@ int mnt_want_write(struct vfsmount *mnt)
+ 	if (__mnt_is_readonly(mnt)) {
+ 		mnt_dec_writers(mnt);
+ 		ret = -EROFS;
+-		goto out;
+ 	}
+-out:
+ 	preempt_enable();
+ 	return ret;
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0217-mm-protect-activate-switch-mm.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0217-mm-protect-activate-switch-mm.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0217-mm-protect-activate-switch-mm.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0217-mm-protect-activate-switch-mm.patch.patch)
@@ -0,0 +1,51 @@
+From 17888fdacbf63accc5bb522b870998eb169bb489 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 4 Jul 2011 09:48:40 +0200
+Subject: [PATCH 217/271] mm-protect-activate-switch-mm.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ fs/exec.c        |    2 ++
+ mm/mmu_context.c |    2 ++
+ 2 files changed, 4 insertions(+)
+
+diff --git a/fs/exec.c b/fs/exec.c
+index 160cd2f..2d44974 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -837,10 +837,12 @@ static int exec_mmap(struct mm_struct *mm)
+ 		}
+ 	}
+ 	task_lock(tsk);
++	local_irq_disable_rt();
+ 	active_mm = tsk->active_mm;
+ 	tsk->mm = mm;
+ 	tsk->active_mm = mm;
+ 	activate_mm(active_mm, mm);
++	local_irq_enable_rt();
+ 	task_unlock(tsk);
+ 	arch_pick_mmap_layout(mm);
+ 	if (old_mm) {
+diff --git a/mm/mmu_context.c b/mm/mmu_context.c
+index cf332bc..64ce279 100644
+--- a/mm/mmu_context.c
++++ b/mm/mmu_context.c
+@@ -26,6 +26,7 @@ void use_mm(struct mm_struct *mm)
+ 	struct task_struct *tsk = current;
+ 
+ 	task_lock(tsk);
++	local_irq_disable_rt();
+ 	active_mm = tsk->active_mm;
+ 	if (active_mm != mm) {
+ 		atomic_inc(&mm->mm_count);
+@@ -33,6 +34,7 @@ void use_mm(struct mm_struct *mm)
+ 	}
+ 	tsk->mm = mm;
+ 	switch_mm(active_mm, mm, tsk);
++	local_irq_enable_rt();
+ 	task_unlock(tsk);
+ 
+ 	if (active_mm != mm)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0218-fs-block-rt-support.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0218-fs-block-rt-support.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0218-fs-block-rt-support.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0218-fs-block-rt-support.patch.patch)
@@ -0,0 +1,48 @@
+From d1a20aa23e3d3a7f863359c7efeac7c2e25168e3 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 14 Jun 2011 17:05:09 +0200
+Subject: [PATCH 218/271] fs-block-rt-support.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ block/blk-core.c |    2 +-
+ fs/file.c        |    4 ++--
+ 2 files changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 7366ad4..ca732c0 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -235,7 +235,7 @@ EXPORT_SYMBOL(blk_delay_queue);
+  **/
+ void blk_start_queue(struct request_queue *q)
+ {
+-	WARN_ON(!irqs_disabled());
++	WARN_ON_NONRT(!irqs_disabled());
+ 
+ 	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
+ 	__blk_run_queue(q);
+diff --git a/fs/file.c b/fs/file.c
+index 375472d..fd03258 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -105,14 +105,14 @@ void free_fdtable_rcu(struct rcu_head *rcu)
+ 		kfree(fdt->open_fds);
+ 		kfree(fdt);
+ 	} else {
+-		fddef = &get_cpu_var(fdtable_defer_list);
++		fddef = &per_cpu(fdtable_defer_list, get_cpu_light());
+ 		spin_lock(&fddef->lock);
+ 		fdt->next = fddef->next;
+ 		fddef->next = fdt;
+ 		/* vmallocs are handled from the workqueue context */
+ 		schedule_work(&fddef->wq);
+ 		spin_unlock(&fddef->lock);
+-		put_cpu_var(fdtable_defer_list);
++		put_cpu_light();
+ 	}
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0219-fs-ntfs-disable-interrupt-only-on-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0219-fs-ntfs-disable-interrupt-only-on-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0219-fs-ntfs-disable-interrupt-only-on-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0219-fs-ntfs-disable-interrupt-only-on-RT.patch)
@@ -0,0 +1,64 @@
+From 9e7b71e9591413798128a375c2860550a0e17690 Mon Sep 17 00:00:00 2001
+From: Mike Galbraith <efault at gmx.de>
+Date: Fri, 3 Jul 2009 08:44:12 -0500
+Subject: [PATCH 219/271] fs: ntfs: disable interrupt only on !RT
+
+On Sat, 2007-10-27 at 11:44 +0200, Ingo Molnar wrote:
+> * Nick Piggin <nickpiggin at yahoo.com.au> wrote:
+>
+> > > [10138.175796]  [<c0105de3>] show_trace+0x12/0x14
+> > > [10138.180291]  [<c0105dfb>] dump_stack+0x16/0x18
+> > > [10138.184769]  [<c011609f>] native_smp_call_function_mask+0x138/0x13d
+> > > [10138.191117]  [<c0117606>] smp_call_function+0x1e/0x24
+> > > [10138.196210]  [<c012f85c>] on_each_cpu+0x25/0x50
+> > > [10138.200807]  [<c0115c74>] flush_tlb_all+0x1e/0x20
+> > > [10138.205553]  [<c016caaf>] kmap_high+0x1b6/0x417
+> > > [10138.210118]  [<c011ec88>] kmap+0x4d/0x4f
+> > > [10138.214102]  [<c026a9d8>] ntfs_end_buffer_async_read+0x228/0x2f9
+> > > [10138.220163]  [<c01a0e9e>] end_bio_bh_io_sync+0x26/0x3f
+> > > [10138.225352]  [<c01a2b09>] bio_endio+0x42/0x6d
+> > > [10138.229769]  [<c02c2a08>] __end_that_request_first+0x115/0x4ac
+> > > [10138.235682]  [<c02c2da7>] end_that_request_chunk+0x8/0xa
+> > > [10138.241052]  [<c0365943>] ide_end_request+0x55/0x10a
+> > > [10138.246058]  [<c036dae3>] ide_dma_intr+0x6f/0xac
+> > > [10138.250727]  [<c0366d83>] ide_intr+0x93/0x1e0
+> > > [10138.255125]  [<c015afb4>] handle_IRQ_event+0x5c/0xc9
+> >
+> > Looks like ntfs is kmap()ing from interrupt context. Should be using
+> > kmap_atomic instead, I think.
+>
+> it's not atomic interrupt context but irq thread context - and -rt
+> remaps kmap_atomic() to kmap() internally.
+
+Hm.  Looking at the change to mm/bounce.c, perhaps I should do this
+instead?
+
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ fs/ntfs/aops.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
+index 7fb7f1b..4c8095c 100644
+--- a/fs/ntfs/aops.c
++++ b/fs/ntfs/aops.c
+@@ -144,13 +144,13 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
+ 		recs = PAGE_CACHE_SIZE / rec_size;
+ 		/* Should have been verified before we got here... */
+ 		BUG_ON(!recs);
+-		local_irq_save(flags);
++		local_irq_save_nort(flags);
+ 		kaddr = kmap_atomic(page, KM_BIO_SRC_IRQ);
+ 		for (i = 0; i < recs; i++)
+ 			post_read_mst_fixup((NTFS_RECORD*)(kaddr +
+ 					i * rec_size), rec_size);
+ 		kunmap_atomic(kaddr, KM_BIO_SRC_IRQ);
+-		local_irq_restore(flags);
++		local_irq_restore_nort(flags);
+ 		flush_dcache_page(page);
+ 		if (likely(page_uptodate && !PageError(page)))
+ 			SetPageUptodate(page);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0220-x86-Convert-mce-timer-to-hrtimer.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0220-x86-Convert-mce-timer-to-hrtimer.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0220-x86-Convert-mce-timer-to-hrtimer.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0220-x86-Convert-mce-timer-to-hrtimer.patch)
@@ -0,0 +1,150 @@
+From afb22a2588eb56bac19438dacbba0ec240c67721 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 13 Dec 2010 16:33:39 +0100
+Subject: [PATCH 220/271] x86: Convert mce timer to hrtimer
+
+mce_timer is started in atomic contexts of cpu bringup. This results
+in might_sleep() warnings on RT. Convert mce_timer to a hrtimer to
+avoid this.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kernel/cpu/mcheck/mce.c |   49 ++++++++++++++++++--------------------
+ 1 file changed, 23 insertions(+), 26 deletions(-)
+
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index b0f1271..48281f0 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -38,6 +38,7 @@
+ #include <linux/debugfs.h>
+ #include <linux/irq_work.h>
+ #include <linux/export.h>
++#include <linux/jiffies.h>
+ 
+ #include <asm/processor.h>
+ #include <asm/mce.h>
+@@ -1114,17 +1115,14 @@ void mce_log_therm_throt_event(__u64 status)
+  * poller finds an MCE, poll 2x faster.  When the poller finds no more
+  * errors, poll 2x slower (up to check_interval seconds).
+  */
+-static int check_interval = 5 * 60; /* 5 minutes */
++static unsigned long check_interval = 5 * 60; /* 5 minutes */
+ 
+-static DEFINE_PER_CPU(int, mce_next_interval); /* in jiffies */
+-static DEFINE_PER_CPU(struct timer_list, mce_timer);
++static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
++static DEFINE_PER_CPU(struct hrtimer, mce_timer);
+ 
+-static void mce_start_timer(unsigned long data)
++static enum hrtimer_restart mce_start_timer(struct hrtimer *timer)
+ {
+-	struct timer_list *t = &per_cpu(mce_timer, data);
+-	int *n;
+-
+-	WARN_ON(smp_processor_id() != data);
++	unsigned long *n;
+ 
+ 	if (mce_available(__this_cpu_ptr(&cpu_info))) {
+ 		machine_check_poll(MCP_TIMESTAMP,
+@@ -1137,21 +1135,22 @@ static void mce_start_timer(unsigned long data)
+ 	 */
+ 	n = &__get_cpu_var(mce_next_interval);
+ 	if (mce_notify_irq())
+-		*n = max(*n/2, HZ/100);
++		*n = max(*n/2, HZ/100UL);
+ 	else
+-		*n = min(*n*2, (int)round_jiffies_relative(check_interval*HZ));
++		*n = min(*n*2, round_jiffies_relative(check_interval*HZ));
+ 
+-	t->expires = jiffies + *n;
+-	add_timer_on(t, smp_processor_id());
++	hrtimer_forward(timer, timer->base->get_time(),
++			ns_to_ktime(jiffies_to_usecs(*n) * 1000));
++	return HRTIMER_RESTART;
+ }
+ 
+-/* Must not be called in IRQ context where del_timer_sync() can deadlock */
++/* Must not be called in IRQ context where hrtimer_cancel() can deadlock */
+ static void mce_timer_delete_all(void)
+ {
+ 	int cpu;
+ 
+ 	for_each_online_cpu(cpu)
+-		del_timer_sync(&per_cpu(mce_timer, cpu));
++		hrtimer_cancel(&per_cpu(mce_timer, cpu));
+ }
+ 
+ static void mce_do_trigger(struct work_struct *work)
+@@ -1383,10 +1382,11 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
+ 
+ static void __mcheck_cpu_init_timer(void)
+ {
+-	struct timer_list *t = &__get_cpu_var(mce_timer);
+-	int *n = &__get_cpu_var(mce_next_interval);
++	struct hrtimer *t = &__get_cpu_var(mce_timer);
++	unsigned long *n = &__get_cpu_var(mce_next_interval);
+ 
+-	setup_timer(t, mce_start_timer, smp_processor_id());
++	hrtimer_init(t, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	t->function = mce_start_timer;
+ 
+ 	if (mce_ignore_ce)
+ 		return;
+@@ -1394,8 +1394,9 @@ static void __mcheck_cpu_init_timer(void)
+ 	*n = check_interval * HZ;
+ 	if (!*n)
+ 		return;
+-	t->expires = round_jiffies(jiffies + *n);
+-	add_timer_on(t, smp_processor_id());
++
++	hrtimer_start_range_ns(t, ns_to_ktime(jiffies_to_usecs(*n) * 1000),
++			       0 , HRTIMER_MODE_REL_PINNED);
+ }
+ 
+ /* Handle unconfigured int18 (should never happen) */
+@@ -2031,6 +2032,8 @@ static void __cpuinit mce_disable_cpu(void *h)
+ 	if (!mce_available(__this_cpu_ptr(&cpu_info)))
+ 		return;
+ 
++	hrtimer_cancel(&__get_cpu_var(mce_timer));
++
+ 	if (!(action & CPU_TASKS_FROZEN))
+ 		cmci_clear();
+ 	for (i = 0; i < banks; i++) {
+@@ -2057,6 +2060,7 @@ static void __cpuinit mce_reenable_cpu(void *h)
+ 		if (b->init)
+ 			wrmsrl(MSR_IA32_MCx_CTL(i), b->ctl);
+ 	}
++	__mcheck_cpu_init_timer();
+ }
+ 
+ /* Get notified when a cpu comes on/off. Be hotplug friendly. */
+@@ -2064,7 +2068,6 @@ static int __cpuinit
+ mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
+ {
+ 	unsigned int cpu = (unsigned long)hcpu;
+-	struct timer_list *t = &per_cpu(mce_timer, cpu);
+ 
+ 	switch (action) {
+ 	case CPU_ONLINE:
+@@ -2081,16 +2084,10 @@ mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
+ 		break;
+ 	case CPU_DOWN_PREPARE:
+ 	case CPU_DOWN_PREPARE_FROZEN:
+-		del_timer_sync(t);
+ 		smp_call_function_single(cpu, mce_disable_cpu, &action, 1);
+ 		break;
+ 	case CPU_DOWN_FAILED:
+ 	case CPU_DOWN_FAILED_FROZEN:
+-		if (!mce_ignore_ce && check_interval) {
+-			t->expires = round_jiffies(jiffies +
+-					   __get_cpu_var(mce_next_interval));
+-			add_timer_on(t, cpu);
+-		}
+ 		smp_call_function_single(cpu, mce_reenable_cpu, &action, 1);
+ 		break;
+ 	case CPU_POST_DEAD:
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0221-x86-stackprotector-Avoid-random-pool-on-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0221-x86-stackprotector-Avoid-random-pool-on-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0221-x86-stackprotector-Avoid-random-pool-on-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0221-x86-stackprotector-Avoid-random-pool-on-rt.patch)
@@ -0,0 +1,52 @@
+From 6929c91ac27618fa7e49f22f9a3599f207a926e5 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Thu, 16 Dec 2010 14:25:18 +0100
+Subject: [PATCH 221/271] x86: stackprotector: Avoid random pool on rt
+
+CPU bringup calls into the random pool to initialize the stack
+canary. During boot that works nicely even on RT as the might sleep
+checks are disabled. During CPU hotplug the might sleep checks
+trigger. Making the locks in random raw is a major PITA, so avoid the
+call on RT is the only sensible solution. This is basically the same
+randomness which we get during boot where the random pool has no
+entropy and we rely on the TSC randomnness.
+
+Reported-by: Carsten Emde <carsten.emde at osadl.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/include/asm/stackprotector.h |   10 +++++++++-
+ 1 file changed, 9 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
+index 1575177..ac0703b 100644
+--- a/arch/x86/include/asm/stackprotector.h
++++ b/arch/x86/include/asm/stackprotector.h
+@@ -58,7 +58,7 @@
+  */
+ static __always_inline void boot_init_stack_canary(void)
+ {
+-	u64 canary;
++	u64 uninitialized_var(canary);
+ 	u64 tsc;
+ 
+ #ifdef CONFIG_X86_64
+@@ -69,8 +69,16 @@ static __always_inline void boot_init_stack_canary(void)
+ 	 * of randomness. The TSC only matters for very early init,
+ 	 * there it already has some randomness on most systems. Later
+ 	 * on during the bootup the random pool has true entropy too.
++	 *
++	 * For preempt-rt we need to weaken the randomness a bit, as
++	 * we can't call into the random generator from atomic context
++	 * due to locking constraints. We just leave canary
++	 * uninitialized and use the TSC based randomness on top of
++	 * it.
+ 	 */
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 	get_random_bytes(&canary, sizeof(canary));
++#endif
+ 	tsc = __native_read_tsc();
+ 	canary += tsc + (tsc << 32UL);
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0222-x86-Use-generic-rwsem_spinlocks-on-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0222-x86-Use-generic-rwsem_spinlocks-on-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0222-x86-Use-generic-rwsem_spinlocks-on-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0222-x86-Use-generic-rwsem_spinlocks-on-rt.patch)
@@ -0,0 +1,33 @@
+From c03c30327d32234c43a859522fcafd9d892e8998 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 26 Jul 2009 02:21:32 +0200
+Subject: [PATCH 222/271] x86: Use generic rwsem_spinlocks on -rt
+
+Simplifies the separation of anon_rw_semaphores and rw_semaphores for
+-rt.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/Kconfig |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index efb4294..e084a73 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -161,10 +161,10 @@ config ARCH_MAY_HAVE_PC_FDC
+ 	def_bool ISA_DMA_API
+ 
+ config RWSEM_GENERIC_SPINLOCK
+-	def_bool !X86_XADD
++	def_bool !X86_XADD || PREEMPT_RT_FULL
+ 
+ config RWSEM_XCHGADD_ALGORITHM
+-	def_bool X86_XADD
++	def_bool X86_XADD && !RWSEM_GENERIC_SPINLOCK && !PREEMPT_RT_FULL
+ 
+ config ARCH_HAS_CPU_IDLE_WAIT
+ 	def_bool y
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0223-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0223-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0223-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0223-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch)
@@ -0,0 +1,113 @@
+From 37a8c7948f62d89eefa1fa75c94d2cbef308c409 Mon Sep 17 00:00:00 2001
+From: Andi Kleen <ak at suse.de>
+Date: Fri, 3 Jul 2009 08:44:10 -0500
+Subject: [PATCH 223/271] x86: Disable IST stacks for debug/int 3/stack fault
+ for PREEMPT_RT
+
+Normally the x86-64 trap handlers for debug/int 3/stack fault run
+on a special interrupt stack to make them more robust
+when dealing with kernel code.
+
+The PREEMPT_RT kernel can sleep in locks even while allocating
+GFP_ATOMIC memory. When one of these trap handlers needs to send
+real time signals for ptrace it allocates memory and could then
+try to to schedule.  But it is not allowed to schedule on a
+IST stack. This can cause warnings and hangs.
+
+This patch disables the IST stacks for these handlers for PREEMPT_RT
+kernel. Instead let them run on the normal process stack.
+
+The kernel only really needs the ISTs here to make kernel debuggers more
+robust in case someone sets a break point somewhere where the stack is
+invalid. But there are no kernel debuggers in the standard kernel
+that do this.
+
+It also means kprobes cannot be set in situations with invalid stack;
+but that sounds like a reasonable restriction.
+
+The stack fault change could minimally impact oops quality, but not very
+much because stack faults are fairly rare.
+
+A better solution would be to use similar logic as the NMI "paranoid"
+path: check if signal is for user space, if yes go back to entry.S, switch stack,
+call sync_regs, then do the signal sending etc.
+
+But this patch is much simpler and should work too with minimal impact.
+
+Signed-off-by: Andi Kleen <ak at suse.de>
+Signed-off-by: Ingo Molnar <mingo at elte.hu>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/include/asm/page_64_types.h |   21 +++++++++++++++------
+ arch/x86/kernel/cpu/common.c         |    2 ++
+ arch/x86/kernel/dumpstack_64.c       |    4 ++++
+ 3 files changed, 21 insertions(+), 6 deletions(-)
+
+diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
+index 7639dbf..0883ecd 100644
+--- a/arch/x86/include/asm/page_64_types.h
++++ b/arch/x86/include/asm/page_64_types.h
+@@ -14,12 +14,21 @@
+ #define IRQ_STACK_ORDER 2
+ #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER)
+ 
+-#define STACKFAULT_STACK 1
+-#define DOUBLEFAULT_STACK 2
+-#define NMI_STACK 3
+-#define DEBUG_STACK 4
+-#define MCE_STACK 5
+-#define N_EXCEPTION_STACKS 5  /* hw limit: 7 */
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define STACKFAULT_STACK 0
++# define DOUBLEFAULT_STACK 1
++# define NMI_STACK 2
++# define DEBUG_STACK 0
++# define MCE_STACK 3
++# define N_EXCEPTION_STACKS 3  /* hw limit: 7 */
++#else
++# define STACKFAULT_STACK 1
++# define DOUBLEFAULT_STACK 2
++# define NMI_STACK 3
++# define DEBUG_STACK 4
++# define MCE_STACK 5
++# define N_EXCEPTION_STACKS 5  /* hw limit: 7 */
++#endif
+ 
+ #define PUD_PAGE_SIZE		(_AC(1, UL) << PUD_SHIFT)
+ #define PUD_PAGE_MASK		(~(PUD_PAGE_SIZE-1))
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index aa003b1..d39b525 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1055,7 +1055,9 @@ DEFINE_PER_CPU(unsigned int, irq_count) = -1;
+  */
+ static const unsigned int exception_stack_sizes[N_EXCEPTION_STACKS] = {
+ 	  [0 ... N_EXCEPTION_STACKS - 1]	= EXCEPTION_STKSZ,
++#if DEBUG_STACK > 0
+ 	  [DEBUG_STACK - 1]			= DEBUG_STKSZ
++#endif
+ };
+ 
+ static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
+diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
+index 6d728d9..352beb7 100644
+--- a/arch/x86/kernel/dumpstack_64.c
++++ b/arch/x86/kernel/dumpstack_64.c
+@@ -21,10 +21,14 @@
+ 		(N_EXCEPTION_STACKS + DEBUG_STKSZ/EXCEPTION_STKSZ - 2)
+ 
+ static char x86_stack_ids[][8] = {
++#if DEBUG_STACK > 0
+ 		[ DEBUG_STACK-1			]	= "#DB",
++#endif
+ 		[ NMI_STACK-1			]	= "NMI",
+ 		[ DOUBLEFAULT_STACK-1		]	= "#DF",
++#if STACKFAULT_STACK > 0
+ 		[ STACKFAULT_STACK-1		]	= "#SS",
++#endif
+ 		[ MCE_STACK-1			]	= "#MC",
+ #if DEBUG_STKSZ > EXCEPTION_STKSZ
+ 		[ N_EXCEPTION_STACKS ...
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0224-workqueue-use-get-cpu-light.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0224-workqueue-use-get-cpu-light.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0224-workqueue-use-get-cpu-light.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0224-workqueue-use-get-cpu-light.patch.patch)
@@ -0,0 +1,28 @@
+From 1882e11963e2c548cf2e1b6db443ae57a36aef6e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 21:42:26 +0200
+Subject: [PATCH 224/271] workqueue-use-get-cpu-light.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/workqueue.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 8bdc220..5d23c05b 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1063,8 +1063,8 @@ int queue_work(struct workqueue_struct *wq, struct work_struct *work)
+ {
+ 	int ret;
+ 
+-	ret = queue_work_on(get_cpu(), wq, work);
+-	put_cpu();
++	ret = queue_work_on(get_cpu_light(), wq, work);
++	put_cpu_light();
+ 
+ 	return ret;
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0225-epoll.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0225-epoll.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0225-epoll.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0225-epoll.patch.patch)
@@ -0,0 +1,32 @@
+From 324df13ee0fc61d910ac83df6d8e597ec65efe42 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 8 Jul 2011 16:35:35 +0200
+Subject: [PATCH 225/271] epoll.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ fs/eventpoll.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 4d9d3a4..af35db1 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -464,12 +464,12 @@ static int ep_poll_wakeup_proc(void *priv, void *cookie, int call_nests)
+  */
+ static void ep_poll_safewake(wait_queue_head_t *wq)
+ {
+-	int this_cpu = get_cpu();
++	int this_cpu = get_cpu_light();
+ 
+ 	ep_call_nested(&poll_safewake_ncalls, EP_MAX_NESTS,
+ 		       ep_poll_wakeup_proc, NULL, wq, (void *) (long) this_cpu);
+ 
+-	put_cpu();
++	put_cpu_light();
+ }
+ 
+ static void ep_remove_wait_queue(struct eppoll_entry *pwq)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0226-mm-vmalloc.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0226-mm-vmalloc.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0226-mm-vmalloc.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0226-mm-vmalloc.patch.patch)
@@ -0,0 +1,70 @@
+From db408def0885825ff24e72d6b842ab179fb8254c Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 12 Jul 2011 11:39:36 +0200
+Subject: [PATCH 226/271] mm-vmalloc.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ mm/vmalloc.c |   14 ++++++++------
+ 1 file changed, 8 insertions(+), 6 deletions(-)
+
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index eeba3bb..66df815 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -782,7 +782,7 @@ static struct vmap_block *new_vmap_block(gfp_t gfp_mask)
+ 	struct vmap_block *vb;
+ 	struct vmap_area *va;
+ 	unsigned long vb_idx;
+-	int node, err;
++	int node, err, cpu;
+ 
+ 	node = numa_node_id();
+ 
+@@ -821,12 +821,13 @@ static struct vmap_block *new_vmap_block(gfp_t gfp_mask)
+ 	BUG_ON(err);
+ 	radix_tree_preload_end();
+ 
+-	vbq = &get_cpu_var(vmap_block_queue);
++	cpu = get_cpu_light();
++	vbq = &__get_cpu_var(vmap_block_queue);
+ 	vb->vbq = vbq;
+ 	spin_lock(&vbq->lock);
+ 	list_add_rcu(&vb->free_list, &vbq->free);
+ 	spin_unlock(&vbq->lock);
+-	put_cpu_var(vmap_block_queue);
++	put_cpu_light();
+ 
+ 	return vb;
+ }
+@@ -900,7 +901,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
+ 	struct vmap_block *vb;
+ 	unsigned long addr = 0;
+ 	unsigned int order;
+-	int purge = 0;
++	int purge = 0, cpu;
+ 
+ 	BUG_ON(size & ~PAGE_MASK);
+ 	BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
+@@ -908,7 +909,8 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
+ 
+ again:
+ 	rcu_read_lock();
+-	vbq = &get_cpu_var(vmap_block_queue);
++	cpu = get_cpu_light();
++	vbq = &__get_cpu_var(vmap_block_queue);
+ 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
+ 		int i;
+ 
+@@ -945,7 +947,7 @@ next:
+ 	if (purge)
+ 		purge_fragmented_blocks_thiscpu();
+ 
+-	put_cpu_var(vmap_block_queue);
++	put_cpu_light();
+ 	rcu_read_unlock();
+ 
+ 	if (!addr) {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0227-workqueue-Fix-cpuhotplug-trainwreck.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0227-workqueue-Fix-cpuhotplug-trainwreck.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0227-workqueue-Fix-cpuhotplug-trainwreck.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0227-workqueue-Fix-cpuhotplug-trainwreck.patch)
@@ -0,0 +1,810 @@
+From be8c122edb51e40f2a88c9fbe7f3effc7015ec69 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Fri, 30 Sep 2011 11:57:58 +0200
+Subject: [PATCH 227/271] workqueue: Fix cpuhotplug trainwreck
+
+The current workqueue code does crazy stuff on cpu unplug, it relies on
+forced affine breakage, thereby violating per-cpu expectations. Worse,
+it tries to re-attach to a cpu if the thing comes up again before all
+previously queued works are finished. This breaks (admittedly bonkers)
+cpu-hotplug use that relies on a down-up cycle to push all usage away.
+
+Introduce a new WQ_NON_AFFINE flag that indicates a per-cpu workqueue
+will not respect cpu affinity and use this to migrate all its pending
+works to whatever cpu is doing cpu-down.
+
+This also adds a warning for queue_work_on() users which warns when it is
+used on WQ_NON_AFFINE workqueues, for the API implies you care about
+what cpu things are run on when such workqueues cannot guarantee this.
+
+For the rest, simply flush all per-cpu works and don't mess about.
+This also means that currently all workqueues that are manually
+flushing things on cpu-down in order to provide the per-cpu guarantee
+no longer need to do so.
+
+In short, we tell the WQ what we want it to do, provide validation for
+this and lose ~250 lines of code.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/cpu.h       |    6 +-
+ include/linux/workqueue.h |    5 +-
+ kernel/workqueue.c        |  556 ++++++++++++---------------------------------
+ 3 files changed, 152 insertions(+), 415 deletions(-)
+
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index c46ec3e..72e90bb 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -66,8 +66,10 @@ enum {
+ 	/* migration should happen before other stuff but after perf */
+ 	CPU_PRI_PERF		= 20,
+ 	CPU_PRI_MIGRATION	= 10,
+-	/* prepare workqueues for other notifiers */
+-	CPU_PRI_WORKQUEUE	= 5,
++
++	CPU_PRI_WORKQUEUE_ACTIVE	= 5,  /* prepare workqueues for others */
++	CPU_PRI_NORMAL			= 0,
++	CPU_PRI_WORKQUEUE_INACTIVE	= -5, /* flush workqueues after others */
+ };
+ 
+ #define CPU_ONLINE		0x0002 /* CPU (unsigned)v is up */
+diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
+index e228ca9..3d8ac9d 100644
+--- a/include/linux/workqueue.h
++++ b/include/linux/workqueue.h
+@@ -254,9 +254,10 @@ enum {
+ 	WQ_MEM_RECLAIM		= 1 << 3, /* may be used for memory reclaim */
+ 	WQ_HIGHPRI		= 1 << 4, /* high priority */
+ 	WQ_CPU_INTENSIVE	= 1 << 5, /* cpu instensive workqueue */
++	WQ_NON_AFFINE		= 1 << 6, /* free to move works around cpus */
+ 
+-	WQ_DRAINING		= 1 << 6, /* internal: workqueue is draining */
+-	WQ_RESCUER		= 1 << 7, /* internal: workqueue has rescuer */
++	WQ_DRAINING		= 1 << 7, /* internal: workqueue is draining */
++	WQ_RESCUER		= 1 << 8, /* internal: workqueue has rescuer */
+ 
+ 	WQ_MAX_ACTIVE		= 512,	  /* I like 512, better ideas? */
+ 	WQ_MAX_UNBOUND_PER_CPU	= 4,	  /* 4 * #cpus for unbound wq */
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 5d23c05b..8daede8 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -41,6 +41,7 @@
+ #include <linux/debug_locks.h>
+ #include <linux/lockdep.h>
+ #include <linux/idr.h>
++#include <linux/delay.h>
+ 
+ #include "workqueue_sched.h"
+ 
+@@ -57,20 +58,10 @@ enum {
+ 	WORKER_DIE		= 1 << 1,	/* die die die */
+ 	WORKER_IDLE		= 1 << 2,	/* is idle */
+ 	WORKER_PREP		= 1 << 3,	/* preparing to run works */
+-	WORKER_ROGUE		= 1 << 4,	/* not bound to any cpu */
+-	WORKER_REBIND		= 1 << 5,	/* mom is home, come back */
+-	WORKER_CPU_INTENSIVE	= 1 << 6,	/* cpu intensive */
+-	WORKER_UNBOUND		= 1 << 7,	/* worker is unbound */
++	WORKER_CPU_INTENSIVE	= 1 << 4,	/* cpu intensive */
++	WORKER_UNBOUND		= 1 << 5,	/* worker is unbound */
+ 
+-	WORKER_NOT_RUNNING	= WORKER_PREP | WORKER_ROGUE | WORKER_REBIND |
+-				  WORKER_CPU_INTENSIVE | WORKER_UNBOUND,
+-
+-	/* gcwq->trustee_state */
+-	TRUSTEE_START		= 0,		/* start */
+-	TRUSTEE_IN_CHARGE	= 1,		/* trustee in charge of gcwq */
+-	TRUSTEE_BUTCHER		= 2,		/* butcher workers */
+-	TRUSTEE_RELEASE		= 3,		/* release workers */
+-	TRUSTEE_DONE		= 4,		/* trustee is done */
++	WORKER_NOT_RUNNING	= WORKER_PREP | WORKER_CPU_INTENSIVE | WORKER_UNBOUND,
+ 
+ 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
+ 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
+@@ -84,7 +75,6 @@ enum {
+ 						   (min two ticks) */
+ 	MAYDAY_INTERVAL		= HZ / 10,	/* and then every 100ms */
+ 	CREATE_COOLDOWN		= HZ,		/* time to breath after fail */
+-	TRUSTEE_COOLDOWN	= HZ / 10,	/* for trustee draining */
+ 
+ 	/*
+ 	 * Rescue workers are used only on emergencies and shared by
+@@ -136,7 +126,6 @@ struct worker {
+ 	unsigned long		last_active;	/* L: last active timestamp */
+ 	unsigned int		flags;		/* X: flags */
+ 	int			id;		/* I: worker id */
+-	struct work_struct	rebind_work;	/* L: rebind worker to cpu */
+ 	int			sleeping;	/* None */
+ };
+ 
+@@ -164,10 +153,8 @@ struct global_cwq {
+ 
+ 	struct ida		worker_ida;	/* L: for worker IDs */
+ 
+-	struct task_struct	*trustee;	/* L: for gcwq shutdown */
+-	unsigned int		trustee_state;	/* L: trustee state */
+-	wait_queue_head_t	trustee_wait;	/* trustee wait */
+ 	struct worker		*first_idle;	/* L: first idle worker */
++	wait_queue_head_t	idle_wait;
+ } ____cacheline_aligned_in_smp;
+ 
+ /*
+@@ -974,13 +961,38 @@ static bool is_chained_work(struct workqueue_struct *wq)
+ 	return false;
+ }
+ 
+-static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
+-			 struct work_struct *work)
++static void ___queue_work(struct workqueue_struct *wq, struct global_cwq *gcwq,
++			  struct work_struct *work)
+ {
+-	struct global_cwq *gcwq;
+ 	struct cpu_workqueue_struct *cwq;
+ 	struct list_head *worklist;
+ 	unsigned int work_flags;
++
++	/* gcwq determined, get cwq and queue */
++	cwq = get_cwq(gcwq->cpu, wq);
++	trace_workqueue_queue_work(gcwq->cpu, cwq, work);
++
++	BUG_ON(!list_empty(&work->entry));
++
++	cwq->nr_in_flight[cwq->work_color]++;
++	work_flags = work_color_to_flags(cwq->work_color);
++
++	if (likely(cwq->nr_active < cwq->max_active)) {
++		trace_workqueue_activate_work(work);
++		cwq->nr_active++;
++		worklist = gcwq_determine_ins_pos(gcwq, cwq);
++	} else {
++		work_flags |= WORK_STRUCT_DELAYED;
++		worklist = &cwq->delayed_works;
++	}
++
++	insert_work(cwq, work, worklist, work_flags);
++}
++
++static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
++			 struct work_struct *work)
++{
++	struct global_cwq *gcwq;
+ 	unsigned long flags;
+ 
+ 	debug_work_activate(work);
+@@ -1026,27 +1038,32 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
+ 		spin_lock_irqsave(&gcwq->lock, flags);
+ 	}
+ 
+-	/* gcwq determined, get cwq and queue */
+-	cwq = get_cwq(gcwq->cpu, wq);
+-	trace_workqueue_queue_work(cpu, cwq, work);
++	___queue_work(wq, gcwq, work);
+ 
+-	BUG_ON(!list_empty(&work->entry));
++	spin_unlock_irqrestore(&gcwq->lock, flags);
++}
+ 
+-	cwq->nr_in_flight[cwq->work_color]++;
+-	work_flags = work_color_to_flags(cwq->work_color);
++/**
++ * queue_work_on - queue work on specific cpu
++ * @cpu: CPU number to execute work on
++ * @wq: workqueue to use
++ * @work: work to queue
++ *
++ * Returns 0 if @work was already on a queue, non-zero otherwise.
++ *
++ * We queue the work to a specific CPU, the caller must ensure it
++ * can't go away.
++ */
++static int
++__queue_work_on(int cpu, struct workqueue_struct *wq, struct work_struct *work)
++{
++	int ret = 0;
+ 
+-	if (likely(cwq->nr_active < cwq->max_active)) {
+-		trace_workqueue_activate_work(work);
+-		cwq->nr_active++;
+-		worklist = gcwq_determine_ins_pos(gcwq, cwq);
+-	} else {
+-		work_flags |= WORK_STRUCT_DELAYED;
+-		worklist = &cwq->delayed_works;
++	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
++		__queue_work(cpu, wq, work);
++		ret = 1;
+ 	}
+-
+-	insert_work(cwq, work, worklist, work_flags);
+-
+-	spin_unlock_irqrestore(&gcwq->lock, flags);
++	return ret;
+ }
+ 
+ /**
+@@ -1063,34 +1080,19 @@ int queue_work(struct workqueue_struct *wq, struct work_struct *work)
+ {
+ 	int ret;
+ 
+-	ret = queue_work_on(get_cpu_light(), wq, work);
++	ret = __queue_work_on(get_cpu_light(), wq, work);
+ 	put_cpu_light();
+ 
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(queue_work);
+ 
+-/**
+- * queue_work_on - queue work on specific cpu
+- * @cpu: CPU number to execute work on
+- * @wq: workqueue to use
+- * @work: work to queue
+- *
+- * Returns 0 if @work was already on a queue, non-zero otherwise.
+- *
+- * We queue the work to a specific CPU, the caller must ensure it
+- * can't go away.
+- */
+ int
+ queue_work_on(int cpu, struct workqueue_struct *wq, struct work_struct *work)
+ {
+-	int ret = 0;
++	WARN_ON(wq->flags & WQ_NON_AFFINE);
+ 
+-	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
+-		__queue_work(cpu, wq, work);
+-		ret = 1;
+-	}
+-	return ret;
++	return __queue_work_on(cpu, wq, work);
+ }
+ EXPORT_SYMBOL_GPL(queue_work_on);
+ 
+@@ -1136,6 +1138,8 @@ int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
+ 	struct timer_list *timer = &dwork->timer;
+ 	struct work_struct *work = &dwork->work;
+ 
++	WARN_ON((wq->flags & WQ_NON_AFFINE) && cpu != -1);
++
+ 	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
+ 		unsigned int lcpu;
+ 
+@@ -1201,12 +1205,13 @@ static void worker_enter_idle(struct worker *worker)
+ 	/* idle_list is LIFO */
+ 	list_add(&worker->entry, &gcwq->idle_list);
+ 
+-	if (likely(!(worker->flags & WORKER_ROGUE))) {
+-		if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer))
+-			mod_timer(&gcwq->idle_timer,
+-				  jiffies + IDLE_WORKER_TIMEOUT);
+-	} else
+-		wake_up_all(&gcwq->trustee_wait);
++	if (gcwq->nr_idle == gcwq->nr_workers)
++		wake_up_all(&gcwq->idle_wait);
++
++	if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer)) {
++		mod_timer(&gcwq->idle_timer,
++				jiffies + IDLE_WORKER_TIMEOUT);
++	}
+ 
+ 	/* sanity check nr_running */
+ 	WARN_ON_ONCE(gcwq->nr_workers == gcwq->nr_idle &&
+@@ -1298,23 +1303,6 @@ __acquires(&gcwq->lock)
+ 	}
+ }
+ 
+-/*
+- * Function for worker->rebind_work used to rebind rogue busy workers
+- * to the associated cpu which is coming back online.  This is
+- * scheduled by cpu up but can race with other cpu hotplug operations
+- * and may be executed twice without intervening cpu down.
+- */
+-static void worker_rebind_fn(struct work_struct *work)
+-{
+-	struct worker *worker = container_of(work, struct worker, rebind_work);
+-	struct global_cwq *gcwq = worker->gcwq;
+-
+-	if (worker_maybe_bind_and_lock(worker))
+-		worker_clr_flags(worker, WORKER_REBIND);
+-
+-	spin_unlock_irq(&gcwq->lock);
+-}
+-
+ static struct worker *alloc_worker(void)
+ {
+ 	struct worker *worker;
+@@ -1323,7 +1311,6 @@ static struct worker *alloc_worker(void)
+ 	if (worker) {
+ 		INIT_LIST_HEAD(&worker->entry);
+ 		INIT_LIST_HEAD(&worker->scheduled);
+-		INIT_WORK(&worker->rebind_work, worker_rebind_fn);
+ 		/* on creation a worker is in !idle && prep state */
+ 		worker->flags = WORKER_PREP;
+ 	}
+@@ -1663,13 +1650,6 @@ static bool manage_workers(struct worker *worker)
+ 
+ 	gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
+ 
+-	/*
+-	 * The trustee might be waiting to take over the manager
+-	 * position, tell it we're done.
+-	 */
+-	if (unlikely(gcwq->trustee))
+-		wake_up_all(&gcwq->trustee_wait);
+-
+ 	return ret;
+ }
+ 
+@@ -3209,171 +3189,71 @@ EXPORT_SYMBOL_GPL(work_busy);
+  * gcwqs serve mix of short, long and very long running works making
+  * blocked draining impractical.
+  *
+- * This is solved by allowing a gcwq to be detached from CPU, running
+- * it with unbound (rogue) workers and allowing it to be reattached
+- * later if the cpu comes back online.  A separate thread is created
+- * to govern a gcwq in such state and is called the trustee of the
+- * gcwq.
+- *
+- * Trustee states and their descriptions.
+- *
+- * START	Command state used on startup.  On CPU_DOWN_PREPARE, a
+- *		new trustee is started with this state.
+- *
+- * IN_CHARGE	Once started, trustee will enter this state after
+- *		assuming the manager role and making all existing
+- *		workers rogue.  DOWN_PREPARE waits for trustee to
+- *		enter this state.  After reaching IN_CHARGE, trustee
+- *		tries to execute the pending worklist until it's empty
+- *		and the state is set to BUTCHER, or the state is set
+- *		to RELEASE.
+- *
+- * BUTCHER	Command state which is set by the cpu callback after
+- *		the cpu has went down.  Once this state is set trustee
+- *		knows that there will be no new works on the worklist
+- *		and once the worklist is empty it can proceed to
+- *		killing idle workers.
+- *
+- * RELEASE	Command state which is set by the cpu callback if the
+- *		cpu down has been canceled or it has come online
+- *		again.  After recognizing this state, trustee stops
+- *		trying to drain or butcher and clears ROGUE, rebinds
+- *		all remaining workers back to the cpu and releases
+- *		manager role.
+- *
+- * DONE		Trustee will enter this state after BUTCHER or RELEASE
+- *		is complete.
+- *
+- *          trustee                 CPU                draining
+- *         took over                down               complete
+- * START -----------> IN_CHARGE -----------> BUTCHER -----------> DONE
+- *                        |                     |                  ^
+- *                        | CPU is back online  v   return workers |
+- *                         ----------------> RELEASE --------------
+  */
+ 
+-/**
+- * trustee_wait_event_timeout - timed event wait for trustee
+- * @cond: condition to wait for
+- * @timeout: timeout in jiffies
+- *
+- * wait_event_timeout() for trustee to use.  Handles locking and
+- * checks for RELEASE request.
+- *
+- * CONTEXT:
+- * spin_lock_irq(gcwq->lock) which may be released and regrabbed
+- * multiple times.  To be used by trustee.
+- *
+- * RETURNS:
+- * Positive indicating left time if @cond is satisfied, 0 if timed
+- * out, -1 if canceled.
+- */
+-#define trustee_wait_event_timeout(cond, timeout) ({			\
+-	long __ret = (timeout);						\
+-	while (!((cond) || (gcwq->trustee_state == TRUSTEE_RELEASE)) &&	\
+-	       __ret) {							\
+-		spin_unlock_irq(&gcwq->lock);				\
+-		__wait_event_timeout(gcwq->trustee_wait, (cond) ||	\
+-			(gcwq->trustee_state == TRUSTEE_RELEASE),	\
+-			__ret);						\
+-		spin_lock_irq(&gcwq->lock);				\
+-	}								\
+-	gcwq->trustee_state == TRUSTEE_RELEASE ? -1 : (__ret);		\
+-})
++static int __devinit workqueue_cpu_up_callback(struct notifier_block *nfb,
++						unsigned long action,
++						void *hcpu)
++{
++	unsigned int cpu = (unsigned long)hcpu;
++	struct global_cwq *gcwq = get_gcwq(cpu);
++	struct worker *uninitialized_var(new_worker);
++	unsigned long flags;
+ 
+-/**
+- * trustee_wait_event - event wait for trustee
+- * @cond: condition to wait for
+- *
+- * wait_event() for trustee to use.  Automatically handles locking and
+- * checks for CANCEL request.
+- *
+- * CONTEXT:
+- * spin_lock_irq(gcwq->lock) which may be released and regrabbed
+- * multiple times.  To be used by trustee.
+- *
+- * RETURNS:
+- * 0 if @cond is satisfied, -1 if canceled.
+- */
+-#define trustee_wait_event(cond) ({					\
+-	long __ret1;							\
+-	__ret1 = trustee_wait_event_timeout(cond, MAX_SCHEDULE_TIMEOUT);\
+-	__ret1 < 0 ? -1 : 0;						\
+-})
++	action &= ~CPU_TASKS_FROZEN;
+ 
+-static int __cpuinit trustee_thread(void *__gcwq)
+-{
+-	struct global_cwq *gcwq = __gcwq;
+-	struct worker *worker;
+-	struct work_struct *work;
+-	struct hlist_node *pos;
+-	long rc;
+-	int i;
++	switch (action) {
++	case CPU_UP_PREPARE:
++		BUG_ON(gcwq->first_idle);
++		new_worker = create_worker(gcwq, false);
++		if (!new_worker)
++			return NOTIFY_BAD;
++	}
+ 
+-	BUG_ON(gcwq->cpu != smp_processor_id());
++	/* some are called w/ irq disabled, don't disturb irq status */
++	spin_lock_irqsave(&gcwq->lock, flags);
+ 
+-	spin_lock_irq(&gcwq->lock);
+-	/*
+-	 * Claim the manager position and make all workers rogue.
+-	 * Trustee must be bound to the target cpu and can't be
+-	 * cancelled.
+-	 */
+-	BUG_ON(gcwq->cpu != smp_processor_id());
+-	rc = trustee_wait_event(!(gcwq->flags & GCWQ_MANAGING_WORKERS));
+-	BUG_ON(rc < 0);
++	switch (action) {
++	case CPU_UP_PREPARE:
++		BUG_ON(gcwq->first_idle);
++		gcwq->first_idle = new_worker;
++		break;
+ 
+-	gcwq->flags |= GCWQ_MANAGING_WORKERS;
++	case CPU_UP_CANCELED:
++		destroy_worker(gcwq->first_idle);
++		gcwq->first_idle = NULL;
++		break;
+ 
+-	list_for_each_entry(worker, &gcwq->idle_list, entry)
+-		worker->flags |= WORKER_ROGUE;
++	case CPU_ONLINE:
++		spin_unlock_irq(&gcwq->lock);
++		kthread_bind(gcwq->first_idle->task, cpu);
++		spin_lock_irq(&gcwq->lock);
++		gcwq->flags |= GCWQ_MANAGE_WORKERS;
++		start_worker(gcwq->first_idle);
++		gcwq->first_idle = NULL;
++		break;
++	}
+ 
+-	for_each_busy_worker(worker, i, pos, gcwq)
+-		worker->flags |= WORKER_ROGUE;
++	spin_unlock_irqrestore(&gcwq->lock, flags);
+ 
+-	/*
+-	 * Call schedule() so that we cross rq->lock and thus can
+-	 * guarantee sched callbacks see the rogue flag.  This is
+-	 * necessary as scheduler callbacks may be invoked from other
+-	 * cpus.
+-	 */
+-	spin_unlock_irq(&gcwq->lock);
+-	schedule();
+-	spin_lock_irq(&gcwq->lock);
++	return notifier_from_errno(0);
++}
+ 
+-	/*
+-	 * Sched callbacks are disabled now.  Zap nr_running.  After
+-	 * this, nr_running stays zero and need_more_worker() and
+-	 * keep_working() are always true as long as the worklist is
+-	 * not empty.
+-	 */
+-	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
++static void flush_gcwq(struct global_cwq *gcwq)
++{
++	struct work_struct *work, *nw;
++	struct worker *worker, *n;
++	LIST_HEAD(non_affine_works);
+ 
+-	spin_unlock_irq(&gcwq->lock);
+-	del_timer_sync(&gcwq->idle_timer);
+ 	spin_lock_irq(&gcwq->lock);
++	list_for_each_entry_safe(work, nw, &gcwq->worklist, entry) {
++		struct workqueue_struct *wq = get_work_cwq(work)->wq;
+ 
+-	/*
+-	 * We're now in charge.  Notify and proceed to drain.  We need
+-	 * to keep the gcwq running during the whole CPU down
+-	 * procedure as other cpu hotunplug callbacks may need to
+-	 * flush currently running tasks.
+-	 */
+-	gcwq->trustee_state = TRUSTEE_IN_CHARGE;
+-	wake_up_all(&gcwq->trustee_wait);
++		if (wq->flags & WQ_NON_AFFINE)
++			list_move(&work->entry, &non_affine_works);
++	}
+ 
+-	/*
+-	 * The original cpu is in the process of dying and may go away
+-	 * anytime now.  When that happens, we and all workers would
+-	 * be migrated to other cpus.  Try draining any left work.  We
+-	 * want to get it over with ASAP - spam rescuers, wake up as
+-	 * many idlers as necessary and create new ones till the
+-	 * worklist is empty.  Note that if the gcwq is frozen, there
+-	 * may be frozen works in freezable cwqs.  Don't declare
+-	 * completion while frozen.
+-	 */
+-	while (gcwq->nr_workers != gcwq->nr_idle ||
+-	       gcwq->flags & GCWQ_FREEZING ||
+-	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
++	while (!list_empty(&gcwq->worklist)) {
+ 		int nr_works = 0;
+ 
+ 		list_for_each_entry(work, &gcwq->worklist, entry) {
+@@ -3387,200 +3267,55 @@ static int __cpuinit trustee_thread(void *__gcwq)
+ 			wake_up_process(worker->task);
+ 		}
+ 
++		spin_unlock_irq(&gcwq->lock);
++
+ 		if (need_to_create_worker(gcwq)) {
+-			spin_unlock_irq(&gcwq->lock);
+-			worker = create_worker(gcwq, false);
+-			spin_lock_irq(&gcwq->lock);
+-			if (worker) {
+-				worker->flags |= WORKER_ROGUE;
++			worker = create_worker(gcwq, true);
++			if (worker)
+ 				start_worker(worker);
+-			}
+ 		}
+ 
+-		/* give a breather */
+-		if (trustee_wait_event_timeout(false, TRUSTEE_COOLDOWN) < 0)
+-			break;
+-	}
+-
+-	/*
+-	 * Either all works have been scheduled and cpu is down, or
+-	 * cpu down has already been canceled.  Wait for and butcher
+-	 * all workers till we're canceled.
+-	 */
+-	do {
+-		rc = trustee_wait_event(!list_empty(&gcwq->idle_list));
+-		while (!list_empty(&gcwq->idle_list))
+-			destroy_worker(list_first_entry(&gcwq->idle_list,
+-							struct worker, entry));
+-	} while (gcwq->nr_workers && rc >= 0);
+-
+-	/*
+-	 * At this point, either draining has completed and no worker
+-	 * is left, or cpu down has been canceled or the cpu is being
+-	 * brought back up.  There shouldn't be any idle one left.
+-	 * Tell the remaining busy ones to rebind once it finishes the
+-	 * currently scheduled works by scheduling the rebind_work.
+-	 */
+-	WARN_ON(!list_empty(&gcwq->idle_list));
++		wait_event_timeout(gcwq->idle_wait,
++				gcwq->nr_idle == gcwq->nr_workers, HZ/10);
+ 
+-	for_each_busy_worker(worker, i, pos, gcwq) {
+-		struct work_struct *rebind_work = &worker->rebind_work;
++		spin_lock_irq(&gcwq->lock);
++	}
+ 
+-		/*
+-		 * Rebind_work may race with future cpu hotplug
+-		 * operations.  Use a separate flag to mark that
+-		 * rebinding is scheduled.
+-		 */
+-		worker->flags |= WORKER_REBIND;
+-		worker->flags &= ~WORKER_ROGUE;
++	WARN_ON(gcwq->nr_workers != gcwq->nr_idle);
+ 
+-		/* queue rebind_work, wq doesn't matter, use the default one */
+-		if (test_and_set_bit(WORK_STRUCT_PENDING_BIT,
+-				     work_data_bits(rebind_work)))
+-			continue;
++	list_for_each_entry_safe(worker, n, &gcwq->idle_list, entry)
++		destroy_worker(worker);
+ 
+-		debug_work_activate(rebind_work);
+-		insert_work(get_cwq(gcwq->cpu, system_wq), rebind_work,
+-			    worker->scheduled.next,
+-			    work_color_to_flags(WORK_NO_COLOR));
+-	}
++	WARN_ON(gcwq->nr_workers || gcwq->nr_idle);
+ 
+-	/* relinquish manager role */
+-	gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
+-
+-	/* notify completion */
+-	gcwq->trustee = NULL;
+-	gcwq->trustee_state = TRUSTEE_DONE;
+-	wake_up_all(&gcwq->trustee_wait);
+ 	spin_unlock_irq(&gcwq->lock);
+-	return 0;
+-}
+ 
+-/**
+- * wait_trustee_state - wait for trustee to enter the specified state
+- * @gcwq: gcwq the trustee of interest belongs to
+- * @state: target state to wait for
+- *
+- * Wait for the trustee to reach @state.  DONE is already matched.
+- *
+- * CONTEXT:
+- * spin_lock_irq(gcwq->lock) which may be released and regrabbed
+- * multiple times.  To be used by cpu_callback.
+- */
+-static void __cpuinit wait_trustee_state(struct global_cwq *gcwq, int state)
+-__releases(&gcwq->lock)
+-__acquires(&gcwq->lock)
+-{
+-	if (!(gcwq->trustee_state == state ||
+-	      gcwq->trustee_state == TRUSTEE_DONE)) {
+-		spin_unlock_irq(&gcwq->lock);
+-		__wait_event(gcwq->trustee_wait,
+-			     gcwq->trustee_state == state ||
+-			     gcwq->trustee_state == TRUSTEE_DONE);
+-		spin_lock_irq(&gcwq->lock);
++	gcwq = get_gcwq(get_cpu());
++	spin_lock_irq(&gcwq->lock);
++	list_for_each_entry_safe(work, nw, &non_affine_works, entry) {
++		list_del_init(&work->entry);
++		___queue_work(get_work_cwq(work)->wq, gcwq, work);
+ 	}
++	spin_unlock_irq(&gcwq->lock);
++	put_cpu();
+ }
+ 
+-static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
++static int __devinit workqueue_cpu_down_callback(struct notifier_block *nfb,
+ 						unsigned long action,
+ 						void *hcpu)
+ {
+ 	unsigned int cpu = (unsigned long)hcpu;
+ 	struct global_cwq *gcwq = get_gcwq(cpu);
+-	struct task_struct *new_trustee = NULL;
+-	struct worker *uninitialized_var(new_worker);
+-	unsigned long flags;
+ 
+ 	action &= ~CPU_TASKS_FROZEN;
+ 
+-	switch (action) {
+-	case CPU_DOWN_PREPARE:
+-		new_trustee = kthread_create(trustee_thread, gcwq,
+-					     "workqueue_trustee/%d\n", cpu);
+-		if (IS_ERR(new_trustee))
+-			return notifier_from_errno(PTR_ERR(new_trustee));
+-		kthread_bind(new_trustee, cpu);
+-		/* fall through */
+-	case CPU_UP_PREPARE:
+-		BUG_ON(gcwq->first_idle);
+-		new_worker = create_worker(gcwq, false);
+-		if (!new_worker) {
+-			if (new_trustee)
+-				kthread_stop(new_trustee);
+-			return NOTIFY_BAD;
+-		}
+-		break;
+-	case CPU_POST_DEAD:
+-	case CPU_UP_CANCELED:
+-	case CPU_DOWN_FAILED:
+-	case CPU_ONLINE:
+-		break;
+-	case CPU_DYING:
+-		/*
+-		 * We access this lockless. We are on the dying CPU
+-		 * and called from stomp machine.
+-		 *
+-		 * Before this, the trustee and all workers except for
+-		 * the ones which are still executing works from
+-		 * before the last CPU down must be on the cpu.  After
+-		 * this, they'll all be diasporas.
+-		 */
+-		gcwq->flags |= GCWQ_DISASSOCIATED;
+-	default:
+-		goto out;
+-	}
+-
+-	/* some are called w/ irq disabled, don't disturb irq status */
+-	spin_lock_irqsave(&gcwq->lock, flags);
+-
+-	switch (action) {
+-	case CPU_DOWN_PREPARE:
+-		/* initialize trustee and tell it to acquire the gcwq */
+-		BUG_ON(gcwq->trustee || gcwq->trustee_state != TRUSTEE_DONE);
+-		gcwq->trustee = new_trustee;
+-		gcwq->trustee_state = TRUSTEE_START;
+-		wake_up_process(gcwq->trustee);
+-		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
+-		/* fall through */
+-	case CPU_UP_PREPARE:
+-		BUG_ON(gcwq->first_idle);
+-		gcwq->first_idle = new_worker;
+-		break;
++        switch (action) {
++        case CPU_DOWN_PREPARE:
++                flush_gcwq(gcwq);
++                break;
++        }
+ 
+-	case CPU_POST_DEAD:
+-		gcwq->trustee_state = TRUSTEE_BUTCHER;
+-		/* fall through */
+-	case CPU_UP_CANCELED:
+-		destroy_worker(gcwq->first_idle);
+-		gcwq->first_idle = NULL;
+-		break;
+ 
+-	case CPU_DOWN_FAILED:
+-	case CPU_ONLINE:
+-		gcwq->flags &= ~GCWQ_DISASSOCIATED;
+-		if (gcwq->trustee_state != TRUSTEE_DONE) {
+-			gcwq->trustee_state = TRUSTEE_RELEASE;
+-			wake_up_process(gcwq->trustee);
+-			wait_trustee_state(gcwq, TRUSTEE_DONE);
+-		}
+-
+-		/*
+-		 * Trustee is done and there might be no worker left.
+-		 * Put the first_idle in and request a real manager to
+-		 * take a look.
+-		 */
+-		spin_unlock_irq(&gcwq->lock);
+-		kthread_bind(gcwq->first_idle->task, cpu);
+-		spin_lock_irq(&gcwq->lock);
+-		gcwq->flags |= GCWQ_MANAGE_WORKERS;
+-		start_worker(gcwq->first_idle);
+-		gcwq->first_idle = NULL;
+-		break;
+-	}
+-
+-	spin_unlock_irqrestore(&gcwq->lock, flags);
+-
+-out:
+ 	return notifier_from_errno(0);
+ }
+ 
+@@ -3777,7 +3512,8 @@ static int __init init_workqueues(void)
+ 	unsigned int cpu;
+ 	int i;
+ 
+-	cpu_notifier(workqueue_cpu_callback, CPU_PRI_WORKQUEUE);
++	cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_ACTIVE);
++ 	hotcpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_INACTIVE);
+ 
+ 	/* initialize gcwqs */
+ 	for_each_gcwq_cpu(cpu) {
+@@ -3800,9 +3536,7 @@ static int __init init_workqueues(void)
+ 			    (unsigned long)gcwq);
+ 
+ 		ida_init(&gcwq->worker_ida);
+-
+-		gcwq->trustee_state = TRUSTEE_DONE;
+-		init_waitqueue_head(&gcwq->trustee_wait);
++		init_waitqueue_head(&gcwq->idle_wait);
+ 	}
+ 
+ 	/* create the initial worker */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0228-workqueue-Fix-PF_THREAD_BOUND-abuse.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0228-workqueue-Fix-PF_THREAD_BOUND-abuse.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0228-workqueue-Fix-PF_THREAD_BOUND-abuse.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0228-workqueue-Fix-PF_THREAD_BOUND-abuse.patch)
@@ -0,0 +1,106 @@
+From e219938b97dcd7e110c06527dbb6c2e91a7adaf0 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Mon, 3 Oct 2011 12:43:25 +0200
+Subject: [PATCH 228/271] workqueue: Fix PF_THREAD_BOUND abuse
+
+PF_THREAD_BOUND is set by kthread_bind() and means the thread is bound
+to a particular cpu for correctness. The workqueue code abuses this
+flag and blindly sets it for all created threads, including those that
+are free to migrate.
+
+Restore the original semantics now that the worst abuses in the
+cpu-hotplug path are gone. The only icky bit is the rescue thread for
+per-cpu workqueues, this cannot use kthread_bind() but will use
+set_cpus_allowed_ptr() to migrate itself to the desired cpu.
+
+Set and clear PF_THREAD_BOUND manually here.
+
+XXX: I think worker_maybe_bind_and_lock()/worker_unbind_and_unlock()
+should also do a get_online_cpus(); this would likely allow us to
+remove the while loop.
+
+XXX: should probably repurpose GCWQ_DISASSOCIATED to warn on adding
+works after CPU_DOWN_PREPARE -- its dual use to mark unbound gcwqs is
+a tad annoying though.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/workqueue.c |   29 ++++++++++++++++++++---------
+ 1 file changed, 20 insertions(+), 9 deletions(-)
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 8daede8..02ce5cc 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1288,8 +1288,14 @@ __acquires(&gcwq->lock)
+ 			return false;
+ 		if (task_cpu(task) == gcwq->cpu &&
+ 		    cpumask_equal(&current->cpus_allowed,
+-				  get_cpu_mask(gcwq->cpu)))
++				  get_cpu_mask(gcwq->cpu))) {
++			/*
++			 * Since we're binding to a particular cpu and need to
++			 * stay there for correctness, mark us PF_THREAD_BOUND.
++			 */
++			task->flags |= PF_THREAD_BOUND;
+ 			return true;
++		}
+ 		spin_unlock_irq(&gcwq->lock);
+ 
+ 		/*
+@@ -1303,6 +1309,18 @@ __acquires(&gcwq->lock)
+ 	}
+ }
+ 
++static void worker_unbind_and_unlock(struct worker *worker)
++{
++	struct global_cwq *gcwq = worker->gcwq;
++	struct task_struct *task = worker->task;
++
++	/*
++	 * Its no longer required we're PF_THREAD_BOUND, the work is done.
++	 */
++	task->flags &= ~PF_THREAD_BOUND;
++	spin_unlock_irq(&gcwq->lock);
++}
++
+ static struct worker *alloc_worker(void)
+ {
+ 	struct worker *worker;
+@@ -1365,15 +1383,9 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
+ 	if (IS_ERR(worker->task))
+ 		goto fail;
+ 
+-	/*
+-	 * A rogue worker will become a regular one if CPU comes
+-	 * online later on.  Make sure every worker has
+-	 * PF_THREAD_BOUND set.
+-	 */
+ 	if (bind && !on_unbound_cpu)
+ 		kthread_bind(worker->task, gcwq->cpu);
+ 	else {
+-		worker->task->flags |= PF_THREAD_BOUND;
+ 		if (on_unbound_cpu)
+ 			worker->flags |= WORKER_UNBOUND;
+ 	}
+@@ -2050,7 +2062,7 @@ repeat:
+ 		if (keep_working(gcwq))
+ 			wake_up_worker(gcwq);
+ 
+-		spin_unlock_irq(&gcwq->lock);
++		worker_unbind_and_unlock(rescuer);
+ 	}
+ 
+ 	schedule();
+@@ -2999,7 +3011,6 @@ struct workqueue_struct *__alloc_workqueue_key(const char *name,
+ 		if (IS_ERR(rescuer->task))
+ 			goto err;
+ 
+-		rescuer->task->flags |= PF_THREAD_BOUND;
+ 		wake_up_process(rescuer->task);
+ 	}
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0229-workqueue-Use-get_cpu_light-in-flush_gcwq.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0229-workqueue-Use-get_cpu_light-in-flush_gcwq.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0229-workqueue-Use-get_cpu_light-in-flush_gcwq.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0229-workqueue-Use-get_cpu_light-in-flush_gcwq.patch)
@@ -0,0 +1,85 @@
+From 325348467f26839c302200835fce7ffbe2840773 Mon Sep 17 00:00:00 2001
+From: Yong Zhang <yong.zhang0 at gmail.com>
+Date: Sun, 16 Oct 2011 18:56:46 +0800
+Subject: [PATCH 229/271] workqueue: Use get_cpu_light() in flush_gcwq()
+
+BUG: sleeping function called from invalid context at kernel/rtmutex.c:645
+in_atomic(): 1, irqs_disabled(): 0, pid: 1739, name: bash
+Pid: 1739, comm: bash Not tainted 3.0.6-rt17-00284-gb76d419 #3
+Call Trace:
+ [<c06e3b5d>] ? printk+0x1d/0x20
+ [<c01390b6>] __might_sleep+0xe6/0x110
+ [<c06e633c>] rt_spin_lock+0x1c/0x30
+ [<c01655a6>] flush_gcwq+0x236/0x320
+ [<c021c651>] ? kfree+0xe1/0x1a0
+ [<c05b7178>] ? __cpufreq_remove_dev+0xf8/0x260
+ [<c0183fad>] ? rt_down_write+0xd/0x10
+ [<c06cd91e>] workqueue_cpu_down_callback+0x26/0x2d
+ [<c06e9d65>] notifier_call_chain+0x45/0x60
+ [<c0171cfe>] __raw_notifier_call_chain+0x1e/0x30
+ [<c014c9b4>] __cpu_notify+0x24/0x40
+ [<c06cbc6f>] _cpu_down+0xdf/0x330
+ [<c06cbef0>] cpu_down+0x30/0x50
+ [<c06cd6b0>] store_online+0x50/0xa7
+ [<c06cd660>] ? acpi_os_map_memory+0xec/0xec
+ [<c04f2faa>] sysdev_store+0x2a/0x40
+ [<c02887a4>] sysfs_write_file+0xa4/0x100
+ [<c0229ab2>] vfs_write+0xa2/0x170
+ [<c0288700>] ? sysfs_poll+0x90/0x90
+ [<c0229d92>] sys_write+0x42/0x70
+ [<c06ecedf>] sysenter_do_call+0x12/0x2d
+CPU 1 is now offline
+SMP alternatives: switching to UP code
+SMP alternatives: switching to SMP code
+Booting Node 0 Processor 1 APIC 0x1
+smpboot cpu 1: start_ip = 9b000
+Initializing CPU#1
+BUG: sleeping function called from invalid context at kernel/rtmutex.c:645
+in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: kworker/0:0
+Pid: 0, comm: kworker/0:0 Not tainted 3.0.6-rt17-00284-gb76d419 #3
+Call Trace:
+ [<c06e3b5d>] ? printk+0x1d/0x20
+ [<c01390b6>] __might_sleep+0xe6/0x110
+ [<c06e633c>] rt_spin_lock+0x1c/0x30
+ [<c06cd85b>] workqueue_cpu_up_callback+0x56/0xf3
+ [<c06e9d65>] notifier_call_chain+0x45/0x60
+ [<c0171cfe>] __raw_notifier_call_chain+0x1e/0x30
+ [<c014c9b4>] __cpu_notify+0x24/0x40
+ [<c014c9ec>] cpu_notify+0x1c/0x20
+ [<c06e1d43>] notify_cpu_starting+0x1e/0x20
+ [<c06e0aad>] smp_callin+0xfb/0x10e
+ [<c06e0ad9>] start_secondary+0x19/0xd7
+NMI watchdog enabled, takes one hw-pmu counter.
+Switched to NOHz mode on CPU #1
+
+Signed-off-by: Yong Zhang <yong.zhang0 at gmail.com>
+Link: http://lkml.kernel.org/r/1318762607-2261-5-git-send-email-yong.zhang0@gmail.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/workqueue.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 02ce5cc..8389afe 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3301,14 +3301,14 @@ static void flush_gcwq(struct global_cwq *gcwq)
+ 
+ 	spin_unlock_irq(&gcwq->lock);
+ 
+-	gcwq = get_gcwq(get_cpu());
++	gcwq = get_gcwq(get_cpu_light());
+ 	spin_lock_irq(&gcwq->lock);
+ 	list_for_each_entry_safe(work, nw, &non_affine_works, entry) {
+ 		list_del_init(&work->entry);
+ 		___queue_work(get_work_cwq(work)->wq, gcwq, work);
+ 	}
+ 	spin_unlock_irq(&gcwq->lock);
+-	put_cpu();
++	put_cpu_light();
+ }
+ 
+ static int __devinit workqueue_cpu_down_callback(struct notifier_block *nfb,
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0230-hotplug-stuff.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0230-hotplug-stuff.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0230-hotplug-stuff.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0230-hotplug-stuff.patch.patch)
@@ -0,0 +1,31 @@
+From 45c6749a31fb2e9775423618bee9d5f5b572bc42 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Fri, 4 Nov 2011 18:58:24 +0100
+Subject: [PATCH 230/271] hotplug-stuff.patch
+
+Do not take the lock for non-handled cases (we might be in atomic context)
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/workqueue.c |    5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 8389afe..674d783 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3219,6 +3219,11 @@ static int __devinit workqueue_cpu_up_callback(struct notifier_block *nfb,
+ 		new_worker = create_worker(gcwq, false);
+ 		if (!new_worker)
+ 			return NOTIFY_BAD;
++	case CPU_UP_CANCELED:
++	case CPU_ONLINE:
++		break;
++	default:
++		return notifier_from_errno(0);
+ 	}
+ 
+ 	/* some are called w/ irq disabled, don't disturb irq status */
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0231-debugobjects-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0231-debugobjects-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0231-debugobjects-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0231-debugobjects-rt.patch.patch)
@@ -0,0 +1,40 @@
+From 9619f70dd9f0f74d120fcf5e75028db5ba810897 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 21:41:35 +0200
+Subject: [PATCH 231/271] debugobjects-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ lib/debugobjects.c |    7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index a78b7c6..9b622c9 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -306,7 +306,10 @@ __debug_object_init(void *addr, struct debug_obj_descr *descr, int onstack)
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+ 
+-	fill_pool();
++#ifdef CONFIG_PREEMPT_RT_FULL
++	if (preempt_count() == 0 && !irqs_disabled())
++#endif
++		fill_pool();
+ 
+ 	db = get_bucket((unsigned long) addr);
+ 
+@@ -1015,9 +1018,9 @@ static int __init debug_objects_replace_static_objects(void)
+ 		}
+ 	}
+ 
++	local_irq_enable();
+ 	printk(KERN_DEBUG "ODEBUG: %d of %d active objects replaced\n", cnt,
+ 	       obj_pool_used);
+-	local_irq_enable();
+ 	return 0;
+ free:
+ 	hlist_for_each_entry_safe(obj, node, tmp, &objects, node) {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0232-jump-label-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0232-jump-label-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0232-jump-label-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0232-jump-label-rt.patch.patch)
@@ -0,0 +1,26 @@
+From dab12832a0e0d7a33e2afe3c5ea05bcbe6904d76 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 13 Jul 2011 11:03:16 +0200
+Subject: [PATCH 232/271] jump-label-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/jump_label.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
+index 388b0d4..9cc8ed9 100644
+--- a/include/linux/jump_label.h
++++ b/include/linux/jump_label.h
+@@ -4,7 +4,7 @@
+ #include <linux/types.h>
+ #include <linux/compiler.h>
+ 
+-#if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
++#if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL) && !defined(CONFIG_PREEMPT_BASE)
+ 
+ struct jump_label_key {
+ 	atomic_t enabled;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0233-skbufhead-raw-lock.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0233-skbufhead-raw-lock.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0233-skbufhead-raw-lock.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0233-skbufhead-raw-lock.patch.patch)
@@ -0,0 +1,137 @@
+From 7c340f56c62ecfc8e38734d27dcd620511115773 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 12 Jul 2011 15:38:34 +0200
+Subject: [PATCH 233/271] skbufhead-raw-lock.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/netdevice.h |    1 +
+ include/linux/skbuff.h    |    7 +++++++
+ net/core/dev.c            |   26 ++++++++++++++++++++------
+ 3 files changed, 28 insertions(+), 6 deletions(-)
+
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index cb52340..ea28f93 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -1751,6 +1751,7 @@ struct softnet_data {
+ 	unsigned		dropped;
+ 	struct sk_buff_head	input_pkt_queue;
+ 	struct napi_struct	backlog;
++	struct sk_buff_head	tofree_queue;
+ };
+ 
+ static inline void input_queue_head_incr(struct softnet_data *sd)
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index bdb4590..d857453 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -124,6 +124,7 @@ struct sk_buff_head {
+ 
+ 	__u32		qlen;
+ 	spinlock_t	lock;
++	raw_spinlock_t	raw_lock;
+ };
+ 
+ struct sk_buff;
+@@ -926,6 +927,12 @@ static inline void skb_queue_head_init(struct sk_buff_head *list)
+ 	__skb_queue_head_init(list);
+ }
+ 
++static inline void skb_queue_head_init_raw(struct sk_buff_head *list)
++{
++	raw_spin_lock_init(&list->raw_lock);
++	__skb_queue_head_init(list);
++}
++
+ static inline void skb_queue_head_init_class(struct sk_buff_head *list,
+ 		struct lock_class_key *class)
+ {
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2c63eea..0916344 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -222,14 +222,14 @@ static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)
+ static inline void rps_lock(struct softnet_data *sd)
+ {
+ #ifdef CONFIG_RPS
+-	spin_lock(&sd->input_pkt_queue.lock);
++	raw_spin_lock(&sd->input_pkt_queue.raw_lock);
+ #endif
+ }
+ 
+ static inline void rps_unlock(struct softnet_data *sd)
+ {
+ #ifdef CONFIG_RPS
+-	spin_unlock(&sd->input_pkt_queue.lock);
++	raw_spin_unlock(&sd->input_pkt_queue.raw_lock);
+ #endif
+ }
+ 
+@@ -3401,7 +3401,7 @@ static void flush_backlog(void *arg)
+ 	skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
+ 		if (skb->dev == dev) {
+ 			__skb_unlink(skb, &sd->input_pkt_queue);
+-			kfree_skb(skb);
++			__skb_queue_tail(&sd->tofree_queue, skb);
+ 			input_queue_head_incr(sd);
+ 		}
+ 	}
+@@ -3410,10 +3410,13 @@ static void flush_backlog(void *arg)
+ 	skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
+ 		if (skb->dev == dev) {
+ 			__skb_unlink(skb, &sd->process_queue);
+-			kfree_skb(skb);
++			__skb_queue_tail(&sd->tofree_queue, skb);
+ 			input_queue_head_incr(sd);
+ 		}
+ 	}
++
++	if (!skb_queue_empty(&sd->tofree_queue))
++		raise_softirq_irqoff(NET_RX_SOFTIRQ);
+ }
+ 
+ static int napi_gro_complete(struct sk_buff *skb)
+@@ -3897,10 +3900,17 @@ static void net_rx_action(struct softirq_action *h)
+ 	struct softnet_data *sd = &__get_cpu_var(softnet_data);
+ 	unsigned long time_limit = jiffies + 2;
+ 	int budget = netdev_budget;
++	struct sk_buff *skb;
+ 	void *have;
+ 
+ 	local_irq_disable();
+ 
++	while ((skb = __skb_dequeue(&sd->tofree_queue))) {
++		local_irq_enable();
++		kfree_skb(skb);
++		local_irq_disable();
++	}
++
+ 	while (!list_empty(&sd->poll_list)) {
+ 		struct napi_struct *n;
+ 		int work, weight;
+@@ -6334,6 +6344,9 @@ static int dev_cpu_callback(struct notifier_block *nfb,
+ 		netif_rx(skb);
+ 		input_queue_head_incr(oldsd);
+ 	}
++	while ((skb = __skb_dequeue(&oldsd->tofree_queue))) {
++		kfree_skb(skb);
++	}
+ 
+ 	return NOTIFY_OK;
+ }
+@@ -6600,8 +6613,9 @@ static int __init net_dev_init(void)
+ 		struct softnet_data *sd = &per_cpu(softnet_data, i);
+ 
+ 		memset(sd, 0, sizeof(*sd));
+-		skb_queue_head_init(&sd->input_pkt_queue);
+-		skb_queue_head_init(&sd->process_queue);
++		skb_queue_head_init_raw(&sd->input_pkt_queue);
++		skb_queue_head_init_raw(&sd->process_queue);
++		skb_queue_head_init_raw(&sd->tofree_queue);
+ 		sd->completion_queue = NULL;
+ 		INIT_LIST_HEAD(&sd->poll_list);
+ 		sd->output_queue = NULL;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0234-x86-no-perf-irq-work-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0234-x86-no-perf-irq-work-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0234-x86-no-perf-irq-work-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0234-x86-no-perf-irq-work-rt.patch.patch)
@@ -0,0 +1,71 @@
+From 7ff26a4e498110abe0b33f670f7c28aabb6609a0 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 13 Jul 2011 14:05:05 +0200
+Subject: [PATCH 234/271] x86-no-perf-irq-work-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kernel/irq_work.c |    2 ++
+ kernel/irq_work.c          |    2 ++
+ kernel/timer.c             |    6 +++++-
+ 3 files changed, 9 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/kernel/irq_work.c b/arch/x86/kernel/irq_work.c
+index ca8f703..129b8bb 100644
+--- a/arch/x86/kernel/irq_work.c
++++ b/arch/x86/kernel/irq_work.c
+@@ -18,6 +18,7 @@ void smp_irq_work_interrupt(struct pt_regs *regs)
+ 	irq_exit();
+ }
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ void arch_irq_work_raise(void)
+ {
+ #ifdef CONFIG_X86_LOCAL_APIC
+@@ -28,3 +29,4 @@ void arch_irq_work_raise(void)
+ 	apic_wait_icr_idle();
+ #endif
+ }
++#endif
+diff --git a/kernel/irq_work.c b/kernel/irq_work.c
+index c3c46c7..727ba59 100644
+--- a/kernel/irq_work.c
++++ b/kernel/irq_work.c
+@@ -105,8 +105,10 @@ void irq_work_run(void)
+ 	if (llist_empty(this_list))
+ 		return;
+ 
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 	BUG_ON(!in_irq());
+ 	BUG_ON(!irqs_disabled());
++#endif
+ 
+ 	llnode = llist_del_all(this_list);
+ 	while (llnode != NULL) {
+diff --git a/kernel/timer.c b/kernel/timer.c
+index 7ea787b..a791a43 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -1362,7 +1362,7 @@ void update_process_times(int user_tick)
+ 	scheduler_tick();
+ 	run_local_timers();
+ 	rcu_check_callbacks(cpu, user_tick);
+-#ifdef CONFIG_IRQ_WORK
++#if defined(CONFIG_IRQ_WORK) && !defined(CONFIG_PREEMPT_RT_FULL)
+ 	if (in_irq())
+ 		irq_work_run();
+ #endif
+@@ -1376,6 +1376,10 @@ static void run_timer_softirq(struct softirq_action *h)
+ {
+ 	struct tvec_base *base = __this_cpu_read(tvec_bases);
+ 
++#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
++	irq_work_run();
++#endif
++
+ 	printk_tick();
+ 	hrtimer_run_pending();
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0235-console-make-rt-friendly.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0235-console-make-rt-friendly.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0235-console-make-rt-friendly.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0235-console-make-rt-friendly.patch.patch)
@@ -0,0 +1,88 @@
+From 7b992d36d3f990ac56ee7f9722def7a6f494cffd Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 17 Jul 2011 22:43:07 +0200
+Subject: [PATCH 235/271] console-make-rt-friendly.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/printk.c |   26 +++++++++++++++++++++++---
+ 1 file changed, 23 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/printk.c b/kernel/printk.c
+index 2b95bc0..fad23c8 100644
+--- a/kernel/printk.c
++++ b/kernel/printk.c
+@@ -504,6 +504,7 @@ static void __call_console_drivers(unsigned start, unsigned end)
+ {
+ 	struct console *con;
+ 
++	migrate_disable();
+ 	for_each_console(con) {
+ 		if (exclusive_console && con != exclusive_console)
+ 			continue;
+@@ -512,6 +513,7 @@ static void __call_console_drivers(unsigned start, unsigned end)
+ 				(con->flags & CON_ANYTIME)))
+ 			con->write(con, &LOG_BUF(start), end - start);
+ 	}
++	migrate_enable();
+ }
+ 
+ #ifdef CONFIG_EARLY_PRINTK
+@@ -827,12 +829,18 @@ static inline int can_use_console(unsigned int cpu)
+  * interrupts disabled. It should return with 'lockbuf_lock'
+  * released but interrupts still disabled.
+  */
+-static int console_trylock_for_printk(unsigned int cpu)
++static int console_trylock_for_printk(unsigned int cpu, unsigned long flags)
+ 	__releases(&logbuf_lock)
+ {
+ 	int retval = 0, wake = 0;
++#ifdef CONFIG_PREEMPT_RT_FULL
++	int lock = !early_boot_irqs_disabled && !irqs_disabled_flags(flags) &&
++		!preempt_count();
++#else
++	int lock = 1;
++#endif
+ 
+-	if (console_trylock()) {
++	if (lock && console_trylock()) {
+ 		retval = 1;
+ 
+ 		/*
+@@ -1010,8 +1018,15 @@ asmlinkage int vprintk(const char *fmt, va_list args)
+ 	 * will release 'logbuf_lock' regardless of whether it
+ 	 * actually gets the semaphore or not.
+ 	 */
+-	if (console_trylock_for_printk(this_cpu))
++	if (console_trylock_for_printk(this_cpu, flags)) {
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 		console_unlock();
++#else
++		raw_local_irq_restore(flags);
++		console_unlock();
++		raw_local_irq_save(flags);
++#endif
++	}
+ 
+ 	lockdep_on();
+ out_restore_irqs:
+@@ -1321,11 +1336,16 @@ again:
+ 		_con_start = con_start;
+ 		_log_end = log_end;
+ 		con_start = log_end;		/* Flush */
++#ifndef CONFIG_PREEMPT_RT_FULL
+ 		raw_spin_unlock(&logbuf_lock);
+ 		stop_critical_timings();	/* don't trace print latency */
+ 		call_console_drivers(_con_start, _log_end);
+ 		start_critical_timings();
+ 		local_irq_restore(flags);
++#else
++		raw_spin_unlock_irqrestore(&logbuf_lock, flags);
++		call_console_drivers(_con_start, _log_end);
++#endif
+ 	}
+ 	console_locked = 0;
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0236-printk-Disable-migration-instead-of-preemption.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0236-printk-Disable-migration-instead-of-preemption.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0236-printk-Disable-migration-instead-of-preemption.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0236-printk-Disable-migration-instead-of-preemption.patch)
@@ -0,0 +1,64 @@
+From 0457387c9839435b4d57bae9cff2f0e2aad30802 Mon Sep 17 00:00:00 2001
+From: Richard Weinberger <rw at linutronix.de>
+Date: Mon, 12 Dec 2011 14:35:56 +0100
+Subject: [PATCH 236/271] printk: Disable migration instead of preemption
+
+There is no need to disable preemption in vprintk(); migrate_disable()
+is sufficient. This fixes the following bug in -rt:
+
+[   14.759233] BUG: sleeping function called from invalid context
+at /home/rw/linux-rt/kernel/rtmutex.c:645
+[   14.759235] in_atomic(): 1, irqs_disabled(): 0, pid: 547, name: bash
+[   14.759244] Pid: 547, comm: bash Not tainted 3.0.12-rt29+ #3
+[   14.759246] Call Trace:
+[   14.759301]  [<ffffffff8106fade>] __might_sleep+0xeb/0xf0
+[   14.759318]  [<ffffffff810ad784>] rt_spin_lock_fastlock.constprop.9+0x21/0x43
+[   14.759336]  [<ffffffff8161fef0>] rt_spin_lock+0xe/0x10
+[   14.759354]  [<ffffffff81347ad1>] serial8250_console_write+0x81/0x121
+[   14.759366]  [<ffffffff8107ecd3>] __call_console_drivers+0x7c/0x93
+[   14.759369]  [<ffffffff8107ef31>] _call_console_drivers+0x5c/0x60
+[   14.759372]  [<ffffffff8107f7e5>] console_unlock+0x147/0x1a2
+[   14.759374]  [<ffffffff8107fd33>] vprintk+0x3ea/0x462
+[   14.759383]  [<ffffffff816160e0>] printk+0x51/0x53
+[   14.759399]  [<ffffffff811974e4>] ? proc_reg_poll+0x9a/0x9a
+[   14.759403]  [<ffffffff81335b42>] __handle_sysrq+0x50/0x14d
+[   14.759406]  [<ffffffff81335c8a>] write_sysrq_trigger+0x4b/0x53
+[   14.759408]  [<ffffffff81335c3f>] ? __handle_sysrq+0x14d/0x14d
+[   14.759410]  [<ffffffff81197583>] proc_reg_write+0x9f/0xbe
+[   14.759426]  [<ffffffff811497ec>] vfs_write+0xac/0xf3
+[   14.759429]  [<ffffffff8114a9b3>] ? fget_light+0x3a/0x9b
+[   14.759431]  [<ffffffff811499db>] sys_write+0x4a/0x6e
+[   14.759438]  [<ffffffff81625d52>] system_call_fastpath+0x16/0x1b
+
+Signed-off-by: Richard Weinberger <rw at linutronix.de>
+Link: http://lkml.kernel.org/r/1323696956-11445-1-git-send-email-rw@linutronix.de
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/printk.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/printk.c b/kernel/printk.c
+index fad23c8..13ea6a9 100644
+--- a/kernel/printk.c
++++ b/kernel/printk.c
+@@ -901,7 +901,7 @@ asmlinkage int vprintk(const char *fmt, va_list args)
+ 	boot_delay_msec();
+ 	printk_delay();
+ 
+-	preempt_disable();
++	migrate_disable();
+ 	/* This stops the holder of console_sem just where we want him */
+ 	raw_local_irq_save(flags);
+ 	this_cpu = smp_processor_id();
+@@ -1032,7 +1032,7 @@ asmlinkage int vprintk(const char *fmt, va_list args)
+ out_restore_irqs:
+ 	raw_local_irq_restore(flags);
+ 
+-	preempt_enable();
++	migrate_enable();
+ 	return printed_len;
+ }
+ EXPORT_SYMBOL(printk);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0237-power-use-generic-rwsem-on-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0237-power-use-generic-rwsem-on-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0237-power-use-generic-rwsem-on-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0237-power-use-generic-rwsem-on-rt.patch)
@@ -0,0 +1,29 @@
+From ea54720cb5b1ec946f2d8e6544998a7dd723dec7 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 10 Apr 2012 14:34:18 -0400
+Subject: [PATCH 237/271] power-use-generic-rwsem-on-rt
+
+---
+ arch/powerpc/Kconfig |    3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 7c93c7e..c504625 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -69,10 +69,11 @@ config LOCKDEP_SUPPORT
+ 
+ config RWSEM_GENERIC_SPINLOCK
+ 	bool
++	default y if PREEMPT_RT_FULL
+ 
+ config RWSEM_XCHGADD_ALGORITHM
+ 	bool
+-	default y
++	default y if !PREEMPT_RT_FULL
+ 
+ config GENERIC_LOCKBREAK
+ 	bool
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0238-power-disable-highmem-on-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0238-power-disable-highmem-on-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0238-power-disable-highmem-on-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0238-power-disable-highmem-on-rt.patch.patch)
@@ -0,0 +1,26 @@
+From ad77318d1737d00b2de554c3a087b9fb7d59f50a Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 18 Jul 2011 17:08:34 +0200
+Subject: [PATCH 238/271] power-disable-highmem-on-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/powerpc/Kconfig |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index c504625..d01baf8 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -276,7 +276,7 @@ menu "Kernel options"
+ 
+ config HIGHMEM
+ 	bool "High memory support"
+-	depends on PPC32
++	depends on PPC32 && !PREEMPT_RT_FULL
+ 
+ source kernel/time/Kconfig
+ source kernel/Kconfig.hz
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0239-arm-disable-highmem-on-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0239-arm-disable-highmem-on-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0239-arm-disable-highmem-on-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0239-arm-disable-highmem-on-rt.patch.patch)
@@ -0,0 +1,26 @@
+From af6a1f1ce3a0ab5add0ede2cf31095786fa40b5f Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 18 Jul 2011 17:09:28 +0200
+Subject: [PATCH 239/271] arm-disable-highmem-on-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/arm/Kconfig |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 6e88003..bb92097 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -1677,7 +1677,7 @@ config HAVE_ARCH_PFN_VALID
+ 
+ config HIGHMEM
+ 	bool "High Memory Support"
+-	depends on MMU
++	depends on MMU && !PREEMPT_RT_FULL
+ 	help
+ 	  The address space of ARM processors is only 4 Gigabytes large
+ 	  and it has to accommodate user address space, kernel address
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0240-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0240-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0240-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0240-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch)
@@ -0,0 +1,37 @@
+From 952d80801423eaa6cf09414c8f71890099eede95 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 1 May 2010 18:29:35 +0200
+Subject: [PATCH 240/271] ARM: at91: tclib: Default to tclib timer for RT
+
+RT is not too happy about the shared timer interrupt in AT91
+devices. Default to tclib timer for RT.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/misc/Kconfig |    3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
+index 1cb530c..951ae6c 100644
+--- a/drivers/misc/Kconfig
++++ b/drivers/misc/Kconfig
+@@ -82,6 +82,7 @@ config AB8500_PWM
+ config ATMEL_TCLIB
+ 	bool "Atmel AT32/AT91 Timer/Counter Library"
+ 	depends on (AVR32 || ARCH_AT91)
++	default y if PREEMPT_RT_FULL
+ 	help
+ 	  Select this if you want a library to allocate the Timer/Counter
+ 	  blocks found on many Atmel processors.  This facilitates using
+@@ -114,7 +115,7 @@ config ATMEL_TCB_CLKSRC_BLOCK
+ config ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK
+ 	bool "TC Block use 32 KiHz clock"
+ 	depends on ATMEL_TCB_CLKSRC
+-	default y
++	default y if !PREEMPT_RT_FULL
+ 	help
+ 	  Select this to use 32 KiHz base clock rate as TC block clock
+ 	  source for clock events.
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0241-mips-disable-highmem-on-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0241-mips-disable-highmem-on-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0241-mips-disable-highmem-on-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0241-mips-disable-highmem-on-rt.patch.patch)
@@ -0,0 +1,26 @@
+From 6c3284be883940e5832f1c37fb2a3793f6562b1d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 18 Jul 2011 17:10:12 +0200
+Subject: [PATCH 241/271] mips-disable-highmem-on-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/mips/Kconfig |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index d46f1da..9f02e8b 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -2040,7 +2040,7 @@ config CPU_R4400_WORKAROUNDS
+ #
+ config HIGHMEM
+ 	bool "High Memory Support"
+-	depends on 32BIT && CPU_SUPPORTS_HIGHMEM && SYS_SUPPORTS_HIGHMEM
++	depends on 32BIT && CPU_SUPPORTS_HIGHMEM && SYS_SUPPORTS_HIGHMEM && !PREEMPT_RT_FULL
+ 
+ config CPU_SUPPORTS_HIGHMEM
+ 	bool
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0242-net-Avoid-livelock-in-net_tx_action-on-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0242-net-Avoid-livelock-in-net_tx_action-on-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0242-net-Avoid-livelock-in-net_tx_action-on-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0242-net-Avoid-livelock-in-net_tx_action-on-RT.patch)
@@ -0,0 +1,97 @@
+From d87b6967a333be4a6aff553b8ed740f75d0be7f8 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <srostedt at redhat.com>
+Date: Thu, 6 Oct 2011 10:48:39 -0400
+Subject: [PATCH 242/271] net: Avoid livelock in net_tx_action() on RT
+
+qdisc_lock is taken w/o disabling interrupts or bottom halves. So code
+holding a qdisc_lock() can be interrupted and softirqs can run on the
+return of interrupt in !RT.
+
+The spin_trylock() in net_tx_action() makes sure that the softirq
+does not deadlock. When the lock can't be acquired q is requeued and
+the NET_TX softirq is raised. That causes the softirq to run over and
+over.
+
+That works in mainline as do_softirq() has a retry loop limit and
+leaves the softirq processing in the interrupt return path and
+schedules ksoftirqd. The task which holds qdisc_lock cannot be
+preempted, so the lock is released and either ksoftirqd or the next
+softirq in the return from interrupt path can proceed. Though it's a
+bit strange to actually run MAX_SOFTIRQ_RESTART (10) loops before it
+decides to bail out even if it's clear in the first iteration :)
+
+On RT all softirq processing is done in a FIFO thread and we don't
+have a loop limit, so ksoftirqd preempts the lock holder forever and
+unqueues and requeues until the reset button is hit.
+
+Due to the forced threading of ksoftirqd on RT we actually cannot
+deadlock on qdisc_lock because it's a "sleeping lock". So it's safe to
+replace the spin_trylock() with a spin_lock(). When contended,
+ksoftirqd is scheduled out and the lock holder can proceed.
+
+[ tglx: Massaged changelog and code comments ]
+
+Solved-by: Thomas Gleixner <tglx at linuxtronix.de>
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Tested-by: Carsten Emde <cbe at osadl.org>
+Cc: Clark Williams <williams at redhat.com>
+Cc: John Kacur <jkacur at redhat.com>
+Cc: Luis Claudio R. Goncalves <lclaudio at redhat.com>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ net/core/dev.c |   32 +++++++++++++++++++++++++++++++-
+ 1 file changed, 31 insertions(+), 1 deletion(-)
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 0916344..546cc6a 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3038,6 +3038,36 @@ int netif_rx_ni(struct sk_buff *skb)
+ }
+ EXPORT_SYMBOL(netif_rx_ni);
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++/*
++ * RT runs ksoftirqd as a real time thread and the root_lock is a
++ * "sleeping spinlock". If the trylock fails then we can go into an
++ * infinite loop when ksoftirqd preempted the task which actually
++ * holds the lock, because we requeue q and raise NET_TX softirq
++ * causing ksoftirqd to loop forever.
++ *
++ * It's safe to use spin_lock on RT here as softirqs run in thread
++ * context and cannot deadlock against the thread which is holding
++ * root_lock.
++ *
++ * On !RT the trylock might fail, but there we bail out from the
++ * softirq loop after 10 attempts which we can't do on RT. And the
++ * task holding root_lock cannot be preempted, so the only downside of
++ * that trylock is that we need 10 loops to decide that we should have
++ * given up in the first one :)
++ */
++static inline int take_root_lock(spinlock_t *lock)
++{
++	spin_lock(lock);
++	return 1;
++}
++#else
++static inline int take_root_lock(spinlock_t *lock)
++{
++	return spin_trylock(lock);
++}
++#endif
++
+ static void net_tx_action(struct softirq_action *h)
+ {
+ 	struct softnet_data *sd = &__get_cpu_var(softnet_data);
+@@ -3076,7 +3106,7 @@ static void net_tx_action(struct softirq_action *h)
+ 			head = head->next_sched;
+ 
+ 			root_lock = qdisc_lock(q);
+-			if (spin_trylock(root_lock)) {
++			if (take_root_lock(root_lock)) {
+ 				smp_mb__before_clear_bit();
+ 				clear_bit(__QDISC_STATE_SCHED,
+ 					  &q->state);
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0243-ping-sysrq.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0243-ping-sysrq.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0243-ping-sysrq.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0243-ping-sysrq.patch.patch)
@@ -0,0 +1,132 @@
+From cb365ae10d414b663ed50187d49a02b53876218b Mon Sep 17 00:00:00 2001
+From: Carsten Emde <C.Emde at osadl.org>
+Date: Tue, 19 Jul 2011 13:51:17 +0100
+Subject: [PATCH 243/271] ping-sysrq.patch
+
+There are (probably rare) situations when a system has crashed and the system
+console becomes unresponsive but the network icmp layer still is alive.
+Wouldn't it be wonderful, if we then could submit a sysreq command via ping?
+
+This patch provides this facility. Please consult the updated documentation
+Documentation/sysrq.txt for details.
+
+Signed-off-by: Carsten Emde <C.Emde at osadl.org>
+---
+ Documentation/sysrq.txt    |   11 +++++++++--
+ include/net/netns/ipv4.h   |    1 +
+ net/ipv4/icmp.c            |   30 ++++++++++++++++++++++++++++++
+ net/ipv4/sysctl_net_ipv4.c |    7 +++++++
+ 4 files changed, 47 insertions(+), 2 deletions(-)
+
+diff --git a/Documentation/sysrq.txt b/Documentation/sysrq.txt
+index 312e375..9981f30 100644
+--- a/Documentation/sysrq.txt
++++ b/Documentation/sysrq.txt
+@@ -57,10 +57,17 @@ On PowerPC - Press 'ALT - Print Screen (or F13) - <command key>,
+ On other - If you know of the key combos for other architectures, please
+            let me know so I can add them to this section.
+ 
+-On all -  write a character to /proc/sysrq-trigger.  e.g.:
+-
++On all -  write a character to /proc/sysrq-trigger, e.g.:
+ 		echo t > /proc/sysrq-trigger
+ 
++On all - Enable network SysRq by writing a cookie to icmp_echo_sysrq, e.g.
++		echo 0x01020304 >/proc/sys/net/ipv4/icmp_echo_sysrq
++	 Send an ICMP echo request with this pattern plus the particular
++	 SysRq command key. Example:
++	 	# ping -c1 -s57 -p0102030468
++	 will trigger the SysRq-H (help) command.
++
++
+ *  What are the 'command' keys?
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ 'b'     - Will immediately reboot the system without syncing or unmounting
+diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
+index d786b4f..8cef1d1 100644
+--- a/include/net/netns/ipv4.h
++++ b/include/net/netns/ipv4.h
+@@ -47,6 +47,7 @@ struct netns_ipv4 {
+ 
+ 	int sysctl_icmp_echo_ignore_all;
+ 	int sysctl_icmp_echo_ignore_broadcasts;
++	int sysctl_icmp_echo_sysrq;
+ 	int sysctl_icmp_ignore_bogus_error_responses;
+ 	int sysctl_icmp_ratelimit;
+ 	int sysctl_icmp_ratemask;
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index ab188ae..028eb47 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -67,6 +67,7 @@
+ #include <linux/jiffies.h>
+ #include <linux/kernel.h>
+ #include <linux/fcntl.h>
++#include <linux/sysrq.h>
+ #include <linux/socket.h>
+ #include <linux/in.h>
+ #include <linux/inet.h>
+@@ -801,6 +802,30 @@ out_err:
+ }
+ 
+ /*
++ * 32bit and 64bit have different timestamp length, so we check for
++ * the cookie at offset 20 and verify it is repeated at offset 50
++ */
++#define CO_POS0		20
++#define CO_POS1		50
++#define CO_SIZE		sizeof(int)
++#define ICMP_SYSRQ_SIZE	57
++
++/*
++ * We got a ICMP_SYSRQ_SIZE sized ping request. Check for the cookie
++ * pattern and if it matches send the next byte as a trigger to sysrq.
++ */
++static void icmp_check_sysrq(struct net *net, struct sk_buff *skb)
++{
++	int cookie = htonl(net->ipv4.sysctl_icmp_echo_sysrq);
++	char *p = skb->data;
++
++	if (!memcmp(&cookie, p + CO_POS0, CO_SIZE) &&
++	    !memcmp(&cookie, p + CO_POS1, CO_SIZE) &&
++	    p[CO_POS0 + CO_SIZE] == p[CO_POS1 + CO_SIZE])
++		handle_sysrq(p[CO_POS0 + CO_SIZE]);
++}
++
++/*
+  *	Handle ICMP_ECHO ("ping") requests.
+  *
+  *	RFC 1122: 3.2.2.6 MUST have an echo server that answers ICMP echo
+@@ -827,6 +852,11 @@ static void icmp_echo(struct sk_buff *skb)
+ 		icmp_param.data_len	   = skb->len;
+ 		icmp_param.head_len	   = sizeof(struct icmphdr);
+ 		icmp_reply(&icmp_param, skb);
++
++		if (skb->len == ICMP_SYSRQ_SIZE &&
++		    net->ipv4.sysctl_icmp_echo_sysrq) {
++			icmp_check_sysrq(net, skb);
++		}
+ 	}
+ }
+ 
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index 69fd720..0ecdb72 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -680,6 +680,13 @@ static struct ctl_table ipv4_net_table[] = {
+ 		.proc_handler	= proc_dointvec
+ 	},
+ 	{
++		.procname	= "icmp_echo_sysrq",
++		.data		= &init_net.ipv4.sysctl_icmp_echo_sysrq,
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec
++	},
++	{
+ 		.procname	= "icmp_ignore_bogus_error_responses",
+ 		.data		= &init_net.ipv4.sysctl_icmp_ignore_bogus_error_responses,
+ 		.maxlen		= sizeof(int),
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0244-kgdb-serial-Short-term-workaround.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0244-kgdb-serial-Short-term-workaround.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0244-kgdb-serial-Short-term-workaround.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0244-kgdb-serial-Short-term-workaround.patch)
@@ -0,0 +1,119 @@
+From 6494d399f04fefaad6a9272ca415c64e5eb58a0c Mon Sep 17 00:00:00 2001
+From: Jason Wessel <jason.wessel at windriver.com>
+Date: Thu, 28 Jul 2011 12:42:23 -0500
+Subject: [PATCH 244/271] kgdb/serial: Short term workaround
+
+On 07/27/2011 04:37 PM, Thomas Gleixner wrote:
+>  - KGDB (not yet disabled) is reportedly unusable on -rt right now due
+>    to missing hacks in the console locking which I dropped on purpose.
+>
+
+To work around this in the short term you can use this patch, in
+addition to the clocksource watchdog patch that Thomas brewed up.
+
+Comments are welcome of course.  Ultimately the right solution is to
+change separation between the console and the HW to have a polled mode
++ work queue so as not to introduce any kind of latency.
+
+Thanks,
+Jason.
+---
+ drivers/tty/serial/8250.c |   13 +++++++++----
+ include/linux/kdb.h       |    2 ++
+ kernel/debug/kdb/kdb_io.c |    6 ++----
+ 3 files changed, 13 insertions(+), 8 deletions(-)
+
+diff --git a/drivers/tty/serial/8250.c b/drivers/tty/serial/8250.c
+index a3d3404..f15a1df 100644
+--- a/drivers/tty/serial/8250.c
++++ b/drivers/tty/serial/8250.c
+@@ -38,6 +38,7 @@
+ #include <linux/nmi.h>
+ #include <linux/mutex.h>
+ #include <linux/slab.h>
++#include <linux/kdb.h>
+ 
+ #include <asm/io.h>
+ #include <asm/irq.h>
+@@ -2856,10 +2857,14 @@ serial8250_console_write(struct console *co, const char *s, unsigned int count)
+ 
+ 	touch_nmi_watchdog();
+ 
+-	if (up->port.sysrq || oops_in_progress)
+-		locked = spin_trylock_irqsave(&up->port.lock, flags);
+-	else
+-		spin_lock_irqsave(&up->port.lock, flags);
++	if (unlikely(in_kdb_printk())) {
++		locked = 0;
++	} else {
++		if (up->port.sysrq || oops_in_progress)
++			locked = spin_trylock_irqsave(&up->port.lock, flags);
++		else
++			spin_lock_irqsave(&up->port.lock, flags);
++	}
+ 
+ 	/*
+ 	 *	First save the IER then disable the interrupts
+diff --git a/include/linux/kdb.h b/include/linux/kdb.h
+index 0647258..0d1ebfc 100644
+--- a/include/linux/kdb.h
++++ b/include/linux/kdb.h
+@@ -150,12 +150,14 @@ extern int kdb_register(char *, kdb_func_t, char *, char *, short);
+ extern int kdb_register_repeat(char *, kdb_func_t, char *, char *,
+ 			       short, kdb_repeat_t);
+ extern int kdb_unregister(char *);
++#define in_kdb_printk() (kdb_trap_printk)
+ #else /* ! CONFIG_KGDB_KDB */
+ #define kdb_printf(...)
+ #define kdb_init(x)
+ #define kdb_register(...)
+ #define kdb_register_repeat(...)
+ #define kdb_uregister(x)
++#define in_kdb_printk() (0)
+ #endif	/* CONFIG_KGDB_KDB */
+ enum {
+ 	KDB_NOT_INITIALIZED,
+diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c
+index 4802eb5..5b7455f 100644
+--- a/kernel/debug/kdb/kdb_io.c
++++ b/kernel/debug/kdb/kdb_io.c
+@@ -553,7 +553,6 @@ int vkdb_printf(const char *fmt, va_list ap)
+ 	int diag;
+ 	int linecount;
+ 	int logging, saved_loglevel = 0;
+-	int saved_trap_printk;
+ 	int got_printf_lock = 0;
+ 	int retlen = 0;
+ 	int fnd, len;
+@@ -564,8 +563,6 @@ int vkdb_printf(const char *fmt, va_list ap)
+ 	unsigned long uninitialized_var(flags);
+ 
+ 	preempt_disable();
+-	saved_trap_printk = kdb_trap_printk;
+-	kdb_trap_printk = 0;
+ 
+ 	/* Serialize kdb_printf if multiple cpus try to write at once.
+ 	 * But if any cpu goes recursive in kdb, just print the output,
+@@ -821,7 +818,6 @@ kdb_print_out:
+ 	} else {
+ 		__release(kdb_printf_lock);
+ 	}
+-	kdb_trap_printk = saved_trap_printk;
+ 	preempt_enable();
+ 	return retlen;
+ }
+@@ -831,9 +827,11 @@ int kdb_printf(const char *fmt, ...)
+ 	va_list ap;
+ 	int r;
+ 
++	kdb_trap_printk++;
+ 	va_start(ap, fmt);
+ 	r = vkdb_printf(fmt, ap);
+ 	va_end(ap);
++	kdb_trap_printk--;
+ 
+ 	return r;
+ }
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0245-add-sys-kernel-realtime-entry.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0245-add-sys-kernel-realtime-entry.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0245-add-sys-kernel-realtime-entry.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0245-add-sys-kernel-realtime-entry.patch)
@@ -0,0 +1,53 @@
+From 549bfe0dcabd1cc1b7d47d00dd0ee3ee6ae860f1 Mon Sep 17 00:00:00 2001
+From: Clark Williams <williams at redhat.com>
+Date: Sat, 30 Jul 2011 21:55:53 -0500
+Subject: [PATCH 245/271] add /sys/kernel/realtime entry
+
+Add a /sys/kernel entry to indicate that the kernel is a
+realtime kernel.
+
+Clark says that he needs this for udev rules, udev needs to evaluate
+if its a PREEMPT_RT kernel a few thousand times and parsing uname
+output is too slow or so.
+
+Are there better solutions? Should it exist and return 0 on !-rt?
+
+Signed-off-by: Clark Williams <williams at redhat.com>
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+---
+ kernel/ksysfs.c |   12 ++++++++++++
+ 1 file changed, 12 insertions(+)
+
+diff --git a/kernel/ksysfs.c b/kernel/ksysfs.c
+index 4e316e1..a546d33 100644
+--- a/kernel/ksysfs.c
++++ b/kernel/ksysfs.c
+@@ -133,6 +133,15 @@ KERNEL_ATTR_RO(vmcoreinfo);
+ 
+ #endif /* CONFIG_KEXEC */
+ 
++#if defined(CONFIG_PREEMPT_RT_FULL)
++static ssize_t  realtime_show(struct kobject *kobj,
++			      struct kobj_attribute *attr, char *buf)
++{
++	return sprintf(buf, "%d\n", 1);
++}
++KERNEL_ATTR_RO(realtime);
++#endif
++
+ /* whether file capabilities are enabled */
+ static ssize_t fscaps_show(struct kobject *kobj,
+ 				  struct kobj_attribute *attr, char *buf)
+@@ -182,6 +191,9 @@ static struct attribute * kernel_attrs[] = {
+ 	&kexec_crash_size_attr.attr,
+ 	&vmcoreinfo_attr.attr,
+ #endif
++#ifdef CONFIG_PREEMPT_RT_FULL
++	&realtime_attr.attr,
++#endif
+ 	NULL
+ };
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0246-mm-rt-kmap_atomic-scheduling.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0246-mm-rt-kmap_atomic-scheduling.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0246-mm-rt-kmap_atomic-scheduling.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0246-mm-rt-kmap_atomic-scheduling.patch)
@@ -0,0 +1,123 @@
+From 6afdb3d9259405e0fcb8b0ed0202fd222354ae4b Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Thu, 28 Jul 2011 10:43:51 +0200
+Subject: [PATCH 246/271] mm, rt: kmap_atomic scheduling
+
+In fact, with migrate_disable() existing one could play games with
+kmap_atomic. You could save/restore the kmap_atomic slots on context
+switch (if there are any in use of course), this should be esp easy now
+that we have a kmap_atomic stack.
+
+Something like the below.. it wants replacing all the preempt_disable()
+stuff with pagefault_disable() && migrate_disable() of course, but then
+you can flip kmaps around like below.
+
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+[dvhart at linux.intel.com: build fix]
+Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins
+---
+ arch/x86/kernel/process_32.c |   36 ++++++++++++++++++++++++++++++++++++
+ include/linux/sched.h        |    5 +++++
+ mm/memory.c                  |    2 ++
+ 3 files changed, 43 insertions(+)
+
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index ada175e3..20f1573 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -39,6 +39,7 @@
+ #include <linux/io.h>
+ #include <linux/kdebug.h>
+ #include <linux/cpuidle.h>
++#include <linux/highmem.h>
+ 
+ #include <asm/pgtable.h>
+ #include <asm/system.h>
+@@ -339,6 +340,41 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 		     task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
+ 		__switch_to_xtra(prev_p, next_p, tss);
+ 
++#if defined CONFIG_PREEMPT_RT_FULL && defined CONFIG_HIGHMEM
++	/*
++	 * Save @prev's kmap_atomic stack
++	 */
++	prev_p->kmap_idx = __this_cpu_read(__kmap_atomic_idx);
++	if (unlikely(prev_p->kmap_idx)) {
++		int i;
++
++		for (i = 0; i < prev_p->kmap_idx; i++) {
++			int idx = i + KM_TYPE_NR * smp_processor_id();
++
++			pte_t *ptep = kmap_pte - idx;
++			prev_p->kmap_pte[i] = *ptep;
++			kpte_clear_flush(ptep, __fix_to_virt(FIX_KMAP_BEGIN + idx));
++		}
++
++		__this_cpu_write(__kmap_atomic_idx, 0);
++	}
++
++	/*
++	 * Restore @next_p's kmap_atomic stack
++	 */
++	if (unlikely(next_p->kmap_idx)) {
++		int i;
++
++		__this_cpu_write(__kmap_atomic_idx, next_p->kmap_idx);
++
++		for (i = 0; i < next_p->kmap_idx; i++) {
++			int idx = i + KM_TYPE_NR * smp_processor_id();
++
++			set_pte(kmap_pte - idx, next_p->kmap_pte[i]);
++		}
++	}
++#endif
++
+ 	/*
+ 	 * Leave lazy mode, flushing any hypercalls made here.
+ 	 * This must be done before restoring TLS segments so
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index a84a901..1f6b11a 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -63,6 +63,7 @@ struct sched_param {
+ #include <linux/nodemask.h>
+ #include <linux/mm_types.h>
+ 
++#include <asm/kmap_types.h>
+ #include <asm/system.h>
+ #include <asm/page.h>
+ #include <asm/ptrace.h>
+@@ -1603,6 +1604,10 @@ struct task_struct {
+ 	struct rcu_head put_rcu;
+ 	int softirq_nestcnt;
+ #endif
++#if defined CONFIG_PREEMPT_RT_FULL && defined CONFIG_HIGHMEM
++	int kmap_idx;
++	pte_t kmap_pte[KM_TYPE_NR];
++#endif
+ };
+ 
+ #ifdef CONFIG_PREEMPT_RT_FULL
+diff --git a/mm/memory.c b/mm/memory.c
+index af0df1a..a3ae5e7 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3447,6 +3447,7 @@ unlock:
+ #ifdef CONFIG_PREEMPT_RT_FULL
+ void pagefault_disable(void)
+ {
++	migrate_disable();
+ 	current->pagefault_disabled++;
+ 	/*
+ 	 * make sure to have issued the store before a pagefault
+@@ -3464,6 +3465,7 @@ void pagefault_enable(void)
+ 	 */
+ 	barrier();
+ 	current->pagefault_disabled--;
++	migrate_enable();
+ }
+ EXPORT_SYMBOL_GPL(pagefault_enable);
+ #endif
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0247-ipc-sem-Rework-semaphore-wakeups.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0247-ipc-sem-Rework-semaphore-wakeups.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0247-ipc-sem-Rework-semaphore-wakeups.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0247-ipc-sem-Rework-semaphore-wakeups.patch)
@@ -0,0 +1,75 @@
+From 3ff09c18bef2ba0721662beea7bf8bfb82f07e48 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Date: Tue, 13 Sep 2011 15:09:40 +0200
+Subject: [PATCH 247/271] ipc/sem: Rework semaphore wakeups
+
+Current sysv sems have a weird ass wakeup scheme that involves keeping
+preemption disabled over a potential O(n^2) loop and busy waiting on
+that on other CPUs.
+
+Kill this and simply wake the task directly from under the sem_lock.
+
+This was discovered by a migrate_disable() debug feature that
+disallows:
+
+  spin_lock();
+  preempt_disable();
+  spin_unlock()
+  preempt_enable();
+
+Cc: Manfred Spraul <manfred at colorfullife.com>
+Suggested-by: Thomas Gleixner <tglx at linutronix.de>
+Reported-by: Mike Galbraith <efault at gmx.de>
+Signed-off-by: Peter Zijlstra <a.p.zijlstra at chello.nl>
+Cc: Manfred Spraul <manfred at colorfullife.com>
+Link: http://lkml.kernel.org/r/1315994224.5040.1.camel@twins
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ ipc/sem.c |   10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+diff --git a/ipc/sem.c b/ipc/sem.c
+index 5215a81..5eaf684 100644
+--- a/ipc/sem.c
++++ b/ipc/sem.c
+@@ -461,6 +461,13 @@ undo:
+ static void wake_up_sem_queue_prepare(struct list_head *pt,
+ 				struct sem_queue *q, int error)
+ {
++#ifdef CONFIG_PREEMPT_RT_BASE
++	struct task_struct *p = q->sleeper;
++	get_task_struct(p);
++	q->status = error;
++	wake_up_process(p);
++	put_task_struct(p);
++#else
+ 	if (list_empty(pt)) {
+ 		/*
+ 		 * Hold preempt off so that we don't get preempted and have the
+@@ -472,6 +479,7 @@ static void wake_up_sem_queue_prepare(struct list_head *pt,
+ 	q->pid = error;
+ 
+ 	list_add_tail(&q->simple_list, pt);
++#endif
+ }
+ 
+ /**
+@@ -485,6 +493,7 @@ static void wake_up_sem_queue_prepare(struct list_head *pt,
+  */
+ static void wake_up_sem_queue_do(struct list_head *pt)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ 	struct sem_queue *q, *t;
+ 	int did_something;
+ 
+@@ -497,6 +506,7 @@ static void wake_up_sem_queue_do(struct list_head *pt)
+ 	}
+ 	if (did_something)
+ 		preempt_enable();
++#endif
+ }
+ 
+ static void unlink_queue(struct sem_array *sma, struct sem_queue *q)
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0248-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0248-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0248-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0248-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch)
@@ -0,0 +1,169 @@
+From 4cfe99af9a7fab14a52aeff48f057bc66720a6ca Mon Sep 17 00:00:00 2001
+From: Frank Rowand <frank.rowand at am.sony.com>
+Date: Fri, 23 Sep 2011 13:43:12 -0700
+Subject: [PATCH 248/271] sysrq: Allow immediate Magic SysRq output for
+ PREEMPT_RT_FULL
+
+Add a CONFIG option to allow the output from Magic SysRq to be output
+immediately, even if this causes large latencies.
+
+If PREEMPT_RT_FULL, printk() will not try to acquire the console lock
+when interrupts or preemption are disabled.  If the console lock is
+not acquired the printk() output will be buffered, but will not be
+output immediately. Some drivers call into the Magic SysRq code
+with interrupts or preemption disabled, so the output of Magic SysRq
+will be buffered instead of printing immediately if this option is
+not selected.
+
+Even with this option selected, Magic SysRq output will be delayed
+if the attempt to acquire the console lock fails.
+
+Signed-off-by: Frank Rowand <frank.rowand at am.sony.com>
+Link: http://lkml.kernel.org/r/4E7CEF60.5020508@am.sony.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+[ukleinek: make apply on top of debian/sysrq-mask.patch]
+
+ drivers/tty/serial/cpm_uart/cpm_uart_core.c |    2 +-
+ drivers/tty/sysrq.c                         |   23 +++++++++++++++++++++++
+ include/linux/sysrq.h                       |    5 +++++
+ kernel/printk.c                             |    5 +++--
+ lib/Kconfig.debug                           |   22 ++++++++++++++++++++++
+ 5 files changed, 54 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index b418947..a8b0559 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1226,7 +1226,7 @@ static void cpm_uart_console_write(struct console *co, const char *s,
+ {
+ 	struct uart_cpm_port *pinfo = &cpm_uart_ports[co->index];
+ 	unsigned long flags;
+-	int nolock = oops_in_progress;
++	int nolock = oops_in_progress || sysrq_in_progress;
+ 
+ 	if (unlikely(nolock)) {
+ 		local_irq_save(flags);
+diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
+index 43db715..5219738 100644
+--- a/drivers/tty/sysrq.c
++++ b/drivers/tty/sysrq.c
+@@ -492,6 +492,23 @@ static void __sysrq_put_key_op(int key, struct sysrq_key_op *op_p)
+                 sysrq_key_table[i] = op_p;
+ }
+ 
++#ifdef CONFIG_MAGIC_SYSRQ_FORCE_PRINTK
++
++int sysrq_in_progress;
++
++static void set_sysrq_in_progress(int value)
++{
++	sysrq_in_progress = value;
++}
++
++#else
++
++static void set_sysrq_in_progress(int value)
++{
++}
++
++#endif
++
+ void __handle_sysrq(int key, bool check_mask)
+ {
+ 	struct sysrq_key_op *op_p;
+@@ -500,6 +517,9 @@ void __handle_sysrq(int key, bool check_mask)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&sysrq_key_table_lock, flags);
++
++	set_sysrq_in_progress(1);
++
+ 	/*
+ 	 * Raise the apparent loglevel to maximum so that the sysrq header
+ 	 * is shown to provide the user with positive feedback.  We do not
+@@ -541,6 +561,9 @@ void __handle_sysrq(int key, bool check_mask)
+ 		printk("\n");
+ 		console_loglevel = orig_log_level;
+ 	}
++
++	set_sysrq_in_progress(0);
++
+ 	spin_unlock_irqrestore(&sysrq_key_table_lock, flags);
+ }
+ 
+diff --git a/include/linux/sysrq.h b/include/linux/sysrq.h
+index 7faf933..d224c0b 100644
+--- a/include/linux/sysrq.h
++++ b/include/linux/sysrq.h
+@@ -38,6 +38,11 @@ struct sysrq_key_op {
+ 	int enable_mask;
+ };
+ 
++#ifdef CONFIG_MAGIC_SYSRQ_FORCE_PRINTK
++extern int sysrq_in_progress;
++#else
++#define sysrq_in_progress 0
++#endif
+ #ifdef CONFIG_MAGIC_SYSRQ
+ 
+ /* Generic SysRq interface -- you may call it from any device driver, supplying
+diff --git a/kernel/printk.c b/kernel/printk.c
+index 13ea6a9..9eabbbb 100644
+--- a/kernel/printk.c
++++ b/kernel/printk.c
+@@ -21,6 +21,7 @@
+ #include <linux/tty.h>
+ #include <linux/tty_driver.h>
+ #include <linux/console.h>
++#include <linux/sysrq.h>
+ #include <linux/init.h>
+ #include <linux/jiffies.h>
+ #include <linux/nmi.h>
+@@ -834,8 +835,8 @@ static int console_trylock_for_printk(unsigned int cpu, unsigned long flags)
+ {
+ 	int retval = 0, wake = 0;
+ #ifdef CONFIG_PREEMPT_RT_FULL
+-	int lock = !early_boot_irqs_disabled && !irqs_disabled_flags(flags) &&
+-		!preempt_count();
++	int lock = (!early_boot_irqs_disabled && !irqs_disabled_flags(flags) &&
++		!preempt_count()) || sysrq_in_progress;
+ #else
+ 	int lock = 1;
+ #endif
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index c347db3..13a937b 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -62,6 +62,28 @@ config MAGIC_SYSRQ
+ 	  Specifies the default mask for the allowed SysRq keys.  This can be
+ 	  used to disable several sensitive keys by default.
+ 
++config MAGIC_SYSRQ_FORCE_PRINTK
++	bool "Force printk from Magic SysRq"
++	depends on MAGIC_SYSRQ && PREEMPT_RT_FULL
++	default n
++	help
++	  Allow the output from Magic SysRq to be output immediately, even if
++	  this causes large latencies.  This can cause performance problems
++	  for real-time processes.
++
++	  If PREEMPT_RT_FULL, printk() will not try to acquire the console lock
++	  when interrupts or preemption are disabled.  If the console lock is
++	  not acquired the printk() output will be buffered, but will not be
++	  output immediately.  Some drivers call into the Magic SysRq code
++	  with interrupts or preemption disabled, so the output of Magic SysRq
++	  will be buffered instead of printing immediately if this option is
++	  not selected.
++
++	  Even with this option selected, Magic SysRq output will be delayed
++	  if the attempt to acquire the console lock fails.
++
++	  Don't say Y unless you really know what this hack does.
++
+ config STRIP_ASM_SYMS
+ 	bool "Strip assembler-generated symbols during link"
+ 	default n
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0249-x86-kvm-require-const-tsc-for-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0249-x86-kvm-require-const-tsc-for-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0249-x86-kvm-require-const-tsc-for-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0249-x86-kvm-require-const-tsc-for-rt.patch.patch)
@@ -0,0 +1,31 @@
+From ddc2b463d6d9d9eefe4aeb06a7bc0eb83e2b9d1f Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 6 Nov 2011 12:26:18 +0100
+Subject: [PATCH 249/271] x86-kvm-require-const-tsc-for-rt.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/kvm/x86.c |    7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 4fc5323..18fc878 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -5210,6 +5210,13 @@ int kvm_arch_init(void *opaque)
+ 		goto out;
+ 	}
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++	if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC)) {
++		printk(KERN_ERR "RT requires X86_FEATURE_CONSTANT_TSC\n");
++		return -EOPNOTSUPP;
++	}
++#endif
++
+ 	r = kvm_mmu_module_init();
+ 	if (r)
+ 		goto out;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0250-scsi-fcoe-rt-aware.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0250-scsi-fcoe-rt-aware.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0250-scsi-fcoe-rt-aware.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0250-scsi-fcoe-rt-aware.patch.patch)
@@ -0,0 +1,116 @@
+From db68ca60955dd57030651ac162579f48374920de Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sat, 12 Nov 2011 14:00:48 +0100
+Subject: [PATCH 250/271] scsi-fcoe-rt-aware.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/scsi/fcoe/fcoe.c      |   16 ++++++++--------
+ drivers/scsi/fcoe/fcoe_ctlr.c |    4 ++--
+ drivers/scsi/libfc/fc_exch.c  |    4 ++--
+ 3 files changed, 12 insertions(+), 12 deletions(-)
+
+diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
+index 8d67467..4085187 100644
+--- a/drivers/scsi/fcoe/fcoe.c
++++ b/drivers/scsi/fcoe/fcoe.c
+@@ -1156,7 +1156,7 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
+ 	struct sk_buff *skb;
+ #ifdef CONFIG_SMP
+ 	struct fcoe_percpu_s *p0;
+-	unsigned targ_cpu = get_cpu();
++	unsigned targ_cpu = get_cpu_light();
+ #endif /* CONFIG_SMP */
+ 
+ 	FCOE_DBG("Destroying receive thread for CPU %d\n", cpu);
+@@ -1212,7 +1212,7 @@ static void fcoe_percpu_thread_destroy(unsigned int cpu)
+ 			kfree_skb(skb);
+ 		spin_unlock_bh(&p->fcoe_rx_list.lock);
+ 	}
+-	put_cpu();
++	put_cpu_light();
+ #else
+ 	/*
+ 	 * This a non-SMP scenario where the singular Rx thread is
+@@ -1435,11 +1435,11 @@ err2:
+ static int fcoe_alloc_paged_crc_eof(struct sk_buff *skb, int tlen)
+ {
+ 	struct fcoe_percpu_s *fps;
+-	int rc;
++	int rc, cpu = get_cpu_light();
+ 
+-	fps = &get_cpu_var(fcoe_percpu);
++	fps = &per_cpu(fcoe_percpu, cpu);
+ 	rc = fcoe_get_paged_crc_eof(skb, tlen, fps);
+-	put_cpu_var(fcoe_percpu);
++	put_cpu_light();
+ 
+ 	return rc;
+ }
+@@ -1680,7 +1680,7 @@ static void fcoe_recv_frame(struct sk_buff *skb)
+ 	 */
+ 	hp = (struct fcoe_hdr *) skb_network_header(skb);
+ 
+-	stats = per_cpu_ptr(lport->dev_stats, get_cpu());
++	stats = per_cpu_ptr(lport->dev_stats, get_cpu_light());
+ 	if (unlikely(FC_FCOE_DECAPS_VER(hp) != FC_FCOE_VER)) {
+ 		if (stats->ErrorFrames < 5)
+ 			printk(KERN_WARNING "fcoe: FCoE version "
+@@ -1712,13 +1712,13 @@ static void fcoe_recv_frame(struct sk_buff *skb)
+ 		goto drop;
+ 
+ 	if (!fcoe_filter_frames(lport, fp)) {
+-		put_cpu();
++		put_cpu_light();
+ 		fc_exch_recv(lport, fp);
+ 		return;
+ 	}
+ drop:
+ 	stats->ErrorFrames++;
+-	put_cpu();
++	put_cpu_light();
+ 	kfree_skb(skb);
+ }
+ 
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index e7522dc..bfb83c0 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -719,7 +719,7 @@ static unsigned long fcoe_ctlr_age_fcfs(struct fcoe_ctlr *fip)
+ 	unsigned long sel_time = 0;
+ 	struct fcoe_dev_stats *stats;
+ 
+-	stats = per_cpu_ptr(fip->lp->dev_stats, get_cpu());
++	stats = per_cpu_ptr(fip->lp->dev_stats, get_cpu_light());
+ 
+ 	list_for_each_entry_safe(fcf, next, &fip->fcfs, list) {
+ 		deadline = fcf->time + fcf->fka_period + fcf->fka_period / 2;
+@@ -752,7 +752,7 @@ static unsigned long fcoe_ctlr_age_fcfs(struct fcoe_ctlr *fip)
+ 				sel_time = fcf->time;
+ 		}
+ 	}
+-	put_cpu();
++	put_cpu_light();
+ 	if (sel_time && !fip->sel_fcf && !fip->sel_time) {
+ 		sel_time += msecs_to_jiffies(FCOE_CTLR_START_DELAY);
+ 		fip->sel_time = sel_time;
+diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c
+index 9de9db2..340998f 100644
+--- a/drivers/scsi/libfc/fc_exch.c
++++ b/drivers/scsi/libfc/fc_exch.c
+@@ -724,10 +724,10 @@ static struct fc_exch *fc_exch_em_alloc(struct fc_lport *lport,
+ 	}
+ 	memset(ep, 0, sizeof(*ep));
+ 
+-	cpu = get_cpu();
++	cpu = get_cpu_light();
+ 	pool = per_cpu_ptr(mp->pool, cpu);
+ 	spin_lock_bh(&pool->lock);
+-	put_cpu();
++	put_cpu_light();
+ 
+ 	/* peek cache of free slot */
+ 	if (pool->left != FC_XID_UNKNOWN) {
+-- 
+1.7.10
+

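The patch above swaps get_cpu()/put_cpu() (which disable preemption) for get_cpu_light()/put_cpu_light() (which on RT only disable migration, leaving the section preemptible). A rough userspace sketch of that distinction — the counters and the fixed cpu id are stand-ins, not the kernel implementation:

```c
#include <assert.h>

/* Userspace model of the RT substitution: get_cpu() disables
 * preemption, get_cpu_light() only disables migration.  The two
 * counters are stand-ins for the real per-task kernel state. */
static int preempt_disable_count;
static int migrate_disable_count;

static int get_cpu(void)        { preempt_disable_count++; return 0; }
static void put_cpu(void)       { preempt_disable_count--; }
static int get_cpu_light(void)  { migrate_disable_count++; return 0; }
static void put_cpu_light(void) { migrate_disable_count--; }

/* Mirrors the pattern in fcoe_alloc_paged_crc_eof() after the patch:
 * the per-CPU slot stays stable because migration is disabled, but the
 * section may still be preempted (and may sleep on RT's sleeping
 * spinlocks) without tripping atomic-context checks. */
static int access_percpu_stats(int *percpu_stats)
{
    int cpu = get_cpu_light();
    percpu_stats[cpu]++;          /* safe: we cannot migrate away */
    put_cpu_light();
    return cpu;
}
```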
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0251-x86-crypto-Reduce-preempt-disabled-regions.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0251-x86-crypto-Reduce-preempt-disabled-regions.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0251-x86-crypto-Reduce-preempt-disabled-regions.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0251-x86-crypto-Reduce-preempt-disabled-regions.patch)
@@ -0,0 +1,118 @@
+From a5f782d4f3dc1bc7b0696a29d66f5586b6b42f7a Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz at infradead.org>
+Date: Mon, 14 Nov 2011 18:19:27 +0100
+Subject: [PATCH 251/271] x86: crypto: Reduce preempt disabled regions
+
+Restrict the preempt disabled regions to the actual floating point
+operations and enable preemption for the administrative actions.
+
+This is necessary on RT to avoid that kfree and other operations are
+called with preemption disabled.
+
+Reported-and-tested-by: Carsten Emde <cbe at osadl.org>
+Signed-off-by: Peter Zijlstra <peterz at infradead.org>
+Cc: stable-rt at vger.kernel.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/crypto/aesni-intel_glue.c |   24 +++++++++++++-----------
+ 1 file changed, 13 insertions(+), 11 deletions(-)
+
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index 545d0ce..0c9eaf1 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -289,14 +289,14 @@ static int ecb_encrypt(struct blkcipher_desc *desc,
+ 	err = blkcipher_walk_virt(desc, &walk);
+ 	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ 
+-	kernel_fpu_begin();
+ 	while ((nbytes = walk.nbytes)) {
++		kernel_fpu_begin();
+ 		aesni_ecb_enc(ctx, walk.dst.virt.addr, walk.src.virt.addr,
+-			      nbytes & AES_BLOCK_MASK);
++				nbytes & AES_BLOCK_MASK);
++		kernel_fpu_end();
+ 		nbytes &= AES_BLOCK_SIZE - 1;
+ 		err = blkcipher_walk_done(desc, &walk, nbytes);
+ 	}
+-	kernel_fpu_end();
+ 
+ 	return err;
+ }
+@@ -313,14 +313,14 @@ static int ecb_decrypt(struct blkcipher_desc *desc,
+ 	err = blkcipher_walk_virt(desc, &walk);
+ 	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ 
+-	kernel_fpu_begin();
+ 	while ((nbytes = walk.nbytes)) {
++		kernel_fpu_begin();
+ 		aesni_ecb_dec(ctx, walk.dst.virt.addr, walk.src.virt.addr,
+ 			      nbytes & AES_BLOCK_MASK);
++		kernel_fpu_end();
+ 		nbytes &= AES_BLOCK_SIZE - 1;
+ 		err = blkcipher_walk_done(desc, &walk, nbytes);
+ 	}
+-	kernel_fpu_end();
+ 
+ 	return err;
+ }
+@@ -359,14 +359,14 @@ static int cbc_encrypt(struct blkcipher_desc *desc,
+ 	err = blkcipher_walk_virt(desc, &walk);
+ 	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ 
+-	kernel_fpu_begin();
+ 	while ((nbytes = walk.nbytes)) {
++		kernel_fpu_begin();
+ 		aesni_cbc_enc(ctx, walk.dst.virt.addr, walk.src.virt.addr,
+ 			      nbytes & AES_BLOCK_MASK, walk.iv);
++		kernel_fpu_end();
+ 		nbytes &= AES_BLOCK_SIZE - 1;
+ 		err = blkcipher_walk_done(desc, &walk, nbytes);
+ 	}
+-	kernel_fpu_end();
+ 
+ 	return err;
+ }
+@@ -383,14 +383,14 @@ static int cbc_decrypt(struct blkcipher_desc *desc,
+ 	err = blkcipher_walk_virt(desc, &walk);
+ 	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ 
+-	kernel_fpu_begin();
+ 	while ((nbytes = walk.nbytes)) {
++		kernel_fpu_begin();
+ 		aesni_cbc_dec(ctx, walk.dst.virt.addr, walk.src.virt.addr,
+ 			      nbytes & AES_BLOCK_MASK, walk.iv);
++		kernel_fpu_end();
+ 		nbytes &= AES_BLOCK_SIZE - 1;
+ 		err = blkcipher_walk_done(desc, &walk, nbytes);
+ 	}
+-	kernel_fpu_end();
+ 
+ 	return err;
+ }
+@@ -445,18 +445,20 @@ static int ctr_crypt(struct blkcipher_desc *desc,
+ 	err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
+ 	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ 
+-	kernel_fpu_begin();
+ 	while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
++		kernel_fpu_begin();
+ 		aesni_ctr_enc(ctx, walk.dst.virt.addr, walk.src.virt.addr,
+ 			      nbytes & AES_BLOCK_MASK, walk.iv);
++		kernel_fpu_end();
+ 		nbytes &= AES_BLOCK_SIZE - 1;
+ 		err = blkcipher_walk_done(desc, &walk, nbytes);
+ 	}
+ 	if (walk.nbytes) {
++		kernel_fpu_begin();
+ 		ctr_crypt_final(ctx, &walk);
++		kernel_fpu_end();
+ 		err = blkcipher_walk_done(desc, &walk, 0);
+ 	}
+-	kernel_fpu_end();
+ 
+ 	return err;
+ }
+-- 
+1.7.10
+

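The restructuring above moves kernel_fpu_begin()/kernel_fpu_end() inside the walk loop, so the preemption-disabled region covers one cipher block instead of the whole request. A minimal sketch of the resulting latency bound, with the FPU region modelled as a counter (the block size and byte counts are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Model of the patch above: enter and leave the FPU region (a
 * preempt-disabled section in the kernel) once per block, so the
 * longest non-preemptible stretch is bounded by one block. */
static int fpu_sections;                /* how often the region was entered */
static size_t max_bytes_per_section;    /* largest single region */

static void fpu_begin(void) { fpu_sections++; }
static void fpu_end(void)   { }

static void crypt_walk(size_t total, size_t block)
{
    while (total) {
        size_t n = total < block ? total : block;
        fpu_begin();
        /* ... per-block cipher work would run here ... */
        if (n > max_bytes_per_section)
            max_bytes_per_section = n;
        fpu_end();
        total -= n;
    }
}
```

With a 100-byte request and 16-byte blocks, the region is entered seven times (six full blocks plus a 4-byte tail) but never held for more than 16 bytes of work.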
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0252-dm-Make-rt-aware.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0252-dm-Make-rt-aware.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0252-dm-Make-rt-aware.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0252-dm-Make-rt-aware.patch)
@@ -0,0 +1,40 @@
+From 1e586b1cc36f241bd94af1232e357e621fcc75e2 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Mon, 14 Nov 2011 23:06:09 +0100
+Subject: [PATCH 252/271] dm: Make rt aware
+
+Use the BUG_ON_NONRT variant for the irqs_disabled() checks. RT has
+interrupts legitimately enabled here as we can't deadlock against the
+irq thread due to the "sleeping spinlocks" conversion.
+
+Reported-by: Luis Claudio R. Goncalves <lclaudio at uudg.org>
+Cc: stable-rt at vger.kernel.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ drivers/md/dm.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 4720f68..b1eff42 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1648,14 +1648,14 @@ static void dm_request_fn(struct request_queue *q)
+ 		if (map_request(ti, clone, md))
+ 			goto requeued;
+ 
+-		BUG_ON(!irqs_disabled());
++		BUG_ON_NONRT(!irqs_disabled());
+ 		spin_lock(q->queue_lock);
+ 	}
+ 
+ 	goto out;
+ 
+ requeued:
+-	BUG_ON(!irqs_disabled());
++	BUG_ON_NONRT(!irqs_disabled());
+ 	spin_lock(q->queue_lock);
+ 
+ delay_and_out:
+-- 
+1.7.10
+

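The patch above replaces BUG_ON() with BUG_ON_NONRT(), an assertion that is only checked when PREEMPT_RT is not enabled, because the sleeping-spinlock conversion legitimately leaves interrupts on where mainline expects them off. A minimal userspace model of that semantics — PREEMPT_RT_FULL and bug_triggered are stand-ins for the real config option and BUG() machinery:

```c
#include <assert.h>

/* Model: on RT (PREEMPT_RT_FULL defined) the condition is deliberately
 * ignored; on non-RT it would fire like a normal BUG_ON(). */
#define PREEMPT_RT_FULL 1

static int bug_triggered;

static void bug_on_nonrt(int cond)
{
#ifndef PREEMPT_RT_FULL
    if (cond)
        bug_triggered = 1;   /* would be BUG() in the kernel */
#else
    (void)cond;              /* condition deliberately ignored on RT */
#endif
}
```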
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0253-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0253-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0253-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0253-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch)
@@ -0,0 +1,42 @@
+From d3c201cf28f41cc64c38cc93dcd6461a99b074e4 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 14 Dec 2011 01:03:49 +0100
+Subject: [PATCH 253/271] cpumask: Disable CONFIG_CPUMASK_OFFSTACK for RT
+
+We can't deal with the cpumask allocations which happen in atomic
+context (see arch/x86/kernel/apic/io_apic.c) on RT right now.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/x86/Kconfig |    2 +-
+ lib/Kconfig      |    1 +
+ 2 files changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index e084a73..c42146e 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -730,7 +730,7 @@ config IOMMU_HELPER
+ config MAXSMP
+ 	bool "Enable Maximum number of SMP Processors and NUMA Nodes"
+ 	depends on X86_64 && SMP && DEBUG_KERNEL && EXPERIMENTAL
+-	select CPUMASK_OFFSTACK
++	select CPUMASK_OFFSTACK if !PREEMPT_RT_FULL
+ 	---help---
+ 	  Enable maximum number of CPUS and NUMA Nodes for this architecture.
+ 	  If unsure, say N.
+diff --git a/lib/Kconfig b/lib/Kconfig
+index 32f3e5a..63d81e8 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -231,6 +231,7 @@ config CHECK_SIGNATURE
+ 
+ config CPUMASK_OFFSTACK
+ 	bool "Force CPU masks off stack" if DEBUG_PER_CPU_MAPS
++	depends on !PREEMPT_RT_FULL
+ 	help
+ 	  Use dynamic allocation for cpumask_var_t, instead of putting
+ 	  them on the stack.  This is a bit more expensive, but avoids
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0254-seqlock-Prevent-rt-starvation.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0254-seqlock-Prevent-rt-starvation.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0254-seqlock-Prevent-rt-starvation.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0254-seqlock-Prevent-rt-starvation.patch)
@@ -0,0 +1,171 @@
+From dc277883a518b1b7574e6a17b13ce418f958b9ac Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 22 Feb 2012 12:03:30 +0100
+Subject: [PATCH 254/271] seqlock: Prevent rt starvation
+
+If a low prio writer gets preempted while holding the seqlock write
+locked, a high prio reader spins forever on RT.
+
+To prevent this let the reader grab the spinlock, so it blocks and
+eventually boosts the writer. This way the writer can proceed and
+endless spinning is prevented.
+
+For seqcount writers we disable preemption over the update code
+path. Thanks to Al Viro for disentangling some VFS code to make that
+possible.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ include/linux/seqlock.h |   55 ++++++++++++++++++++++++++++++++++++++---------
+ include/net/neighbour.h |    2 +-
+ 2 files changed, 46 insertions(+), 11 deletions(-)
+
+diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
+index cc7b65d..4ea1409 100644
+--- a/include/linux/seqlock.h
++++ b/include/linux/seqlock.h
+@@ -125,18 +125,30 @@ static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+  * Sequence counter only version assumes that callers are using their
+  * own mutexing.
+  */
+-static inline void write_seqcount_begin(seqcount_t *s)
++static inline void __write_seqcount_begin(seqcount_t *s)
+ {
+ 	s->sequence++;
+ 	smp_wmb();
+ }
+ 
+-static inline void write_seqcount_end(seqcount_t *s)
++static inline void write_seqcount_begin(seqcount_t *s)
++{
++	preempt_disable_rt();
++	__write_seqcount_begin(s);
++}
++
++static inline void __write_seqcount_end(seqcount_t *s)
+ {
+ 	smp_wmb();
+ 	s->sequence++;
+ }
+ 
++static inline void write_seqcount_end(seqcount_t *s)
++{
++	__write_seqcount_end(s);
++	preempt_enable_rt();
++}
++
+ /**
+  * write_seqcount_barrier - invalidate in-progress read-side seq operations
+  * @s: pointer to seqcount_t
+@@ -177,10 +189,33 @@ typedef struct {
+ /*
+  * Read side functions for starting and finalizing a read side section.
+  */
++#ifndef CONFIG_PREEMPT_RT_FULL
+ static inline unsigned read_seqbegin(const seqlock_t *sl)
+ {
+ 	return read_seqcount_begin(&sl->seqcount);
+ }
++#else
++/*
++ * Starvation safe read side for RT
++ */
++static inline unsigned read_seqbegin(seqlock_t *sl)
++{
++	unsigned ret;
++
++repeat:
++	ret = sl->seqcount.sequence;
++	if (unlikely(ret & 1)) {
++		/*
++		 * Take the lock and let the writer proceed (i.e. possibly
++		 * boost it), otherwise we could loop here forever.
++		 */
++		spin_lock(&sl->lock);
++		spin_unlock(&sl->lock);
++		goto repeat;
++	}
++	return ret;
++}
++#endif
+ 
+ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+ {
+@@ -195,36 +230,36 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+ static inline void write_seqlock(seqlock_t *sl)
+ {
+ 	spin_lock(&sl->lock);
+-	write_seqcount_begin(&sl->seqcount);
++	__write_seqcount_begin(&sl->seqcount);
+ }
+ 
+ static inline void write_sequnlock(seqlock_t *sl)
+ {
+-	write_seqcount_end(&sl->seqcount);
++	__write_seqcount_end(&sl->seqcount);
+ 	spin_unlock(&sl->lock);
+ }
+ 
+ static inline void write_seqlock_bh(seqlock_t *sl)
+ {
+ 	spin_lock_bh(&sl->lock);
+-	write_seqcount_begin(&sl->seqcount);
++	__write_seqcount_begin(&sl->seqcount);
+ }
+ 
+ static inline void write_sequnlock_bh(seqlock_t *sl)
+ {
+-	write_seqcount_end(&sl->seqcount);
++	__write_seqcount_end(&sl->seqcount);
+ 	spin_unlock_bh(&sl->lock);
+ }
+ 
+ static inline void write_seqlock_irq(seqlock_t *sl)
+ {
+ 	spin_lock_irq(&sl->lock);
+-	write_seqcount_begin(&sl->seqcount);
++	__write_seqcount_begin(&sl->seqcount);
+ }
+ 
+ static inline void write_sequnlock_irq(seqlock_t *sl)
+ {
+-	write_seqcount_end(&sl->seqcount);
++	__write_seqcount_end(&sl->seqcount);
+ 	spin_unlock_irq(&sl->lock);
+ }
+ 
+@@ -233,7 +268,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&sl->lock, flags);
+-	write_seqcount_begin(&sl->seqcount);
++	__write_seqcount_begin(&sl->seqcount);
+ 	return flags;
+ }
+ 
+@@ -243,7 +278,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
+ static inline void
+ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
+ {
+-	write_seqcount_end(&sl->seqcount);
++	__write_seqcount_end(&sl->seqcount);
+ 	spin_unlock_irqrestore(&sl->lock, flags);
+ }
+ 
+diff --git a/include/net/neighbour.h b/include/net/neighbour.h
+index 2720884..6fda9fa 100644
+--- a/include/net/neighbour.h
++++ b/include/net/neighbour.h
+@@ -385,7 +385,7 @@ struct neighbour_cb {
+ 
+ #define NEIGH_CB(skb)	((struct neighbour_cb *)(skb)->cb)
+ 
+-static inline void neigh_ha_snapshot(char *dst, const struct neighbour *n,
++static inline void neigh_ha_snapshot(char *dst, struct neighbour *n,
+ 				     const struct net_device *dev)
+ {
+ 	unsigned int seq;
+-- 
+1.7.10
+

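The core of the seqcount side of the patch above is the begin/retry protocol: a writer makes the sequence odd for the duration of the update, and a reader must retry if the sequence changed (or was odd) across its critical section. A single-threaded userspace sketch of that protocol — note the kernel's read_seqcount_begin() spins until the count is even, while this model simply returns a value that forces a retry:

```c
#include <assert.h>

/* Minimal model of the seqcount read/write protocol.  The RT-specific
 * step the patch adds (taking the writer's spinlock so the writer gets
 * priority-boosted instead of being spun on) is elided. */
typedef struct { unsigned sequence; } seqcount_t;

static unsigned read_seqcount_begin(const seqcount_t *s)
{
    return s->sequence & ~1u;   /* an odd count forces a retry below */
}

static int read_seqcount_retry(const seqcount_t *s, unsigned start)
{
    return s->sequence != start;
}

static void write_seqcount_begin(seqcount_t *s) { s->sequence++; }
static void write_seqcount_end(seqcount_t *s)   { s->sequence++; }
```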
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0255-timer-Fix-hotplug-for-rt.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0255-timer-Fix-hotplug-for-rt.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0255-timer-Fix-hotplug-for-rt.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0255-timer-Fix-hotplug-for-rt.patch)
@@ -0,0 +1,76 @@
+From 5d0a85c358bd29ce6a67afdbf8922ad39191e914 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Thu, 1 Mar 2012 13:55:28 -0500
+Subject: [PATCH 255/271] timer: Fix hotplug for -rt
+
+Revert the RT patch:
+    Author: Ingo Molnar <mingo at elte.hu>
+    Date:   Fri Jul 3 08:30:32 2009 -0500
+    timers: fix timer hotplug on -rt
+
+    Here we are in the CPU_DEAD notifier, and we must not sleep nor
+    enable interrupts.
+
+There's no problem with sleeping in this notifier.
+
+But the get_cpu_var() had to be converted to a get_local_var().
+
+Replace the previous fix with the get_local_var() conversion.
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Cc: Carsten Emde <C.Emde at osadl.org>
+Cc: John Kacur <jkacur at redhat.com>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <clark.williams at gmail.com>
+Cc: stable-rt at vger.kernel.org
+Link: http://lkml.kernel.org/r/20120301190344.948157137@goodmis.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/timer.c |   16 +++++-----------
+ 1 file changed, 5 insertions(+), 11 deletions(-)
+
+diff --git a/kernel/timer.c b/kernel/timer.c
+index a791a43..c8c88d1 100644
+--- a/kernel/timer.c
++++ b/kernel/timer.c
+@@ -1743,21 +1743,17 @@ static void __cpuinit migrate_timers(int cpu)
+ {
+ 	struct tvec_base *old_base;
+ 	struct tvec_base *new_base;
+-	unsigned long flags;
+ 	int i;
+ 
+ 	BUG_ON(cpu_online(cpu));
+ 	old_base = per_cpu(tvec_bases, cpu);
+-	new_base = get_cpu_var(tvec_bases);
++	new_base = get_local_var(tvec_bases);
+ 	/*
+ 	 * The caller is globally serialized and nobody else
+ 	 * takes two locks at once, deadlock is not possible.
+ 	 */
+-	local_irq_save(flags);
+-	while (!spin_trylock(&new_base->lock))
+-		cpu_relax();
+-	while (!spin_trylock(&old_base->lock))
+-		cpu_relax();
++	spin_lock_irq(&new_base->lock);
++	spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
+ 
+ 	BUG_ON(old_base->running_timer);
+ 
+@@ -1771,10 +1767,8 @@ static void __cpuinit migrate_timers(int cpu)
+ 	}
+ 
+ 	spin_unlock(&old_base->lock);
+-	spin_unlock(&new_base->lock);
+-	local_irq_restore(flags);
+-
+-	put_cpu_var(tvec_bases);
++	spin_unlock_irq(&new_base->lock);
++	put_local_var(tvec_bases);
+ }
+ #endif /* CONFIG_HOTPLUG_CPU */
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0256-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0256-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0256-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0256-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch)
@@ -0,0 +1,48 @@
+From 8730036ba9154e0c287f333118521680289bd8d6 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Thu, 1 Mar 2012 13:55:29 -0500
+Subject: [PATCH 256/271] futex/rt: Fix possible lockup when taking pi_lock in
+ proxy handler
+
+When taking the pi_lock, we must disable interrupts because the
+pi_lock can also be taken in an interrupt handler.
+
+Use raw_spin_lock_irq() instead of raw_spin_lock().
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Cc: Carsten Emde <C.Emde at osadl.org>
+Cc: John Kacur <jkacur at redhat.com>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <clark.williams at gmail.com>
+Cc: stable-rt at vger.kernel.org
+Link: http://lkml.kernel.org/r/20120301190345.165160680@goodmis.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/rtmutex.c |    6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
+index 9850dc0..b525158 100644
+--- a/kernel/rtmutex.c
++++ b/kernel/rtmutex.c
+@@ -1373,14 +1373,14 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
+ 	 * PI_REQUEUE_INPROGRESS, so that if the task is waking up
+ 	 * it will know that we are in the process of requeuing it.
+ 	 */
+-	raw_spin_lock(&task->pi_lock);
++	raw_spin_lock_irq(&task->pi_lock);
+ 	if (task->pi_blocked_on) {
+-		raw_spin_unlock(&task->pi_lock);
++		raw_spin_unlock_irq(&task->pi_lock);
+ 		raw_spin_unlock(&lock->wait_lock);
+ 		return -EAGAIN;
+ 	}
+ 	task->pi_blocked_on = PI_REQUEUE_INPROGRESS;
+-	raw_spin_unlock(&task->pi_lock);
++	raw_spin_unlock_irq(&task->pi_lock);
+ #endif
+ 
+ 	ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock);
+-- 
+1.7.10
+

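The fix above is the classic rule that a lock also taken from interrupt context must be acquired with interrupts disabled, otherwise an interrupt can preempt the holder on the same CPU and deadlock trying to take the lock again. A small userspace model of why the _irq variant closes the window — irqs_enabled and pi_lock_held are stand-ins for the real CPU and lock state:

```c
#include <assert.h>

/* Model of raw_spin_lock_irq(): disable interrupts, then take the
 * lock; the unlock path releases in the opposite order. */
static int irqs_enabled = 1;
static int pi_lock_held;

static void raw_spin_lock_irq(void)   { irqs_enabled = 0; pi_lock_held = 1; }
static void raw_spin_unlock_irq(void) { pi_lock_held = 0; irqs_enabled = 1; }

/* An interrupt can only fire while irqs_enabled; with the _irq variant
 * it can therefore never observe the lock held on this CPU, so the
 * handler can never self-deadlock on it. */
static int irq_can_deadlock(void)
{
    return irqs_enabled && pi_lock_held;
}
```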
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0257-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0257-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0257-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0257-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch)
@@ -0,0 +1,38 @@
+From ae5e69d73d0b9ebf678376af524bd9273b951914 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Thu, 1 Mar 2012 13:55:32 -0500
+Subject: [PATCH 257/271] ring-buffer/rt: Check for irqs disabled before
+ grabbing reader lock
+
+In RT the reader lock is a mutex and we cannot grab it when preemption is
+disabled. The in_atomic() check that is there does not check if irqs are
+disabled. Add that check as well.
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Cc: Carsten Emde <C.Emde at osadl.org>
+Cc: John Kacur <jkacur at redhat.com>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <clark.williams at gmail.com>
+Cc: stable-rt at vger.kernel.org
+Link: http://lkml.kernel.org/r/20120301190345.786365803@goodmis.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/trace/ring_buffer.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 354017f..c060f04 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1054,7 +1054,7 @@ static inline int ok_to_lock(void)
+ 	if (in_nmi())
+ 		return 0;
+ #ifdef CONFIG_PREEMPT_RT_FULL
+-	if (in_atomic())
++	if (in_atomic() || irqs_disabled())
+ 		return 0;
+ #endif
+ 	return 1;
+-- 
+1.7.10
+

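The one-line change above widens the RT guard in ok_to_lock(): a sleeping mutex may not be taken from NMI context, from atomic context, or — the newly added case — with interrupts disabled. A direct userspace model of the predicate, with the three context tests passed in as flags instead of the kernel's in_nmi()/in_atomic()/irqs_disabled():

```c
#include <assert.h>

/* Model of ok_to_lock() after the patch: refuse to take the (sleeping)
 * reader lock in any context where sleeping is forbidden. */
static int ok_to_lock(int in_nmi, int in_atomic, int irqs_disabled)
{
    if (in_nmi)
        return 0;
    /* CONFIG_PREEMPT_RT_FULL: the mutex could sleep here */
    if (in_atomic || irqs_disabled)
        return 0;
    return 1;
}
```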
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0258-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0258-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0258-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0258-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch)
@@ -0,0 +1,53 @@
+From fac6226570ae53ea221e39d8aea2c2a517ae0e7f Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Thu, 1 Mar 2012 13:55:33 -0500
+Subject: [PATCH 258/271] sched/rt: Fix wait_task_interactive() to test
+ rt_spin_lock state
+
+The wait_task_inactive() will have a task sleep waiting for another
+task to have a certain state. But it ignores the rt_spin_locks state
+and can return with an incorrect result if the task it is waiting
+for is blocked on an rt_spin_lock() and is waking up.
+
+The rt_spin_locks save the task's state in the saved_state field
+and the wait_task_inactive() must also test that state.
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Cc: Carsten Emde <C.Emde at osadl.org>
+Cc: John Kacur <jkacur at redhat.com>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <clark.williams at gmail.com>
+Cc: stable-rt at vger.kernel.org
+Link: http://lkml.kernel.org/r/20120301190345.979435764@goodmis.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/sched.c |    6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 316205e..95ae97c 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -2450,7 +2450,8 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state)
+ 		 * is actually now running somewhere else!
+ 		 */
+ 		while (task_running(rq, p)) {
+-			if (match_state && unlikely(p->state != match_state))
++			if (match_state && unlikely(p->state != match_state)
++			    && unlikely(p->saved_state != match_state))
+ 				return 0;
+ 			cpu_relax();
+ 		}
+@@ -2465,7 +2466,8 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state)
+ 		running = task_running(rq, p);
+ 		on_rq = p->on_rq;
+ 		ncsw = 0;
+-		if (!match_state || p->state == match_state)
++		if (!match_state || p->state == match_state
++		    || p->saved_state == match_state)
+ 			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
+ 		task_rq_unlock(rq, p, &flags);
+ 
+-- 
+1.7.10
+

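The widened test above accepts a match against either p->state or p->saved_state, since a task blocked on an rt_spin_lock keeps its original state in saved_state. A minimal model of that predicate — the struct and the numeric state values are illustrative, not the kernel's task_struct:

```c
#include <assert.h>

#define TASK_RUNNING        0
#define TASK_INTERRUPTIBLE  1

struct task_model {
    long state;
    long saved_state;   /* stashed by RT's sleeping-spinlock code */
};

/* Mirrors the patched check in wait_task_inactive(): match_state == 0
 * means "any state"; otherwise either field may match. */
static int state_matches(const struct task_model *p, long match_state)
{
    if (!match_state)
        return 1;
    return p->state == match_state || p->saved_state == match_state;
}
```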
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0259-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0259-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0259-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0259-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch)
@@ -0,0 +1,112 @@
+From baa89d4c448bb48defdfe2d8b23a8255155bb8da Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Thu, 1 Mar 2012 13:55:30 -0500
+Subject: [PATCH 259/271] lglock/rt: Use non-rt for_each_cpu() in -rt code
+
+Currently the RT version of the lglocks() does a for_each_online_cpu()
+in the name##_global_lock_online() functions. Non-rt uses its own
+mask for this, and for good reason.
+
+A task may grab a *_global_lock_online(), and in the meantime, one
+of the CPUs goes offline. Now when that task does a *_global_unlock_online()
+it releases all the locks *except* the one that went offline.
+
+Now if that CPU were to come back on line, its lock is now owned by a
+task that never released it when it should have.
+
+This causes all sorts of fun errors, like owners of a lock no longer
+existing, or sleeping on IO, waiting to be woken up by a task that
+happens to be blocked on the lock it never released.
+
+Convert the RT versions to use the lglock-specific cpumasks. Once
+a CPU comes online, the mask is set, and never cleared even when the
+CPU goes offline. The locks for that CPU will still be taken and released.
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Cc: Carsten Emde <C.Emde at osadl.org>
+Cc: John Kacur <jkacur at redhat.com>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <clark.williams at gmail.com>
+Cc: stable-rt at vger.kernel.org
+Link: http://lkml.kernel.org/r/20120301190345.374756214@goodmis.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ include/linux/lglock.h |   35 ++++++++++++++++++++++++++++++++---
+ 1 file changed, 32 insertions(+), 3 deletions(-)
+
+diff --git a/include/linux/lglock.h b/include/linux/lglock.h
+index 52b289f..cdfcef3 100644
+--- a/include/linux/lglock.h
++++ b/include/linux/lglock.h
+@@ -203,9 +203,31 @@
+ #else /* !PREEMPT_RT_FULL */
+ #define DEFINE_LGLOCK(name)						\
+ 									\
+- DEFINE_PER_CPU(struct rt_mutex, name##_lock);					\
++ DEFINE_PER_CPU(struct rt_mutex, name##_lock);				\
++ DEFINE_SPINLOCK(name##_cpu_lock);					\
++ cpumask_t name##_cpus __read_mostly;					\
+  DEFINE_LGLOCK_LOCKDEP(name);						\
+ 									\
++ static int								\
++ name##_lg_cpu_callback(struct notifier_block *nb,			\
++				unsigned long action, void *hcpu)	\
++ {									\
++	switch (action & ~CPU_TASKS_FROZEN) {				\
++	case CPU_UP_PREPARE:						\
++		spin_lock(&name##_cpu_lock);				\
++		cpu_set((unsigned long)hcpu, name##_cpus);		\
++		spin_unlock(&name##_cpu_lock);				\
++		break;							\
++	case CPU_UP_CANCELED: case CPU_DEAD:				\
++		spin_lock(&name##_cpu_lock);				\
++		cpu_clear((unsigned long)hcpu, name##_cpus);		\
++		spin_unlock(&name##_cpu_lock);				\
++	}								\
++	return NOTIFY_OK;						\
++ }									\
++ static struct notifier_block name##_lg_cpu_notifier = {		\
++	.notifier_call = name##_lg_cpu_callback,			\
++ };									\
+  void name##_lock_init(void) {						\
+ 	int i;								\
+ 	LOCKDEP_INIT_MAP(&name##_lock_dep_map, #name, &name##_lock_key, 0); \
+@@ -214,6 +236,11 @@
+ 		lock = &per_cpu(name##_lock, i);			\
+ 		rt_mutex_init(lock);					\
+ 	}								\
++	register_hotcpu_notifier(&name##_lg_cpu_notifier);		\
++	get_online_cpus();						\
++	for_each_online_cpu(i)						\
++		cpu_set(i, name##_cpus);				\
++	put_online_cpus();						\
+  }									\
+  EXPORT_SYMBOL(name##_lock_init);					\
+ 									\
+@@ -254,7 +281,8 @@
+  void name##_global_lock_online(void) {					\
+ 	int i;								\
+ 	rwlock_acquire(&name##_lock_dep_map, 0, 0, _RET_IP_);		\
+-	for_each_online_cpu(i) {					\
++	spin_lock(&name##_cpu_lock);					\
++	for_each_cpu(i, &name##_cpus) {					\
+ 		struct rt_mutex *lock;					\
+ 		lock = &per_cpu(name##_lock, i);			\
+ 		__rt_spin_lock(lock);					\
+@@ -265,11 +293,12 @@
+  void name##_global_unlock_online(void) {				\
+ 	int i;								\
+ 	rwlock_release(&name##_lock_dep_map, 1, _RET_IP_);		\
+-	for_each_online_cpu(i) {					\
++	for_each_cpu(i, &name##_cpus) {					\
+ 		struct rt_mutex *lock;					\
+ 		lock = &per_cpu(name##_lock, i);			\
+ 		__rt_spin_unlock(lock);					\
+ 	}								\
++	spin_unlock(&name##_cpu_lock);					\
+  }									\
+  EXPORT_SYMBOL(name##_global_unlock_online);				\
+ 									\
+-- 
+1.7.10
+

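The lglock patch above closes a hotplug race by tracking the participating CPUs in a cpumask that is itself guarded by a spinlock held across the whole lock-all/unlock-all pair, so the set of locks taken always matches the set released. As a rough userspace sketch of that idea (not the kernel API — `NLOCKS`, the function names, and the pthread primitives are all stand-ins), the same pattern looks like this:

```c
#include <pthread.h>
#include <stdbool.h>

/* Userspace analogue of the patched DEFINE_LGLOCK: a fixed array of
 * per-"CPU" locks plus a participants mask guarded by its own lock. */
#define NLOCKS 8

static pthread_mutex_t locks[NLOCKS];
static pthread_mutex_t mask_lock = PTHREAD_MUTEX_INITIALIZER;
static bool online[NLOCKS];

void lg_lock_init(void)
{
    for (int i = 0; i < NLOCKS; i++)
        pthread_mutex_init(&locks[i], NULL);
}

/* "Hotplug" callbacks: add/remove a slot under mask_lock, as the
 * patch's CPU_UP_PREPARE / CPU_DEAD notifier cases do. */
void lg_cpu_up(int cpu)
{
    pthread_mutex_lock(&mask_lock);
    online[cpu] = true;
    pthread_mutex_unlock(&mask_lock);
}

void lg_cpu_down(int cpu)
{
    pthread_mutex_lock(&mask_lock);
    online[cpu] = false;
    pthread_mutex_unlock(&mask_lock);
}

int lg_online_count(void)
{
    int n = 0;
    pthread_mutex_lock(&mask_lock);
    for (int i = 0; i < NLOCKS; i++)
        n += online[i];
    pthread_mutex_unlock(&mask_lock);
    return n;
}

/* Lock every participating slot.  mask_lock stays held until the
 * matching unlock, so both passes walk the identical set of slots
 * even if a "CPU" tries to come or go in between. */
void lg_global_lock_online(void)
{
    pthread_mutex_lock(&mask_lock);
    for (int i = 0; i < NLOCKS; i++)
        if (online[i])
            pthread_mutex_lock(&locks[i]);
}

void lg_global_unlock_online(void)
{
    for (int i = 0; i < NLOCKS; i++)
        if (online[i])
            pthread_mutex_unlock(&locks[i]);
    pthread_mutex_unlock(&mask_lock);
}
```

The key design point mirrors the diff: without holding the mask lock across the whole critical section, a CPU going offline between `lock` and `unlock` would leave a lock taken but never released.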
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0260-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0260-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0260-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0260-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch)
@@ -0,0 +1,127 @@
+From 08ff6d2aa737b0fd586ed8a7ec7f03fe11150082 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt at goodmis.org>
+Date: Fri, 2 Mar 2012 10:36:57 -0500
+Subject: [PATCH 260/271] cpu: Make hotplug.lock a "sleeping" spinlock on RT
+
+Tasks can block on hotplug.lock in pin_current_cpu(), but their state
+might be != RUNNING. So the mutex wakeup will set the state
+unconditionally to RUNNING. That might cause spurious unexpected
+wakeups. We could provide a state preserving mutex_lock() function,
+but this is semantically backwards. So instead we convert the
+hotplug.lock() to a spinlock for RT, which has the state preserving
+semantics already.
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+Cc: Carsten Emde <C.Emde at osadl.org>
+Cc: John Kacur <jkacur at redhat.com>
+Cc: Peter Zijlstra <peterz at infradead.org>
+Cc: Clark Williams <clark.williams at gmail.com>
+Cc: stable-rt at vger.kernel.org
+Link: http://lkml.kernel.org/r/1330702617.25686.265.camel@gandalf.stny.rr.com
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ kernel/cpu.c |   35 ++++++++++++++++++++++++++---------
+ 1 file changed, 26 insertions(+), 9 deletions(-)
+
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index fa40834..66dfb74 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -46,7 +46,12 @@ static int cpu_hotplug_disabled;
+ 
+ static struct {
+ 	struct task_struct *active_writer;
++#ifdef CONFIG_PREEMPT_RT_FULL
++	/* Makes the lock keep the task's state */
++	spinlock_t lock;
++#else
+ 	struct mutex lock; /* Synchronizes accesses to refcount, */
++#endif
+ 	/*
+ 	 * Also blocks the new readers during
+ 	 * an ongoing cpu hotplug operation.
+@@ -54,10 +59,22 @@ static struct {
+ 	int refcount;
+ } cpu_hotplug = {
+ 	.active_writer = NULL,
++#ifdef CONFIG_PREEMPT_RT_FULL
++	.lock = __SPIN_LOCK_UNLOCKED(cpu_hotplug.lock),
++#else
+ 	.lock = __MUTEX_INITIALIZER(cpu_hotplug.lock),
++#endif
+ 	.refcount = 0,
+ };
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define hotplug_lock() rt_spin_lock(&cpu_hotplug.lock)
++# define hotplug_unlock() rt_spin_unlock(&cpu_hotplug.lock)
++#else
++# define hotplug_lock() mutex_lock(&cpu_hotplug.lock)
++# define hotplug_unlock() mutex_unlock(&cpu_hotplug.lock)
++#endif
++
+ struct hotplug_pcp {
+ 	struct task_struct *unplug;
+ 	int refcount;
+@@ -87,8 +104,8 @@ retry:
+ 		return;
+ 	}
+ 	preempt_enable();
+-	mutex_lock(&cpu_hotplug.lock);
+-	mutex_unlock(&cpu_hotplug.lock);
++	hotplug_lock();
++	hotplug_unlock();
+ 	preempt_disable();
+ 	goto retry;
+ }
+@@ -161,9 +178,9 @@ void get_online_cpus(void)
+ 	might_sleep();
+ 	if (cpu_hotplug.active_writer == current)
+ 		return;
+-	mutex_lock(&cpu_hotplug.lock);
++	hotplug_lock();
+ 	cpu_hotplug.refcount++;
+-	mutex_unlock(&cpu_hotplug.lock);
++	hotplug_unlock();
+ 
+ }
+ EXPORT_SYMBOL_GPL(get_online_cpus);
+@@ -172,10 +189,10 @@ void put_online_cpus(void)
+ {
+ 	if (cpu_hotplug.active_writer == current)
+ 		return;
+-	mutex_lock(&cpu_hotplug.lock);
++	hotplug_lock();
+ 	if (!--cpu_hotplug.refcount && unlikely(cpu_hotplug.active_writer))
+ 		wake_up_process(cpu_hotplug.active_writer);
+-	mutex_unlock(&cpu_hotplug.lock);
++	hotplug_unlock();
+ 
+ }
+ EXPORT_SYMBOL_GPL(put_online_cpus);
+@@ -207,11 +224,11 @@ static void cpu_hotplug_begin(void)
+ 	cpu_hotplug.active_writer = current;
+ 
+ 	for (;;) {
+-		mutex_lock(&cpu_hotplug.lock);
++		hotplug_lock();
+ 		if (likely(!cpu_hotplug.refcount))
+ 			break;
+ 		__set_current_state(TASK_UNINTERRUPTIBLE);
+-		mutex_unlock(&cpu_hotplug.lock);
++		hotplug_unlock();
+ 		schedule();
+ 	}
+ }
+@@ -219,7 +236,7 @@ static void cpu_hotplug_begin(void)
+ static void cpu_hotplug_done(void)
+ {
+ 	cpu_hotplug.active_writer = NULL;
+-	mutex_unlock(&cpu_hotplug.lock);
++	hotplug_unlock();
+ }
+ 
+ #else /* #if CONFIG_HOTPLUG_CPU */
+-- 
+1.7.10
+

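The hotplug.lock patch above swaps the lock type at build time while keeping every call site unchanged, by routing all acquisitions through a single `hotplug_lock()`/`hotplug_unlock()` macro pair. A minimal userspace sketch of that `#ifdef` pattern (the `CONFIG_PREEMPT_RT_FULL` switch and the refcount helpers are illustrative assumptions, not kernel code):

```c
#include <pthread.h>

/* One macro pair hides whether the primitive is a sleeping mutex or,
 * on the assumed RT build, a spinlock.  (pthread_spin_init() is
 * omitted in this sketch; a real program must call it.) */
#ifdef CONFIG_PREEMPT_RT_FULL
static pthread_spinlock_t hp_lock;
# define hotplug_lock()   pthread_spin_lock(&hp_lock)
# define hotplug_unlock() pthread_spin_unlock(&hp_lock)
#else
static pthread_mutex_t hp_lock = PTHREAD_MUTEX_INITIALIZER;
# define hotplug_lock()   pthread_mutex_lock(&hp_lock)
# define hotplug_unlock() pthread_mutex_unlock(&hp_lock)
#endif

static int refcount;

/* Call sites are written once against the macros, exactly as the
 * patch rewrites get_online_cpus()/put_online_cpus(). */
void get_ref(void)
{
    hotplug_lock();
    refcount++;
    hotplug_unlock();
}

int put_ref(void)
{
    hotplug_lock();
    int r = --refcount;
    hotplug_unlock();
    return r;
}
```

This is why the diff touches many call sites but each change is mechanical: only the macro definitions encode the RT/non-RT decision.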
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0261-softirq-Check-preemption-after-reenabling-interrupts.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0261-softirq-Check-preemption-after-reenabling-interrupts.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0261-softirq-Check-preemption-after-reenabling-interrupts.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0261-softirq-Check-preemption-after-reenabling-interrupts.patch)
@@ -0,0 +1,158 @@
+From 4101379a8b864056d0b5e87de052d212f1358279 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 13 Nov 2011 17:17:09 +0100
+Subject: [PATCH 261/271] softirq: Check preemption after reenabling
+ interrupts
+
+raise_softirq_irqoff() disables interrupts and wakes the softirq
+daemon, but after reenabling interrupts there is no preemption check,
+so the execution of the softirq thread might be delayed arbitrarily.
+
+In principle we could add that check to local_irq_enable/restore, but
+that's overkill as the raise_softirq_irqoff() sections are the only

+ones which show this behaviour.
+
+Reported-by: Carsten Emde <cbe at osadl.org>
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ block/blk-iopoll.c      |    3 +++
+ block/blk-softirq.c     |    3 +++
+ include/linux/preempt.h |    3 +++
+ net/core/dev.c          |    6 ++++++
+ 4 files changed, 15 insertions(+)
+
+diff --git a/block/blk-iopoll.c b/block/blk-iopoll.c
+index 58916af..f7ca9b4 100644
+--- a/block/blk-iopoll.c
++++ b/block/blk-iopoll.c
+@@ -38,6 +38,7 @@ void blk_iopoll_sched(struct blk_iopoll *iop)
+ 	list_add_tail(&iop->list, &__get_cpu_var(blk_cpu_iopoll));
+ 	__raise_softirq_irqoff(BLOCK_IOPOLL_SOFTIRQ);
+ 	local_irq_restore(flags);
++	preempt_check_resched_rt();
+ }
+ EXPORT_SYMBOL(blk_iopoll_sched);
+ 
+@@ -135,6 +136,7 @@ static void blk_iopoll_softirq(struct softirq_action *h)
+ 		__raise_softirq_irqoff(BLOCK_IOPOLL_SOFTIRQ);
+ 
+ 	local_irq_enable();
++	preempt_check_resched_rt();
+ }
+ 
+ /**
+@@ -204,6 +206,7 @@ static int __cpuinit blk_iopoll_cpu_notify(struct notifier_block *self,
+ 				 &__get_cpu_var(blk_cpu_iopoll));
+ 		__raise_softirq_irqoff(BLOCK_IOPOLL_SOFTIRQ);
+ 		local_irq_enable();
++		preempt_check_resched_rt();
+ 	}
+ 
+ 	return NOTIFY_OK;
+diff --git a/block/blk-softirq.c b/block/blk-softirq.c
+index 1366a89..60a88ab 100644
+--- a/block/blk-softirq.c
++++ b/block/blk-softirq.c
+@@ -50,6 +50,7 @@ static void trigger_softirq(void *data)
+ 		raise_softirq_irqoff(BLOCK_SOFTIRQ);
+ 
+ 	local_irq_restore(flags);
++	preempt_check_resched_rt();
+ }
+ 
+ /*
+@@ -92,6 +93,7 @@ static int __cpuinit blk_cpu_notify(struct notifier_block *self,
+ 				 &__get_cpu_var(blk_cpu_done));
+ 		raise_softirq_irqoff(BLOCK_SOFTIRQ);
+ 		local_irq_enable();
++		preempt_check_resched_rt();
+ 	}
+ 
+ 	return NOTIFY_OK;
+@@ -150,6 +152,7 @@ do_local:
+ 		goto do_local;
+ 
+ 	local_irq_restore(flags);
++	preempt_check_resched_rt();
+ }
+ 
+ /**
+diff --git a/include/linux/preempt.h b/include/linux/preempt.h
+index 6450c01..58d8982 100644
+--- a/include/linux/preempt.h
++++ b/include/linux/preempt.h
+@@ -56,8 +56,10 @@ do { \
+ 
+ #ifndef CONFIG_PREEMPT_RT_BASE
+ # define preempt_enable_no_resched()	__preempt_enable_no_resched()
++# define preempt_check_resched_rt()	do { } while (0)
+ #else
+ # define preempt_enable_no_resched()	preempt_enable()
++# define preempt_check_resched_rt()	preempt_check_resched()
+ #endif
+ 
+ #define preempt_enable() \
+@@ -105,6 +107,7 @@ do { \
+ #define preempt_disable_notrace()		do { } while (0)
+ #define preempt_enable_no_resched_notrace()	do { } while (0)
+ #define preempt_enable_notrace()		do { } while (0)
++#define preempt_check_resched_rt()	do { } while (0)
+ 
+ #endif /* CONFIG_PREEMPT_COUNT */
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 546cc6a..30c7a9e 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1803,6 +1803,7 @@ static inline void __netif_reschedule(struct Qdisc *q)
+ 	sd->output_queue_tailp = &q->next_sched;
+ 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
+ 	local_irq_restore(flags);
++	preempt_check_resched_rt();
+ }
+ 
+ void __netif_schedule(struct Qdisc *q)
+@@ -1824,6 +1825,7 @@ void dev_kfree_skb_irq(struct sk_buff *skb)
+ 		sd->completion_queue = skb;
+ 		raise_softirq_irqoff(NET_TX_SOFTIRQ);
+ 		local_irq_restore(flags);
++		preempt_check_resched_rt();
+ 	}
+ }
+ EXPORT_SYMBOL(dev_kfree_skb_irq);
+@@ -2963,6 +2965,7 @@ enqueue:
+ 	rps_unlock(sd);
+ 
+ 	local_irq_restore(flags);
++	preempt_check_resched_rt();
+ 
+ 	atomic_long_inc(&skb->dev->rx_dropped);
+ 	kfree_skb(skb);
+@@ -3784,6 +3787,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
+ 	} else
+ #endif
+ 		local_irq_enable();
++	preempt_check_resched_rt();
+ }
+ 
+ static int process_backlog(struct napi_struct *napi, int quota)
+@@ -3856,6 +3860,7 @@ void __napi_schedule(struct napi_struct *n)
+ 	local_irq_save(flags);
+ 	____napi_schedule(&__get_cpu_var(softnet_data), n);
+ 	local_irq_restore(flags);
++	preempt_check_resched_rt();
+ }
+ EXPORT_SYMBOL(__napi_schedule);
+ 
+@@ -6364,6 +6369,7 @@ static int dev_cpu_callback(struct notifier_block *nfb,
+ 
+ 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
+ 	local_irq_enable();
++	preempt_check_resched_rt();
+ 
+ 	/* Process offline CPU's input_pkt_queue */
+ 	while ((skb = __skb_dequeue(&oldsd->process_queue))) {
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0262-rt-Introduce-cpu_chill.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0262-rt-Introduce-cpu_chill.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0262-rt-Introduce-cpu_chill.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0262-rt-Introduce-cpu_chill.patch)
@@ -0,0 +1,34 @@
+From eccd9dea38ad49e51ba3397ba247a06c7dc0d827 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 7 Mar 2012 20:51:03 +0100
+Subject: [PATCH 262/271] rt: Introduce cpu_chill()
+
+Retry loops on RT might loop forever when the modifying side was
+preempted. Add cpu_chill() to replace cpu_relax(). cpu_chill()
+defaults to cpu_relax() for non RT. On RT it puts the looping task to
+sleep for a tick so the preempted task can make progress.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ include/linux/delay.h |    6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/include/linux/delay.h b/include/linux/delay.h
+index a6ecb34..e23a7c0 100644
+--- a/include/linux/delay.h
++++ b/include/linux/delay.h
+@@ -52,4 +52,10 @@ static inline void ssleep(unsigned int seconds)
+ 	msleep(seconds * 1000);
+ }
+ 
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define cpu_chill()	msleep(1)
++#else
++# define cpu_chill()	cpu_relax()
++#endif
++
+ #endif /* defined(_LINUX_DELAY_H) */
+-- 
+1.7.10
+

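The cpu_chill() patch above replaces pure busy-waiting with a short sleep on RT so a preempted updater can run. A hedged userspace analogue (the `PREEMPT_RT_FULL` macro, `wait_for_flag()`, and the 1 ms "tick" are all assumptions for illustration; `sched_yield()` stands in for `cpu_relax()`):

```c
#include <sched.h>
#include <time.h>

/* On the assumed "RT" build the waiter sleeps for roughly a tick,
 * letting a preempted writer make progress; otherwise it yields. */
#ifdef PREEMPT_RT_FULL
static inline void cpu_chill(void)
{
    struct timespec ts = { 0, 1000000 }; /* ~1 ms */
    nanosleep(&ts, NULL);
}
#else
static inline void cpu_chill(void)
{
    sched_yield(); /* stand-in for cpu_relax() */
}
#endif

/* A retry loop of the kind the patch targets: spinning here could
 * live-lock on RT if the writer holding things up were preempted. */
int wait_for_flag(volatile int *flag, int max_tries)
{
    int tries = 0;
    while (!*flag && tries++ < max_tries)
        cpu_chill();
    return *flag;
}
```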
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0263-fs-dcache-Use-cpu_chill-in-trylock-loops.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0263-fs-dcache-Use-cpu_chill-in-trylock-loops.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0263-fs-dcache-Use-cpu_chill-in-trylock-loops.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0263-fs-dcache-Use-cpu_chill-in-trylock-loops.patch)
@@ -0,0 +1,106 @@
+From 0319cf12cf8700a10865119336aece0c06b1da3d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 7 Mar 2012 21:00:34 +0100
+Subject: [PATCH 263/271] fs: dcache: Use cpu_chill() in trylock loops
+
+Retry loops on RT might loop forever when the modifying side was
+preempted. Use cpu_chill() instead of cpu_relax() to let the system
+make progress.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ fs/autofs4/autofs_i.h |    1 +
+ fs/autofs4/expire.c   |    2 +-
+ fs/dcache.c           |    7 ++++---
+ fs/namespace.c        |    3 ++-
+ 4 files changed, 8 insertions(+), 5 deletions(-)
+
+diff --git a/fs/autofs4/autofs_i.h b/fs/autofs4/autofs_i.h
+index 650d520..fb77639 100644
+--- a/fs/autofs4/autofs_i.h
++++ b/fs/autofs4/autofs_i.h
+@@ -34,6 +34,7 @@
+ #include <linux/sched.h>
+ #include <linux/mount.h>
+ #include <linux/namei.h>
++#include <linux/delay.h>
+ #include <asm/current.h>
+ #include <asm/uaccess.h>
+ 
+diff --git a/fs/autofs4/expire.c b/fs/autofs4/expire.c
+index 450f529..43afbb0 100644
+--- a/fs/autofs4/expire.c
++++ b/fs/autofs4/expire.c
+@@ -170,7 +170,7 @@ again:
+ 			parent = p->d_parent;
+ 			if (!spin_trylock(&parent->d_lock)) {
+ 				spin_unlock(&p->d_lock);
+-				cpu_relax();
++				cpu_chill();
+ 				goto relock;
+ 			}
+ 			spin_unlock(&p->d_lock);
+diff --git a/fs/dcache.c b/fs/dcache.c
+index eb723d3..d47b266 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -37,6 +37,7 @@
+ #include <linux/rculist_bl.h>
+ #include <linux/prefetch.h>
+ #include <linux/ratelimit.h>
++#include <linux/delay.h>
+ #include "internal.h"
+ 
+ /*
+@@ -410,7 +411,7 @@ static inline struct dentry *dentry_kill(struct dentry *dentry, int ref)
+ 	if (inode && !spin_trylock(&inode->i_lock)) {
+ relock:
+ 		spin_unlock(&dentry->d_lock);
+-		cpu_relax();
++		cpu_chill();
+ 		return dentry; /* try again with same dentry */
+ 	}
+ 	if (IS_ROOT(dentry))
+@@ -796,7 +797,7 @@ relock:
+ 
+ 		if (!spin_trylock(&dentry->d_lock)) {
+ 			spin_unlock(&dcache_lru_lock);
+-			cpu_relax();
++			cpu_chill();
+ 			goto relock;
+ 		}
+ 
+@@ -1975,7 +1976,7 @@ again:
+ 	if (dentry->d_count == 1) {
+ 		if (inode && !spin_trylock(&inode->i_lock)) {
+ 			spin_unlock(&dentry->d_lock);
+-			cpu_relax();
++			cpu_chill();
+ 			goto again;
+ 		}
+ 		dentry->d_flags &= ~DCACHE_CANT_MOUNT;
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 644dbde..c45da77 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -31,6 +31,7 @@
+ #include <linux/idr.h>
+ #include <linux/fs_struct.h>
+ #include <linux/fsnotify.h>
++#include <linux/delay.h>
+ #include <asm/uaccess.h>
+ #include <asm/unistd.h>
+ #include "pnode.h"
+@@ -346,7 +347,7 @@ int mnt_want_write(struct vfsmount *mnt)
+ 	 */
+ 	while (mnt->mnt_flags & MNT_WRITE_HOLD) {
+ 		preempt_enable();
+-		cpu_relax();
++		cpu_chill();
+ 		preempt_disable();
+ 	}
+ 	/*
+-- 
+1.7.10
+

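The dcache patch above applies cpu_chill() specifically to trylock-retry loops: take the first lock, try the second, and on failure drop everything and back off before retrying. A small userspace sketch of that shape (names and the `sched_yield()` backoff are illustrative; the kernel uses `cpu_chill()`):

```c
#include <pthread.h>
#include <sched.h>

/* Acquire two locks without a fixed global ordering guarantee on the
 * second: on trylock failure, release the first lock and back off
 * before retrying, instead of spinning with it held. */
int lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    for (;;) {
        pthread_mutex_lock(a);
        if (pthread_mutex_trylock(b) == 0)
            return 0;              /* both locks held */
        pthread_mutex_unlock(a);   /* drop, chill, retry */
        sched_yield();             /* cpu_chill() in the patch */
    }
}
```

Dropping the first lock before backing off is what makes the loop deadlock-free; the backoff is what makes it live-lock-free on RT.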
Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0264-net-Use-cpu_chill-instead-of-cpu_relax.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0264-net-Use-cpu_chill-instead-of-cpu_relax.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0264-net-Use-cpu_chill-instead-of-cpu_relax.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0264-net-Use-cpu_chill-instead-of-cpu_relax.patch)
@@ -0,0 +1,70 @@
+From 2818e474f3d3f303a1f9860fcb810ed78d3bf87d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 7 Mar 2012 21:10:04 +0100
+Subject: [PATCH 264/271] net: Use cpu_chill() instead of cpu_relax()
+
+Retry loops on RT might loop forever when the modifying side was
+preempted. Use cpu_chill() instead of cpu_relax() to let the system
+make progress.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+---
+ net/packet/af_packet.c |    5 +++--
+ net/rds/ib_rdma.c      |    3 ++-
+ 2 files changed, 5 insertions(+), 3 deletions(-)
+
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index d9d4970..ed8ef1e 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -89,6 +89,7 @@
+ #include <linux/virtio_net.h>
+ #include <linux/errqueue.h>
+ #include <linux/net_tstamp.h>
++#include <linux/delay.h>
+ 
+ #ifdef CONFIG_INET
+ #include <net/inet_common.h>
+@@ -673,7 +674,7 @@ static void prb_retire_rx_blk_timer_expired(unsigned long data)
+ 	if (BLOCK_NUM_PKTS(pbd)) {
+ 		while (atomic_read(&pkc->blk_fill_in_prog)) {
+ 			/* Waiting for skb_copy_bits to finish... */
+-			cpu_relax();
++			cpu_chill();
+ 		}
+ 	}
+ 
+@@ -928,7 +929,7 @@ static void prb_retire_current_block(struct tpacket_kbdq_core *pkc,
+ 		if (!(status & TP_STATUS_BLK_TMO)) {
+ 			while (atomic_read(&pkc->blk_fill_in_prog)) {
+ 				/* Waiting for skb_copy_bits to finish... */
+-				cpu_relax();
++				cpu_chill();
+ 			}
+ 		}
+ 		prb_close_block(pkc, pbd, po, status);
+diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
+index e8fdb17..5a44c6e 100644
+--- a/net/rds/ib_rdma.c
++++ b/net/rds/ib_rdma.c
+@@ -34,6 +34,7 @@
+ #include <linux/slab.h>
+ #include <linux/rculist.h>
+ #include <linux/llist.h>
++#include <linux/delay.h>
+ 
+ #include "rds.h"
+ #include "ib.h"
+@@ -286,7 +287,7 @@ static inline void wait_clean_list_grace(void)
+ 	for_each_online_cpu(cpu) {
+ 		flag = &per_cpu(clean_list_grace, cpu);
+ 		while (test_bit(CLEAN_LIST_BUSY_BIT, flag))
+-			cpu_relax();
++			cpu_chill();
+ 	}
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0265-kconfig-disable-a-few-options-rt.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0265-kconfig-disable-a-few-options-rt.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0265-kconfig-disable-a-few-options-rt.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0265-kconfig-disable-a-few-options-rt.patch.patch)
@@ -0,0 +1,54 @@
+From 08aecc40057694874911b9903c59231d80bea040 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Sun, 24 Jul 2011 12:11:43 +0200
+Subject: [PATCH 265/271] kconfig-disable-a-few-options-rt.patch
+
+Disable stuff which is known to have issues on RT
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ arch/Kconfig        |    1 +
+ drivers/net/Kconfig |    1 +
+ mm/Kconfig          |    2 +-
+ 3 files changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 4b0669c..73fc7cd 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -6,6 +6,7 @@ config OPROFILE
+ 	tristate "OProfile system profiling"
+ 	depends on PROFILING
+ 	depends on HAVE_OPROFILE
++	depends on !PREEMPT_RT_FULL
+ 	select RING_BUFFER
+ 	select RING_BUFFER_ALLOW_SWAP
+ 	help
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index 654a5e9..0f5149d 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -154,6 +154,7 @@ config MACVTAP
+ 
+ config NETCONSOLE
+ 	tristate "Network console logging support"
++	depends on !PREEMPT_RT_FULL
+ 	---help---
+ 	If you want to log kernel messages over the network, enable this.
+ 	See <file:Documentation/networking/netconsole.txt> for details.
+diff --git a/mm/Kconfig b/mm/Kconfig
+index 011b110..0526445 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -307,7 +307,7 @@ config NOMMU_INITIAL_TRIM_EXCESS
+ 
+ config TRANSPARENT_HUGEPAGE
+ 	bool "Transparent Hugepage Support"
+-	depends on X86 && MMU
++	depends on X86 && MMU && !PREEMPT_RT_FULL
+ 	select COMPACTION
+ 	help
+ 	  Transparent Hugepages allows the kernel to use huge pages and
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0266-kconfig-preempt-rt-full.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0266-kconfig-preempt-rt-full.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0266-kconfig-preempt-rt-full.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0266-kconfig-preempt-rt-full.patch.patch)
@@ -0,0 +1,65 @@
+From a298ba591146845659546839c65f2d2b61cac361 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Wed, 29 Jun 2011 14:58:57 +0200
+Subject: [PATCH 266/271] kconfig-preempt-rt-full.patch
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+---
+ init/Makefile          |    2 +-
+ kernel/Kconfig.preempt |    7 +++++++
+ scripts/mkcompile_h    |    4 +++-
+ 3 files changed, 11 insertions(+), 2 deletions(-)
+
+diff --git a/init/Makefile b/init/Makefile
+index 0bf677a..6b473cd 100644
+--- a/init/Makefile
++++ b/init/Makefile
+@@ -29,4 +29,4 @@ silent_chk_compile.h = :
+ include/generated/compile.h: FORCE
+ 	@$($(quiet)chk_compile.h)
+ 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkcompile_h $@ \
+-	"$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CC) $(KBUILD_CFLAGS)"
++	"$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CONFIG_PREEMPT_RT_FULL)" "$(CC) $(KBUILD_CFLAGS)"
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index 35c6f20..d0e9372 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -66,6 +66,13 @@ config PREEMPT_RTB
+ 	  enables changes which are preliminary for the full preemptiple
+ 	  RT kernel.
+ 
++config PREEMPT_RT_FULL
++	bool "Fully Preemptible Kernel (RT)"
++	depends on IRQ_FORCED_THREADING
++	select PREEMPT_RT_BASE
++	help
++	  All and everything
++
+ endchoice
+ 
+ config PREEMPT_COUNT
+diff --git a/scripts/mkcompile_h b/scripts/mkcompile_h
+index f221ddf..5f44009 100755
+--- a/scripts/mkcompile_h
++++ b/scripts/mkcompile_h
+@@ -4,7 +4,8 @@ TARGET=$1
+ ARCH=$2
+ SMP=$3
+ PREEMPT=$4
+-CC=$5
++RT=$5
++CC=$6
+ 
+ vecho() { [ "${quiet}" = "silent_" ] || echo "$@" ; }
+ 
+@@ -57,6 +58,7 @@ UTS_VERSION="#$VERSION"
+ CONFIG_FLAGS=""
+ if [ -n "$SMP" ] ; then CONFIG_FLAGS="SMP"; fi
+ if [ -n "$PREEMPT" ] ; then CONFIG_FLAGS="$CONFIG_FLAGS PREEMPT"; fi
++if [ -n "$RT" ] ; then CONFIG_FLAGS="$CONFIG_FLAGS RT"; fi
+ UTS_VERSION="$UTS_VERSION $CONFIG_FLAGS $TIMESTAMP"
+ 
+ # Truncate to maximum length
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0267-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0267-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0267-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0267-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch)
@@ -0,0 +1,54 @@
+From b3c1becb96c9c5507a39012a630457263a319cf4 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <srostedt at redhat.com>
+Date: Mon, 16 Apr 2012 21:51:54 -0400
+Subject: [PATCH 267/271] rt: Make migrate_disable/enable() and
+ __rt_mutex_init non-GPL only
+
+Modules that load on the normal vanilla kernel should also load on
+an -rt kernel. This does not mean we condone non-GPL modules;
+we are only being consistent.
+
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+---
+ kernel/rtmutex.c |    2 +-
+ kernel/sched.c   |    4 ++--
+ 2 files changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
+index b525158..9c4f6e5 100644
+--- a/kernel/rtmutex.c
++++ b/kernel/rtmutex.c
+@@ -1290,7 +1290,7 @@ void __rt_mutex_init(struct rt_mutex *lock, const char *name)
+ 
+ 	debug_rt_mutex_init(lock, name);
+ }
+-EXPORT_SYMBOL_GPL(__rt_mutex_init);
++EXPORT_SYMBOL(__rt_mutex_init);
+ 
+ /**
+  * rt_mutex_init_proxy_locked - initialize and lock a rt_mutex on behalf of a
+diff --git a/kernel/sched.c b/kernel/sched.c
+index 95ae97c..20f5b6a 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4425,7 +4425,7 @@ void migrate_disable(void)
+ 	p->migrate_disable = 1;
+ 	preempt_enable();
+ }
+-EXPORT_SYMBOL_GPL(migrate_disable);
++EXPORT_SYMBOL(migrate_disable);
+ 
+ void migrate_enable(void)
+ {
+@@ -4477,7 +4477,7 @@ void migrate_enable(void)
+ 	unpin_current_cpu();
+ 	preempt_enable();
+ }
+-EXPORT_SYMBOL_GPL(migrate_enable);
++EXPORT_SYMBOL(migrate_enable);
+ #else
+ static inline void update_migrate_disable(struct task_struct *p) { }
+ #define migrate_disabled_updated(p)		0
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0268-scsi-qla2xxx-Use-local_irq_save_nort-in-qla2x00_poll.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0268-scsi-qla2xxx-Use-local_irq_save_nort-in-qla2x00_poll.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0268-scsi-qla2xxx-Use-local_irq_save_nort-in-qla2x00_poll.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0268-scsi-qla2xxx-Use-local_irq_save_nort-in-qla2x00_poll.patch)
@@ -0,0 +1,55 @@
+From 0bb0dcc93417eebf25235aefda523ed3733d877b Mon Sep 17 00:00:00 2001
+From: John Kacur <jkacur at redhat.com>
+Date: Fri, 27 Apr 2012 12:48:46 +0200
+Subject: [PATCH 268/271] scsi: qla2xxx: Use local_irq_save_nort() in
+ qla2x00_poll
+
+RT triggers the following:
+
+[   11.307652]  [<ffffffff81077b27>] __might_sleep+0xe7/0x110
+[   11.307663]  [<ffffffff8150e524>] rt_spin_lock+0x24/0x60
+[   11.307670]  [<ffffffff8150da78>] ? rt_spin_lock_slowunlock+0x78/0x90
+[   11.307703]  [<ffffffffa0272d83>] qla24xx_intr_handler+0x63/0x2d0 [qla2xxx]
+[   11.307736]  [<ffffffffa0262307>] qla2x00_poll+0x67/0x90 [qla2xxx]
+
+Function qla2x00_poll does local_irq_save() before calling qla24xx_intr_handler
+which has a spinlock. Since spinlocks are sleepable on rt, it is not allowed
+to call them with interrupts disabled. Therefore we use local_irq_save_nort()
+instead which saves flags without disabling interrupts.
+
+This fix needs to be applied to v3.0-rt, v3.2-rt and v3.4-rt
+
+Suggested-by: Thomas Gleixner
+Signed-off-by: John Kacur <jkacur at redhat.com>
+Cc: Steven Rostedt <rostedt at goodmis.org>
+Cc: David Sommerseth <davids at redhat.com>
+Link: http://lkml.kernel.org/r/1335523726-10024-1-git-send-email-jkacur@redhat.com
+Cc: stable-rt at vger.kernel.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+---
+ drivers/scsi/qla2xxx/qla_inline.h |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index 9902834..6d01db6 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -36,12 +36,12 @@ qla2x00_poll(struct rsp_que *rsp)
+ {
+ 	unsigned long flags;
+ 	struct qla_hw_data *ha = rsp->hw;
+-	local_irq_save(flags);
++	local_irq_save_nort(flags);
+ 	if (IS_QLA82XX(ha))
+ 		qla82xx_poll(0, rsp);
+ 	else
+ 		ha->isp_ops->intr_handler(0, rsp);
+-	local_irq_restore(flags);
++	local_irq_restore_nort(flags);
+ }
+ 
+ static inline uint8_t *
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0269-net-RT-REmove-preemption-disabling-in-netif_rx.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0269-net-RT-REmove-preemption-disabling-in-netif_rx.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0269-net-RT-REmove-preemption-disabling-in-netif_rx.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0269-net-RT-REmove-preemption-disabling-in-netif_rx.patch)
@@ -0,0 +1,70 @@
+From e5bcba89face7647749907f8a5da0d10442d04a6 Mon Sep 17 00:00:00 2001
+From: Priyanka Jain <Priyanka.Jain at freescale.com>
+Date: Thu, 17 May 2012 09:35:11 +0530
+Subject: [PATCH 269/271] net,RT:REmove preemption disabling in netif_rx()
+
+1)enqueue_to_backlog() (called from netif_rx) should be
+  bound to a particular CPU. This can be achieved by
+  disabling migration. No need to disable preemption.
+
+2)Fixes crash "BUG: scheduling while atomic: ksoftirqd"
+  in case of RT.
+  If preemption is disabled, enqueue_to_backlog() is called
+  in atomic context. And if backlog exceeds its count,
+  kfree_skb() is called. But in RT, kfree_skb() might
+  get scheduled out, so it expects a non-atomic context.
+
+3)When CONFIG_PREEMPT_RT_FULL is not defined,
+ migrate_enable(), migrate_disable() maps to
+ preempt_enable() and preempt_disable(), so no
+ change in functionality in case of non-RT.
+
+-Replace preempt_enable(), preempt_disable() with
+ migrate_enable(), migrate_disable() respectively
+-Replace get_cpu(), put_cpu() with get_cpu_light(),
+ put_cpu_light() respectively
+
+Signed-off-by: Priyanka Jain <Priyanka.Jain at freescale.com>
+Acked-by: Rajan Srivastava <Rajan.Srivastava at freescale.com>
+Cc: <rostedt at goodmis.orgn>
+Link: http://lkml.kernel.org/r/1337227511-2271-1-git-send-email-Priyanka.Jain@freescale.com
+Cc: stable-rt at vger.kernel.org
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+---
+ net/core/dev.c |    8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 30c7a9e..9085ef7 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3004,7 +3004,7 @@ int netif_rx(struct sk_buff *skb)
+ 		struct rps_dev_flow voidflow, *rflow = &voidflow;
+ 		int cpu;
+ 
+-		preempt_disable();
++		migrate_disable();
+ 		rcu_read_lock();
+ 
+ 		cpu = get_rps_cpu(skb->dev, skb, &rflow);
+@@ -3014,13 +3014,13 @@ int netif_rx(struct sk_buff *skb)
+ 		ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
+ 
+ 		rcu_read_unlock();
+-		preempt_enable();
++		migrate_enable();
+ 	}
+ #else
+ 	{
+ 		unsigned int qtail;
+-		ret = enqueue_to_backlog(skb, get_cpu(), &qtail);
+-		put_cpu();
++		ret = enqueue_to_backlog(skb, get_cpu_light(), &qtail);
++		put_cpu_light();
+ 	}
+ #endif
+ 	return ret;
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0270-mips-remove-smp-reserve-lock.patch.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0270-mips-remove-smp-reserve-lock.patch.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0270-mips-remove-smp-reserve-lock.patch.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0270-mips-remove-smp-reserve-lock.patch.patch)
@@ -0,0 +1,48 @@
+From 002636683a8da0e4ed480d13c038ee06c977d01e Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 22 May 2012 21:15:10 +0200
+Subject: [PATCH 270/271] mips-remove-smp-reserve-lock.patch
+
+Instead of making the lock raw, remove it as it protects nothing.
+
+Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
+Cc: stable-rt at vger.kernel.org
+Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
+---
+ arch/mips/cavium-octeon/smp.c |    6 ------
+ 1 file changed, 6 deletions(-)
+
+diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c
+index efcfff4..86fce15 100644
+--- a/arch/mips/cavium-octeon/smp.c
++++ b/arch/mips/cavium-octeon/smp.c
+@@ -257,8 +257,6 @@ DEFINE_PER_CPU(int, cpu_state);
+ 
+ extern void fixup_irqs(void);
+ 
+-static DEFINE_SPINLOCK(smp_reserve_lock);
+-
+ static int octeon_cpu_disable(void)
+ {
+ 	unsigned int cpu = smp_processor_id();
+@@ -266,8 +264,6 @@ static int octeon_cpu_disable(void)
+ 	if (cpu == 0)
+ 		return -EBUSY;
+ 
+-	spin_lock(&smp_reserve_lock);
+-
+ 	cpu_clear(cpu, cpu_online_map);
+ 	cpu_clear(cpu, cpu_callin_map);
+ 	local_irq_disable();
+@@ -277,8 +273,6 @@ static int octeon_cpu_disable(void)
+ 	flush_cache_all();
+ 	local_flush_tlb_all();
+ 
+-	spin_unlock(&smp_reserve_lock);
+-
+ 	return 0;
+ }
+ 
+-- 
+1.7.10
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/rt/0271-Linux-3.2.20-rt32-REBASE.patch (from r19226, dists/sid/linux/debian/patches/features/all/rt/0271-Linux-3.2.20-rt32-REBASE.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/0271-Linux-3.2.20-rt32-REBASE.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/rt/0271-Linux-3.2.20-rt32-REBASE.patch)
@@ -0,0 +1,19 @@
+From 776fb4d39fa4da747d2a0fd1929e3e16f4eeee92 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <srostedt at redhat.com>
+Date: Thu, 7 Jun 2012 11:22:24 -0400
+Subject: [PATCH 271/271] Linux 3.2.20-rt32 REBASE
+
+---
+ localversion-rt |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/localversion-rt b/localversion-rt
+index b2111a2..ce6a482 100644
+--- a/localversion-rt
++++ b/localversion-rt
+@@ -1 +1 @@
+--rt24
++-rt32
+-- 
+1.7.10
+

Modified: dists/squeeze-backports/linux/debian/patches/features/all/rt/series
==============================================================================
--- dists/squeeze-backports/linux/debian/patches/features/all/rt/series	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/patches/features/all/rt/series	Fri Aug 17 02:04:57 2012	(r19325)
@@ -1,267 +1,271 @@
-0001-x86-Call-idle-notifier-after-irq_enter.patch
-0002-slab-lockdep-Annotate-all-slab-caches.patch
-0003-x86-kprobes-Remove-remove-bogus-preempt_enable.patch
-0004-x86-hpet-Disable-MSI-on-Lenovo-W510.patch
-0005-block-Shorten-interrupt-disabled-regions.patch
-0006-sched-Distangle-worker-accounting-from-rq-3Elock.patch
-0007-mips-enable-interrupts-in-signal.patch.patch
-0008-arm-enable-interrupts-in-signal-code.patch.patch
-0009-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch
-0010-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch
-0011-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch
-0012-powerpc-Allow-irq-threading.patch
-0013-sched-Keep-period-timer-ticking-when-throttling-acti.patch
-0014-sched-Do-not-throttle-due-to-PI-boosting.patch
-0015-time-Remove-bogus-comments.patch
-0016-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch
-0017-x86-vdso-Use-seqcount-instead-of-seqlock.patch
-0018-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch
-0019-seqlock-Remove-unused-functions.patch
-0020-seqlock-Use-seqcount.patch
-0021-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch
-0022-timekeeping-Split-xtime_lock.patch
-0023-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch
-0024-mm-memcg-shorten-preempt-disabled-section-around-eve.patch
-0025-tracing-Account-for-preempt-off-in-preempt_schedule.patch
-0026-signal-revert-ptrace-preempt-magic.patch.patch
-0027-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch
-0028-arm-Allow-forced-irq-threading.patch
-0029-preempt-rt-Convert-arm-boot_lock-to-raw.patch
-0030-sched-Create-schedule_preempt_disabled.patch
-0031-sched-Use-schedule_preempt_disabled.patch
-0032-signals-Do-not-wakeup-self.patch
-0033-posix-timers-Prevent-broadcast-signals.patch
-0034-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch
-0035-signal-x86-Delay-calling-signals-in-atomic.patch
-0036-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch
-0037-drivers-random-Reduce-preempt-disabled-region.patch
-0038-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch
-0039-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch
-0040-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch
-0041-drivers-net-Use-disable_irq_nosync-in-8139too.patch
-0042-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch
-0043-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch
-0044-preempt-mark-legitimated-no-resched-sites.patch.patch
-0045-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch
-0046-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch
-0047-mm-pagefault_disabled.patch
-0048-mm-raw_pagefault_disable.patch
-0049-filemap-fix-up.patch.patch
-0050-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch
-0051-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch
-0052-suspend-Prevent-might-sleep-splats.patch
-0053-OF-Fixup-resursive-locking-code-paths.patch
-0054-of-convert-devtree-lock.patch.patch
-0055-list-add-list-last-entry.patch.patch
-0056-mm-page-alloc-use-list-last-entry.patch.patch
-0057-mm-slab-move-debug-out.patch.patch
-0058-rwsem-inlcude-fix.patch.patch
-0059-sysctl-include-fix.patch.patch
-0060-net-flip-lock-dep-thingy.patch.patch
-0061-softirq-thread-do-softirq.patch.patch
-0062-softirq-split-out-code.patch.patch
-0063-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch
-0064-x86-32-fix-signal-crap.patch.patch
-0065-x86-Do-not-disable-preemption-in-int3-on-32bit.patch
-0066-rcu-Reduce-lock-section.patch
-0067-locking-various-init-fixes.patch.patch
-0068-wait-Provide-__wake_up_all_locked.patch
-0069-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch
-0070-latency-hist.patch.patch
-0071-hwlatdetect.patch.patch
-0072-localversion.patch.patch
-0073-early-printk-consolidate.patch.patch
-0074-printk-kill.patch.patch
-0075-printk-force_early_printk-boot-param-to-help-with-de.patch
-0076-rt-preempt-base-config.patch.patch
-0077-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch
-0078-rt-local_irq_-variants-depending-on-RT-RT.patch
-0079-preempt-Provide-preempt_-_-no-rt-variants.patch
-0080-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch
-0081-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch
-0082-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch
-0083-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch
-0084-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch
-0085-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch
-0086-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch
-0087-usb-Use-local_irq_-_nort-variants.patch
-0088-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch
-0089-mm-scatterlist-dont-disable-irqs-on-RT.patch
-0090-signal-fix-up-rcu-wreckage.patch.patch
-0091-net-wireless-warn-nort.patch.patch
-0092-mm-Replace-cgroup_page-bit-spinlock.patch
-0093-buffer_head-Replace-bh_uptodate_lock-for-rt.patch
-0094-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch
-0095-genirq-Disable-DEBUG_SHIRQ-for-rt.patch
-0096-genirq-Disable-random-call-on-preempt-rt.patch
-0097-genirq-disable-irqpoll-on-rt.patch
-0098-genirq-force-threading.patch.patch
-0099-drivers-net-fix-livelock-issues.patch
-0100-drivers-net-vortex-fix-locking-issues.patch
-0101-drivers-net-gianfar-Make-RT-aware.patch
-0102-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch
-0103-local-var.patch.patch
-0104-rt-local-irq-lock.patch.patch
-0105-cpu-rt-variants.patch.patch
-0106-mm-slab-wrap-functions.patch.patch
-0107-slab-Fix-__do_drain-to-use-the-right-array-cache.patch
-0108-mm-More-lock-breaks-in-slab.c.patch
-0109-mm-page_alloc-rt-friendly-per-cpu-pages.patch
-0110-mm-page_alloc-reduce-lock-sections-further.patch
-0111-mm-page-alloc-fix.patch.patch
-0112-mm-convert-swap-to-percpu-locked.patch
-0113-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch
-0114-mm-make-vmstat-rt-aware.patch
-0115-mm-shrink-the-page-frame-to-rt-size.patch
-0116-ARM-Initialize-ptl-lock-for-vector-page.patch
-0117-mm-Allow-only-slab-on-RT.patch
-0118-radix-tree-rt-aware.patch.patch
-0119-panic-disable-random-on-rt.patch
-0120-ipc-Make-the-ipc-code-rt-aware.patch
-0121-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch
-0122-relay-fix-timer-madness.patch
-0123-net-ipv4-route-use-locks-on-up-rt.patch.patch
-0124-workqueue-avoid-the-lock-in-cpu-dying.patch.patch
-0125-timers-prepare-for-full-preemption.patch
-0126-timers-preempt-rt-support.patch
-0127-timers-fix-timer-hotplug-on-rt.patch
-0128-timers-mov-printk_tick-to-soft-interrupt.patch
-0129-timer-delay-waking-softirqs-from-the-jiffy-tick.patch
-0130-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch
-0131-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch
-0132-hrtimers-prepare-full-preemption.patch
-0133-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
-0134-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch
-0135-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch
-0136-hrtimer-fix-reprogram-madness.patch.patch
-0137-timer-fd-Prevent-live-lock.patch
-0138-posix-timers-thread-posix-cpu-timers-on-rt.patch
-0139-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch
-0140-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch
-0141-sched-delay-put-task.patch.patch
-0142-sched-limit-nr-migrate.patch.patch
-0143-sched-mmdrop-delayed.patch.patch
-0144-sched-rt-mutex-wakeup.patch.patch
-0145-sched-prevent-idle-boost.patch.patch
-0146-sched-might-sleep-do-not-account-rcu-depth.patch.patch
-0147-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch
-0148-sched-cond-resched.patch.patch
-0149-cond-resched-softirq-fix.patch.patch
-0150-sched-no-work-when-pi-blocked.patch.patch
-0151-cond-resched-lock-rt-tweak.patch.patch
-0152-sched-disable-ttwu-queue.patch.patch
-0153-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch
-0154-sched-ttwu-Return-success-when-only-changing-the-sav.patch
-0155-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch
-0156-stomp-machine-mark-stomper-thread.patch.patch
-0157-stomp-machine-raw-lock.patch.patch
-0158-hotplug-Lightweight-get-online-cpus.patch
-0159-hotplug-sync_unplug-No.patch
-0160-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch
-0161-sched-migrate-disable.patch.patch
-0162-hotplug-use-migrate-disable.patch.patch
-0163-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch
-0164-ftrace-migrate-disable-tracing.patch.patch
-0165-tracing-Show-padding-as-unsigned-short.patch
-0166-migrate-disable-rt-variant.patch.patch
-0167-sched-Optimize-migrate_disable.patch
-0168-sched-Generic-migrate_disable.patch
-0169-sched-rt-Fix-migrate_enable-thinko.patch
-0170-sched-teach-migrate_disable-about-atomic-contexts.patch
-0171-sched-Postpone-actual-migration-disalbe-to-schedule.patch
-0172-sched-Do-not-compare-cpu-masks-in-scheduler.patch
-0173-sched-Have-migrate_disable-ignore-bounded-threads.patch
-0174-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch
-0175-ftrace-crap.patch.patch
-0176-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch
-0177-net-netif_rx_ni-migrate-disable.patch.patch
-0178-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch
-0179-lockdep-rt.patch.patch
-0180-mutex-no-spin-on-rt.patch.patch
-0181-softirq-local-lock.patch.patch
-0182-softirq-Export-in_serving_softirq.patch
-0183-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch
-0184-softirq-Fix-unplug-deadlock.patch
-0185-softirq-disable-softirq-stacks-for-rt.patch.patch
-0186-softirq-make-fifo.patch.patch
-0187-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch
-0188-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch
-0189-local-vars-migrate-disable.patch.patch
-0190-md-raid5-Make-raid5_percpu-handling-RT-aware.patch
-0191-rtmutex-lock-killable.patch.patch
-0192-rtmutex-futex-prepare-rt.patch.patch
-0193-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
-0194-rt-mutex-add-sleeping-spinlocks-support.patch.patch
-0195-spinlock-types-separate-raw.patch.patch
-0196-rtmutex-avoid-include-hell.patch.patch
-0197-rt-add-rt-spinlocks.patch.patch
-0198-rt-add-rt-to-mutex-headers.patch.patch
-0199-rwsem-add-rt-variant.patch.patch
-0200-rt-Add-the-preempt-rt-lock-replacement-APIs.patch
-0201-rwlocks-Fix-section-mismatch.patch
-0202-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch
-0203-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch
-0204-rcu-Frob-softirq-test.patch
-0205-rcu-Merge-RCU-bh-into-RCU-preempt.patch
-0206-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch
-0207-rcu-more-fallout.patch.patch
-0208-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch
-0209-rt-rcutree-Move-misplaced-prototype.patch
-0210-lglocks-rt.patch.patch
-0211-serial-8250-Clean-up-the-locking-for-rt.patch
-0212-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch
-0213-drivers-tty-fix-omap-lock-crap.patch.patch
-0214-rt-Improve-the-serial-console-PASS_LIMIT.patch
-0215-fs-namespace-preemption-fix.patch
-0216-mm-protect-activate-switch-mm.patch.patch
-0217-fs-block-rt-support.patch.patch
-0218-fs-ntfs-disable-interrupt-only-on-RT.patch
-0219-x86-Convert-mce-timer-to-hrtimer.patch
-0220-x86-stackprotector-Avoid-random-pool-on-rt.patch
-0221-x86-Use-generic-rwsem_spinlocks-on-rt.patch
-0222-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch
-0223-workqueue-use-get-cpu-light.patch.patch
-0224-epoll.patch.patch
-0225-mm-vmalloc.patch.patch
-0226-workqueue-Fix-cpuhotplug-trainwreck.patch
-0227-workqueue-Fix-PF_THREAD_BOUND-abuse.patch
-0228-workqueue-Use-get_cpu_light-in-flush_gcwq.patch
-0229-hotplug-stuff.patch.patch
-0230-debugobjects-rt.patch.patch
-0231-jump-label-rt.patch.patch
-0232-skbufhead-raw-lock.patch.patch
-0233-x86-no-perf-irq-work-rt.patch.patch
-0234-console-make-rt-friendly.patch.patch
-0235-printk-Disable-migration-instead-of-preemption.patch
-0236-power-use-generic-rwsem-on-rt.patch
-0237-power-disable-highmem-on-rt.patch.patch
-0238-arm-disable-highmem-on-rt.patch.patch
-0239-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch
-0240-mips-disable-highmem-on-rt.patch.patch
-0241-net-Avoid-livelock-in-net_tx_action-on-RT.patch
-0242-ping-sysrq.patch.patch
-0243-kgdb-serial-Short-term-workaround.patch
-0244-add-sys-kernel-realtime-entry.patch
-0245-mm-rt-kmap_atomic-scheduling.patch
-0246-ipc-sem-Rework-semaphore-wakeups.patch
-0247-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch
-0248-x86-kvm-require-const-tsc-for-rt.patch.patch
-0249-scsi-fcoe-rt-aware.patch.patch
-0250-x86-crypto-Reduce-preempt-disabled-regions.patch
-0251-dm-Make-rt-aware.patch
-0252-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch
-0253-seqlock-Prevent-rt-starvation.patch
-0254-timer-Fix-hotplug-for-rt.patch
-0255-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch
-0256-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch
-0257-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch
-0258-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch
-0259-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch
-0260-softirq-Check-preemption-after-reenabling-interrupts.patch
-0261-rt-Introduce-cpu_chill.patch
-0262-fs-dcache-Use-cpu_chill-in-trylock-loops.patch
-0263-net-Use-cpu_chill-instead-of-cpu_relax.patch
-0264-kconfig-disable-a-few-options-rt.patch.patch
-0265-kconfig-preempt-rt-full.patch.patch
-0266-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch
-0267-Linux-3.2.16-rt27-REBASE.patch
+0001-Revert-workqueue-skip-nr_running-sanity-check-in-wor.patch
+0002-x86-Call-idle-notifier-after-irq_enter.patch
+0003-slab-lockdep-Annotate-all-slab-caches.patch
+0004-x86-kprobes-Remove-remove-bogus-preempt_enable.patch
+0005-x86-hpet-Disable-MSI-on-Lenovo-W510.patch
+0006-block-Shorten-interrupt-disabled-regions.patch
+0007-sched-Distangle-worker-accounting-from-rq-3Elock.patch
+0008-mips-enable-interrupts-in-signal.patch.patch
+0009-arm-enable-interrupts-in-signal-code.patch.patch
+0010-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch
+0011-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch
+0012-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch
+0013-powerpc-Allow-irq-threading.patch
+0014-sched-Keep-period-timer-ticking-when-throttling-acti.patch
+0015-sched-Do-not-throttle-due-to-PI-boosting.patch
+0016-time-Remove-bogus-comments.patch
+0017-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch
+0018-x86-vdso-Use-seqcount-instead-of-seqlock.patch
+0019-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch
+0020-seqlock-Remove-unused-functions.patch
+0021-seqlock-Use-seqcount.patch
+0022-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch
+0023-timekeeping-Split-xtime_lock.patch
+0024-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch
+0025-mm-memcg-shorten-preempt-disabled-section-around-eve.patch
+0026-tracing-Account-for-preempt-off-in-preempt_schedule.patch
+0027-signal-revert-ptrace-preempt-magic.patch.patch
+0028-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch
+0029-arm-Allow-forced-irq-threading.patch
+0030-preempt-rt-Convert-arm-boot_lock-to-raw.patch
+0031-sched-Create-schedule_preempt_disabled.patch
+0032-sched-Use-schedule_preempt_disabled.patch
+0033-signals-Do-not-wakeup-self.patch
+0034-posix-timers-Prevent-broadcast-signals.patch
+0035-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch
+0036-signal-x86-Delay-calling-signals-in-atomic.patch
+0037-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch
+0038-drivers-random-Reduce-preempt-disabled-region.patch
+0039-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch
+0040-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch
+0041-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch
+0042-drivers-net-Use-disable_irq_nosync-in-8139too.patch
+0043-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch
+0044-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch
+0045-preempt-mark-legitimated-no-resched-sites.patch.patch
+0046-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch
+0047-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch
+0048-mm-pagefault_disabled.patch
+0049-mm-raw_pagefault_disable.patch
+0050-filemap-fix-up.patch.patch
+0051-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch
+0052-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch
+0053-suspend-Prevent-might-sleep-splats.patch
+0054-OF-Fixup-resursive-locking-code-paths.patch
+0055-of-convert-devtree-lock.patch.patch
+0056-list-add-list-last-entry.patch.patch
+0057-mm-page-alloc-use-list-last-entry.patch.patch
+0058-mm-slab-move-debug-out.patch.patch
+0059-rwsem-inlcude-fix.patch.patch
+0060-sysctl-include-fix.patch.patch
+0061-net-flip-lock-dep-thingy.patch.patch
+0062-softirq-thread-do-softirq.patch.patch
+0063-softirq-split-out-code.patch.patch
+0064-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch
+0065-x86-32-fix-signal-crap.patch.patch
+0066-x86-Do-not-disable-preemption-in-int3-on-32bit.patch
+0067-rcu-Reduce-lock-section.patch
+0068-locking-various-init-fixes.patch.patch
+0069-wait-Provide-__wake_up_all_locked.patch
+0070-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch
+0071-latency-hist.patch.patch
+0072-hwlatdetect.patch.patch
+0073-localversion.patch.patch
+0074-early-printk-consolidate.patch.patch
+0075-printk-kill.patch.patch
+0076-printk-force_early_printk-boot-param-to-help-with-de.patch
+0077-rt-preempt-base-config.patch.patch
+0078-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch
+0079-rt-local_irq_-variants-depending-on-RT-RT.patch
+0080-preempt-Provide-preempt_-_-no-rt-variants.patch
+0081-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch
+0082-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch
+0083-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch
+0084-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch
+0085-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch
+0086-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch
+0087-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch
+0088-usb-Use-local_irq_-_nort-variants.patch
+0089-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch
+0090-mm-scatterlist-dont-disable-irqs-on-RT.patch
+0091-signal-fix-up-rcu-wreckage.patch.patch
+0092-net-wireless-warn-nort.patch.patch
+0093-mm-Replace-cgroup_page-bit-spinlock.patch
+0094-buffer_head-Replace-bh_uptodate_lock-for-rt.patch
+0095-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch
+0096-genirq-Disable-DEBUG_SHIRQ-for-rt.patch
+0097-genirq-Disable-random-call-on-preempt-rt.patch
+0098-genirq-disable-irqpoll-on-rt.patch
+0099-genirq-force-threading.patch.patch
+0100-drivers-net-fix-livelock-issues.patch
+0101-drivers-net-vortex-fix-locking-issues.patch
+0102-drivers-net-gianfar-Make-RT-aware.patch
+0103-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch
+0104-local-var.patch.patch
+0105-rt-local-irq-lock.patch.patch
+0106-cpu-rt-variants.patch.patch
+0107-mm-slab-wrap-functions.patch.patch
+0108-slab-Fix-__do_drain-to-use-the-right-array-cache.patch
+0109-mm-More-lock-breaks-in-slab.c.patch
+0110-mm-page_alloc-rt-friendly-per-cpu-pages.patch
+0111-mm-page_alloc-reduce-lock-sections-further.patch
+0112-mm-page-alloc-fix.patch.patch
+0113-mm-convert-swap-to-percpu-locked.patch
+0114-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch
+0115-mm-make-vmstat-rt-aware.patch
+0116-mm-shrink-the-page-frame-to-rt-size.patch
+0117-ARM-Initialize-ptl-lock-for-vector-page.patch
+0118-mm-Allow-only-slab-on-RT.patch
+0119-radix-tree-rt-aware.patch.patch
+0120-panic-disable-random-on-rt.patch
+0121-ipc-Make-the-ipc-code-rt-aware.patch
+0122-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch
+0123-relay-fix-timer-madness.patch
+0124-net-ipv4-route-use-locks-on-up-rt.patch.patch
+0125-workqueue-avoid-the-lock-in-cpu-dying.patch.patch
+0126-timers-prepare-for-full-preemption.patch
+0127-timers-preempt-rt-support.patch
+0128-timers-fix-timer-hotplug-on-rt.patch
+0129-timers-mov-printk_tick-to-soft-interrupt.patch
+0130-timer-delay-waking-softirqs-from-the-jiffy-tick.patch
+0131-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch
+0132-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch
+0133-hrtimers-prepare-full-preemption.patch
+0134-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
+0135-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch
+0136-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch
+0137-hrtimer-fix-reprogram-madness.patch.patch
+0138-timer-fd-Prevent-live-lock.patch
+0139-posix-timers-thread-posix-cpu-timers-on-rt.patch
+0140-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch
+0141-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch
+0142-sched-delay-put-task.patch.patch
+0143-sched-limit-nr-migrate.patch.patch
+0144-sched-mmdrop-delayed.patch.patch
+0145-sched-rt-mutex-wakeup.patch.patch
+0146-sched-prevent-idle-boost.patch.patch
+0147-sched-might-sleep-do-not-account-rcu-depth.patch.patch
+0148-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch
+0149-sched-cond-resched.patch.patch
+0150-cond-resched-softirq-fix.patch.patch
+0151-sched-no-work-when-pi-blocked.patch.patch
+0152-cond-resched-lock-rt-tweak.patch.patch
+0153-sched-disable-ttwu-queue.patch.patch
+0154-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch
+0155-sched-ttwu-Return-success-when-only-changing-the-sav.patch
+0156-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch
+0157-stomp-machine-mark-stomper-thread.patch.patch
+0158-stomp-machine-raw-lock.patch.patch
+0159-hotplug-Lightweight-get-online-cpus.patch
+0160-hotplug-sync_unplug-No.patch
+0161-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch
+0162-sched-migrate-disable.patch.patch
+0163-hotplug-use-migrate-disable.patch.patch
+0164-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch
+0165-ftrace-migrate-disable-tracing.patch.patch
+0166-tracing-Show-padding-as-unsigned-short.patch
+0167-migrate-disable-rt-variant.patch.patch
+0168-sched-Optimize-migrate_disable.patch
+0169-sched-Generic-migrate_disable.patch
+0170-sched-rt-Fix-migrate_enable-thinko.patch
+0171-sched-teach-migrate_disable-about-atomic-contexts.patch
+0172-sched-Postpone-actual-migration-disalbe-to-schedule.patch
+0173-sched-Do-not-compare-cpu-masks-in-scheduler.patch
+0174-sched-Have-migrate_disable-ignore-bounded-threads.patch
+0175-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch
+0176-ftrace-crap.patch.patch
+0177-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch
+0178-net-netif_rx_ni-migrate-disable.patch.patch
+0179-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch
+0180-lockdep-rt.patch.patch
+0181-mutex-no-spin-on-rt.patch.patch
+0182-softirq-local-lock.patch.patch
+0183-softirq-Export-in_serving_softirq.patch
+0184-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch
+0185-softirq-Fix-unplug-deadlock.patch
+0186-softirq-disable-softirq-stacks-for-rt.patch.patch
+0187-softirq-make-fifo.patch.patch
+0188-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch
+0189-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch
+0190-local-vars-migrate-disable.patch.patch
+0191-md-raid5-Make-raid5_percpu-handling-RT-aware.patch
+0192-rtmutex-lock-killable.patch.patch
+0193-rtmutex-futex-prepare-rt.patch.patch
+0194-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
+0195-rt-mutex-add-sleeping-spinlocks-support.patch.patch
+0196-spinlock-types-separate-raw.patch.patch
+0197-rtmutex-avoid-include-hell.patch.patch
+0198-rt-add-rt-spinlocks.patch.patch
+0199-rt-add-rt-to-mutex-headers.patch.patch
+0200-rwsem-add-rt-variant.patch.patch
+0201-rt-Add-the-preempt-rt-lock-replacement-APIs.patch
+0202-rwlocks-Fix-section-mismatch.patch
+0203-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch
+0204-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch
+0205-rcu-Frob-softirq-test.patch
+0206-rcu-Merge-RCU-bh-into-RCU-preempt.patch
+0207-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch
+0208-rcu-more-fallout.patch.patch
+0209-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch
+0210-rt-rcutree-Move-misplaced-prototype.patch
+0211-lglocks-rt.patch.patch
+0212-serial-8250-Clean-up-the-locking-for-rt.patch
+0213-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch
+0214-drivers-tty-fix-omap-lock-crap.patch.patch
+0215-rt-Improve-the-serial-console-PASS_LIMIT.patch
+0216-fs-namespace-preemption-fix.patch
+0217-mm-protect-activate-switch-mm.patch.patch
+0218-fs-block-rt-support.patch.patch
+0219-fs-ntfs-disable-interrupt-only-on-RT.patch
+0220-x86-Convert-mce-timer-to-hrtimer.patch
+0221-x86-stackprotector-Avoid-random-pool-on-rt.patch
+0222-x86-Use-generic-rwsem_spinlocks-on-rt.patch
+0223-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch
+0224-workqueue-use-get-cpu-light.patch.patch
+0225-epoll.patch.patch
+0226-mm-vmalloc.patch.patch
+0227-workqueue-Fix-cpuhotplug-trainwreck.patch
+0228-workqueue-Fix-PF_THREAD_BOUND-abuse.patch
+0229-workqueue-Use-get_cpu_light-in-flush_gcwq.patch
+0230-hotplug-stuff.patch.patch
+0231-debugobjects-rt.patch.patch
+0232-jump-label-rt.patch.patch
+0233-skbufhead-raw-lock.patch.patch
+0234-x86-no-perf-irq-work-rt.patch.patch
+0235-console-make-rt-friendly.patch.patch
+0236-printk-Disable-migration-instead-of-preemption.patch
+0237-power-use-generic-rwsem-on-rt.patch
+0238-power-disable-highmem-on-rt.patch.patch
+0239-arm-disable-highmem-on-rt.patch.patch
+0240-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch
+0241-mips-disable-highmem-on-rt.patch.patch
+0242-net-Avoid-livelock-in-net_tx_action-on-RT.patch
+0243-ping-sysrq.patch.patch
+0244-kgdb-serial-Short-term-workaround.patch
+0245-add-sys-kernel-realtime-entry.patch
+0246-mm-rt-kmap_atomic-scheduling.patch
+0247-ipc-sem-Rework-semaphore-wakeups.patch
+0248-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch
+0249-x86-kvm-require-const-tsc-for-rt.patch.patch
+0250-scsi-fcoe-rt-aware.patch.patch
+0251-x86-crypto-Reduce-preempt-disabled-regions.patch
+0252-dm-Make-rt-aware.patch
+0253-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch
+0254-seqlock-Prevent-rt-starvation.patch
+0255-timer-Fix-hotplug-for-rt.patch
+0256-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch
+0257-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch
+0258-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch
+0259-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch
+0260-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch
+0261-softirq-Check-preemption-after-reenabling-interrupts.patch
+0262-rt-Introduce-cpu_chill.patch
+0263-fs-dcache-Use-cpu_chill-in-trylock-loops.patch
+0264-net-Use-cpu_chill-instead-of-cpu_relax.patch
+0265-kconfig-disable-a-few-options-rt.patch.patch
+0266-kconfig-preempt-rt-full.patch.patch
+0267-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch
+0268-scsi-qla2xxx-Use-local_irq_save_nort-in-qla2x00_poll.patch
+0269-net-RT-REmove-preemption-disabling-in-netif_rx.patch
+0270-mips-remove-smp-reserve-lock.patch.patch
+0271-Linux-3.2.20-rt32-REBASE.patch

Modified: dists/squeeze-backports/linux/debian/patches/features/all/wacom/0026-Input-wacom-return-proper-error-if-usb_get_extra_des.patch
==============================================================================
--- dists/squeeze-backports/linux/debian/patches/features/all/wacom/0026-Input-wacom-return-proper-error-if-usb_get_extra_des.patch	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/patches/features/all/wacom/0026-Input-wacom-return-proper-error-if-usb_get_extra_des.patch	Fri Aug 17 02:04:57 2012	(r19325)
@@ -36,6 +36,3 @@
  			goto out;
  		}
  	}
--- 
-1.7.10
-

Copied: dists/squeeze-backports/linux/debian/patches/features/all/wacom/0027-wacom-do-not-crash-when-retrieving-touch_max.patch (from r19226, dists/sid/linux/debian/patches/features/all/wacom/0027-wacom-do-not-crash-when-retrieving-touch_max.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/wacom/0027-wacom-do-not-crash-when-retrieving-touch_max.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/wacom/0027-wacom-do-not-crash-when-retrieving-touch_max.patch)
@@ -0,0 +1,44 @@
+From: Ping Cheng <pinglinux at gmail.com>
+Date: Sun, 24 Jun 2012 09:48:03 -0500
+Subject: wacom: do not crash when retrieving touch_max
+Bug-Debian: http://bugs.debian.org/678798
+
+When rep_data was an array
+
+	unsigned char rep_data[2];
+
+spelling its address as &rep_data was perfectly valid, but now that
+it is dynamically allocated
+
+	unsigned char *rep_data = kmalloc(2, GFP_KERNEL);
+
+that expression returns a pointer to the pointer rather than to the
+array itself.  Regression introduced by commit f393ee2b814e (Input:
+wacom - retrieve maximum number of touch points, 2012-04-29).
+
+[jn: from mailing list discussion, with new description.
+ This change is also available as part of a larger commit in the
+ input-wacom repository.]
+
+Signed-off-by: Ping Cheng <pingc at wacom.com>
+Signed-off-by: Jonathan Nieder <jrnieder at gmail.com>
+---
+ drivers/input/tablet/wacom_sys.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/input/tablet/wacom_sys.c b/drivers/input/tablet/wacom_sys.c
+index cad5602d3ce4..6b1cd71ba320 100644
+--- a/drivers/input/tablet/wacom_sys.c
++++ b/drivers/input/tablet/wacom_sys.c
+@@ -216,7 +216,7 @@ static void wacom_retrieve_report_data(struct usb_interface *intf,
+ 
+ 		rep_data[0] = 12;
+ 		result = wacom_get_report(intf, WAC_HID_FEATURE_REPORT,
+-					  rep_data[0], &rep_data, 2,
++					  rep_data[0], rep_data, 2,
+ 					  WAC_MSG_RETRIES);
+ 
+ 		if (result >= 0 && rep_data[1] > 2)
+-- 
+1.7.11.rc3
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/wacom/0028-wacom-leave-touch_max-as-is-if-predefined.patch (from r19226, dists/sid/linux/debian/patches/features/all/wacom/0028-wacom-leave-touch_max-as-is-if-predefined.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/wacom/0028-wacom-leave-touch_max-as-is-if-predefined.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/wacom/0028-wacom-leave-touch_max-as-is-if-predefined.patch)
@@ -0,0 +1,37 @@
+From: Ping Cheng <pinglinux at gmail.com>
+Date: Sun, 24 Jun 2012 23:00:29 -0500
+Subject: wacom: leave touch_max as is if predefined
+Bug-Debian: http://bugs.debian.org/677164
+
+Another fixup to f393ee2b814e (Input: wacom - retrieve maximum number
+of touch points, 2012-04-29).  The 0xE6 tablet in the Thinkpad x220t
+reports the wrong value for MAXCONTACTS so the hardcoded value must
+take precedence.
+
+[jn: extracted from a larger commit in the input-wacom repository,
+ with new description]
+
+Signed-off-by: Ping Cheng <pingc at wacom.com>
+Signed-off-by: Jonathan Nieder <jrnieder at gmail.com>
+---
+ drivers/input/tablet/wacom_sys.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/input/tablet/wacom_sys.c b/drivers/input/tablet/wacom_sys.c
+index 6b1cd71ba320..8b31473a81fe 100644
+--- a/drivers/input/tablet/wacom_sys.c
++++ b/drivers/input/tablet/wacom_sys.c
+@@ -401,7 +401,9 @@ static int wacom_parse_hid(struct usb_interface *intf,
+ 				break;
+ 
+ 			case HID_USAGE_CONTACTMAX:
+-				wacom_retrieve_report_data(intf, features);
++				/* leave touch_max as is if predefined */
++				if (!features->touch_max)
++					wacom_retrieve_report_data(intf, features);
+ 				i++;
+ 				break;
+ 			}
+-- 
+1.7.11.rc3
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/wacom/0029-wacom-do-not-request-tablet-data-on-MT-Tablet-PC-pen.patch (from r19226, dists/sid/linux/debian/patches/features/all/wacom/0029-wacom-do-not-request-tablet-data-on-MT-Tablet-PC-pen.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/wacom/0029-wacom-do-not-request-tablet-data-on-MT-Tablet-PC-pen.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/wacom/0029-wacom-do-not-request-tablet-data-on-MT-Tablet-PC-pen.patch)
@@ -0,0 +1,37 @@
+From: Ping Cheng <pinglinux at gmail.com>
+Date: Sun, 24 Jun 2012 23:29:29 -0500
+Subject: wacom: do not request tablet data on MT Tablet PC pen interface
+Bug-Debian: http://bugs.debian.org/677164
+
+When in commit 1963518b9b1b (Input: wacom - add 0xE5 (MT device)
+support, 2012-04-29) the driver stopped asking for multitouch tablet
+data on the pen interface of a tablet PC, as a side effect we started
+executing the "else" to that if statement.  Oops.
+
+This is needed for the 0xE6 tablet in the Thinkpad x220t to be usable
+again.  Meanwhile the 0xE3 works fine without this.  Not sure why. -jn
+
+[jn: extracted from a larger commit in the input-wacom repository,
+ with new description]
+
+Signed-off-by: Ping Cheng <pingc at wacom.com>
+Signed-off-by: Jonathan Nieder <jrnieder at gmail.com>
+---
+ drivers/input/tablet/wacom_sys.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/input/tablet/wacom_sys.c b/drivers/input/tablet/wacom_sys.c
+index 8b31473a81fe..19e4725858dd 100644
+--- a/drivers/input/tablet/wacom_sys.c
++++ b/drivers/input/tablet/wacom_sys.c
+@@ -467,6 +467,7 @@ static int wacom_query_tablet_data(struct usb_interface *intf, struct wacom_feat
+ 		}
+ 	} else if (features->type != TABLETPC &&
+ 		   features->type != WIRELESS &&
++		   features->type != TABLETPC2FG &&
+ 		   features->device_type == BTN_TOOL_PEN) {
+ 		do {
+ 			rep_data[0] = 2;
+-- 
+1.7.11.rc3
+

Copied: dists/squeeze-backports/linux/debian/patches/features/all/wacom/0030-wacom-ignore-new-style-Wacom-multi-touch-packets-on-.patch (from r19226, dists/sid/linux/debian/patches/features/all/wacom/0030-wacom-ignore-new-style-Wacom-multi-touch-packets-on-.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/all/wacom/0030-wacom-ignore-new-style-Wacom-multi-touch-packets-on-.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/all/wacom/0030-wacom-ignore-new-style-Wacom-multi-touch-packets-on-.patch)
@@ -0,0 +1,50 @@
+From: Ping Cheng <pinglinux at gmail.com>
+Date: Sun, 24 Jun 2012 23:44:46 -0500
+Subject: wacom: ignore new-style Wacom multi touch packets on MT Tablet PC
+Bug-Debian: http://bugs.debian.org/677164
+
+Tablets such as 0xE6 (Thinkpad x220t) already worked fine before
+adding support for the new packet format, so let's drop the
+functionality for such devices for now.  Meanwhile 0xE5 can still use
+the new packet format.
+
+This should bring the behavior of TABLETPC2FG devices closer to that
+from before 1963518b9b1b (Input: wacom - add 0xE5 (MT device) support,
+2012-04-29).
+
+[jn: extracted from a larger commit in the input-wacom repository,
+ with new description]
+
+Signed-off-by: Ping Cheng <pingc at wacom.com>
+Signed-off-by: Jonathan Nieder <jrnieder at gmail.com>
+---
+ drivers/input/tablet/wacom_wac.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/input/tablet/wacom_wac.c b/drivers/input/tablet/wacom_wac.c
+index 004bc1bb1544..d696ab7ecc2b 100644
+--- a/drivers/input/tablet/wacom_wac.c
++++ b/drivers/input/tablet/wacom_wac.c
+@@ -1547,7 +1547,6 @@ int wacom_setup_input_capabilities(struct input_dev *input_dev,
+ 		__set_bit(INPUT_PROP_POINTER, input_dev->propbit);
+ 		break;
+ 
+-	case TABLETPC2FG:
+ 	case MTSCREEN:
+ 		if (features->device_type == BTN_TOOL_FINGER) {
+ 
+@@ -1559,6 +1558,11 @@ int wacom_setup_input_capabilities(struct input_dev *input_dev,
+ 
+ 			for (i = 0; i < features->touch_max; i++)
+ 				wacom_wac->slots[i] = -1;
++		}
++		/* fall through */
++
++	case TABLETPC2FG:
++		if (features->device_type == BTN_TOOL_FINGER) {
+ 
+ 			input_mt_init_slots(input_dev, features->touch_max);
+ 			input_set_abs_params(input_dev, ABS_MT_TOOL_TYPE,
+-- 
+1.7.11.rc3
+

Copied: dists/squeeze-backports/linux/debian/patches/features/arm/ARM-7259-3-net-JIT-compiler-for-packet-filters.patch (from r19226, dists/sid/linux/debian/patches/features/arm/ARM-7259-3-net-JIT-compiler-for-packet-filters.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/ARM-7259-3-net-JIT-compiler-for-packet-filters.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/ARM-7259-3-net-JIT-compiler-for-packet-filters.patch)
@@ -0,0 +1,1183 @@
+From ddecdfcea0ae891f782ae853771c867ab51024c2 Mon Sep 17 00:00:00 2001
+From: Mircea Gherzan <mgherzan at gmail.com>
+Date: Fri, 16 Mar 2012 13:37:12 +0100
+Subject: [PATCH] ARM: 7259/3: net: JIT compiler for packet filters
+
+Based on Matt Evans's PPC64 implementation.
+
+The compiler generates ARM instructions but interworking is
+supported for Thumb2 kernels.
+
+Supports both little and big endian. Unaligned loads are emitted
+for ARMv6+. Not all the BPF opcodes that deal with ancillary data
+are supported. The scratch memory of the filter lives on the stack.
+Hardware integer division is used if it is available.
+
+Enabled in the same way as for x86-64 and PPC64:
+
+	echo 1 > /proc/sys/net/core/bpf_jit_enable
+
+A value greater than 1 enables opcode output.
+
+Signed-off-by: Mircea Gherzan <mgherzan at gmail.com>
+Acked-by: David S. Miller <davem at davemloft.net>
+Acked-by: Eric Dumazet <eric.dumazet at gmail.com>
+Signed-off-by: Russell King <rmk+kernel at arm.linux.org.uk>
+---
+ arch/arm/Kconfig          |    1 +
+ arch/arm/Makefile         |    1 +
+ arch/arm/net/Makefile     |    3 +
+ arch/arm/net/bpf_jit_32.c |  915 +++++++++++++++++++++++++++++++++++++++++++++
+ arch/arm/net/bpf_jit_32.h |  190 ++++++++++
+ 5 files changed, 1110 insertions(+)
+ create mode 100644 arch/arm/net/Makefile
+ create mode 100644 arch/arm/net/bpf_jit_32.c
+ create mode 100644 arch/arm/net/bpf_jit_32.h
+
+Index: linux/arch/arm/Kconfig
+===================================================================
+--- linux.orig/arch/arm/Kconfig	2012-06-20 00:18:30.000000000 +0200
++++ linux/arch/arm/Kconfig	2012-06-24 23:38:52.000000000 +0200
+@@ -30,6 +30,7 @@
+ 	select HAVE_SPARSE_IRQ
+ 	select GENERIC_IRQ_SHOW
+ 	select CPU_PM if (SUSPEND || CPU_IDLE)
++	select HAVE_BPF_JIT
+ 	help
+ 	  The ARM series is a line of low-power-consumption RISC chip designs
+ 	  licensed by ARM Ltd and targeted at embedded applications and
+Index: linux/arch/arm/Makefile
+===================================================================
+--- linux.orig/arch/arm/Makefile	2012-06-20 00:18:30.000000000 +0200
++++ linux/arch/arm/Makefile	2012-06-24 23:38:52.000000000 +0200
+@@ -255,6 +255,7 @@
+ 
+ # If we have a machine-specific directory, then include it in the build.
+ core-y				+= arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
++core-y				+= arch/arm/net/
+ core-y				+= $(machdirs) $(platdirs)
+ 
+ drivers-$(CONFIG_OPROFILE)      += arch/arm/oprofile/
+Index: linux/arch/arm/net/Makefile
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ linux/arch/arm/net/Makefile	2012-06-24 23:38:52.000000000 +0200
+@@ -0,0 +1,3 @@
++# ARM-specific networking code
++
++obj-$(CONFIG_BPF_JIT) += bpf_jit_32.o
+Index: linux/arch/arm/net/bpf_jit_32.c
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ linux/arch/arm/net/bpf_jit_32.c	2012-06-24 23:38:52.000000000 +0200
+@@ -0,0 +1,915 @@
++/*
++ * Just-In-Time compiler for BPF filters on 32bit ARM
++ *
++ * Copyright (c) 2011 Mircea Gherzan <mgherzan at gmail.com>
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the
++ * Free Software Foundation; version 2 of the License.
++ */
++
++#include <linux/bitops.h>
++#include <linux/compiler.h>
++#include <linux/errno.h>
++#include <linux/filter.h>
++#include <linux/moduleloader.h>
++#include <linux/netdevice.h>
++#include <linux/string.h>
++#include <linux/slab.h>
++#include <asm/cacheflush.h>
++#include <asm/hwcap.h>
++
++#include "bpf_jit_32.h"
++
++/*
++ * ABI:
++ *
++ * r0	scratch register
++ * r4	BPF register A
++ * r5	BPF register X
++ * r6	pointer to the skb
++ * r7	skb->data
++ * r8	skb_headlen(skb)
++ */
++
++#define r_scratch	ARM_R0
++/* r1-r3 are (also) used for the unaligned loads on the non-ARMv7 slowpath */
++#define r_off		ARM_R1
++#define r_A		ARM_R4
++#define r_X		ARM_R5
++#define r_skb		ARM_R6
++#define r_skb_data	ARM_R7
++#define r_skb_hl	ARM_R8
++
++#define SCRATCH_SP_OFFSET	0
++#define SCRATCH_OFF(k)		(SCRATCH_SP_OFFSET + (k))
++
++#define SEEN_MEM		((1 << BPF_MEMWORDS) - 1)
++#define SEEN_MEM_WORD(k)	(1 << (k))
++#define SEEN_X			(1 << BPF_MEMWORDS)
++#define SEEN_CALL		(1 << (BPF_MEMWORDS + 1))
++#define SEEN_SKB		(1 << (BPF_MEMWORDS + 2))
++#define SEEN_DATA		(1 << (BPF_MEMWORDS + 3))
++
++#define FLAG_NEED_X_RESET	(1 << 0)
++
++struct jit_ctx {
++	const struct sk_filter *skf;
++	unsigned idx;
++	unsigned prologue_bytes;
++	int ret0_fp_idx;
++	u32 seen;
++	u32 flags;
++	u32 *offsets;
++	u32 *target;
++#if __LINUX_ARM_ARCH__ < 7
++	u16 epilogue_bytes;
++	u16 imm_count;
++	u32 *imms;
++#endif
++};
++
++int bpf_jit_enable __read_mostly;
++
++static u64 jit_get_skb_b(struct sk_buff *skb, unsigned offset)
++{
++	u8 ret;
++	int err;
++
++	err = skb_copy_bits(skb, offset, &ret, 1);
++
++	return (u64)err << 32 | ret;
++}
++
++static u64 jit_get_skb_h(struct sk_buff *skb, unsigned offset)
++{
++	u16 ret;
++	int err;
++
++	err = skb_copy_bits(skb, offset, &ret, 2);
++
++	return (u64)err << 32 | ntohs(ret);
++}
++
++static u64 jit_get_skb_w(struct sk_buff *skb, unsigned offset)
++{
++	u32 ret;
++	int err;
++
++	err = skb_copy_bits(skb, offset, &ret, 4);
++
++	return (u64)err << 32 | ntohl(ret);
++}
++
++/*
++ * Wrapper that handles both OABI and EABI and assures Thumb2 interworking
++ * (where the assembly routines like __aeabi_uidiv could cause problems).
++ */
++static u32 jit_udiv(u32 dividend, u32 divisor)
++{
++	return dividend / divisor;
++}
++
++static inline void _emit(int cond, u32 inst, struct jit_ctx *ctx)
++{
++	if (ctx->target != NULL)
++		ctx->target[ctx->idx] = inst | (cond << 28);
++
++	ctx->idx++;
++}
++
++/*
++ * Emit an instruction that will be executed unconditionally.
++ */
++static inline void emit(u32 inst, struct jit_ctx *ctx)
++{
++	_emit(ARM_COND_AL, inst, ctx);
++}
++
++static u16 saved_regs(struct jit_ctx *ctx)
++{
++	u16 ret = 0;
++
++	if ((ctx->skf->len > 1) ||
++	    (ctx->skf->insns[0].code == BPF_S_RET_A))
++		ret |= 1 << r_A;
++
++#ifdef CONFIG_FRAME_POINTER
++	ret |= (1 << ARM_FP) | (1 << ARM_IP) | (1 << ARM_LR) | (1 << ARM_PC);
++#else
++	if (ctx->seen & SEEN_CALL)
++		ret |= 1 << ARM_LR;
++#endif
++	if (ctx->seen & (SEEN_DATA | SEEN_SKB))
++		ret |= 1 << r_skb;
++	if (ctx->seen & SEEN_DATA)
++		ret |= (1 << r_skb_data) | (1 << r_skb_hl);
++	if (ctx->seen & SEEN_X)
++		ret |= 1 << r_X;
++
++	return ret;
++}
++
++static inline int mem_words_used(struct jit_ctx *ctx)
++{
++	/* yes, we do waste some stack space IF there are "holes" in the set" */
++	return fls(ctx->seen & SEEN_MEM);
++}
++
++static inline bool is_load_to_a(u16 inst)
++{
++	switch (inst) {
++	case BPF_S_LD_W_LEN:
++	case BPF_S_LD_W_ABS:
++	case BPF_S_LD_H_ABS:
++	case BPF_S_LD_B_ABS:
++	case BPF_S_ANC_CPU:
++	case BPF_S_ANC_IFINDEX:
++	case BPF_S_ANC_MARK:
++	case BPF_S_ANC_PROTOCOL:
++	case BPF_S_ANC_RXHASH:
++	case BPF_S_ANC_QUEUE:
++		return true;
++	default:
++		return false;
++	}
++}
++
++static void build_prologue(struct jit_ctx *ctx)
++{
++	u16 reg_set = saved_regs(ctx);
++	u16 first_inst = ctx->skf->insns[0].code;
++	u16 off;
++
++#ifdef CONFIG_FRAME_POINTER
++	emit(ARM_MOV_R(ARM_IP, ARM_SP), ctx);
++	emit(ARM_PUSH(reg_set), ctx);
++	emit(ARM_SUB_I(ARM_FP, ARM_IP, 4), ctx);
++#else
++	if (reg_set)
++		emit(ARM_PUSH(reg_set), ctx);
++#endif
++
++	if (ctx->seen & (SEEN_DATA | SEEN_SKB))
++		emit(ARM_MOV_R(r_skb, ARM_R0), ctx);
++
++	if (ctx->seen & SEEN_DATA) {
++		off = offsetof(struct sk_buff, data);
++		emit(ARM_LDR_I(r_skb_data, r_skb, off), ctx);
++		/* headlen = len - data_len */
++		off = offsetof(struct sk_buff, len);
++		emit(ARM_LDR_I(r_skb_hl, r_skb, off), ctx);
++		off = offsetof(struct sk_buff, data_len);
++		emit(ARM_LDR_I(r_scratch, r_skb, off), ctx);
++		emit(ARM_SUB_R(r_skb_hl, r_skb_hl, r_scratch), ctx);
++	}
++
++	if (ctx->flags & FLAG_NEED_X_RESET)
++		emit(ARM_MOV_I(r_X, 0), ctx);
++
++	/* do not leak kernel data to userspace */
++	if ((first_inst != BPF_S_RET_K) && !(is_load_to_a(first_inst)))
++		emit(ARM_MOV_I(r_A, 0), ctx);
++
++	/* stack space for the BPF_MEM words */
++	if (ctx->seen & SEEN_MEM)
++		emit(ARM_SUB_I(ARM_SP, ARM_SP, mem_words_used(ctx) * 4), ctx);
++}
++
++static void build_epilogue(struct jit_ctx *ctx)
++{
++	u16 reg_set = saved_regs(ctx);
++
++	if (ctx->seen & SEEN_MEM)
++		emit(ARM_ADD_I(ARM_SP, ARM_SP, mem_words_used(ctx) * 4), ctx);
++
++	reg_set &= ~(1 << ARM_LR);
++
++#ifdef CONFIG_FRAME_POINTER
++	/* the first instruction of the prologue was: mov ip, sp */
++	reg_set &= ~(1 << ARM_IP);
++	reg_set |= (1 << ARM_SP);
++	emit(ARM_LDM(ARM_SP, reg_set), ctx);
++#else
++	if (reg_set) {
++		if (ctx->seen & SEEN_CALL)
++			reg_set |= 1 << ARM_PC;
++		emit(ARM_POP(reg_set), ctx);
++	}
++
++	if (!(ctx->seen & SEEN_CALL))
++		emit(ARM_BX(ARM_LR), ctx);
++#endif
++}
++
++static int16_t imm8m(u32 x)
++{
++	u32 rot;
++
++	for (rot = 0; rot < 16; rot++)
++		if ((x & ~ror32(0xff, 2 * rot)) == 0)
++			return rol32(x, 2 * rot) | (rot << 8);
++
++	return -1;
++}
++
++#if __LINUX_ARM_ARCH__ < 7
++
++static u16 imm_offset(u32 k, struct jit_ctx *ctx)
++{
++	unsigned i = 0, offset;
++	u16 imm;
++
++	/* on the "fake" run we just count them (duplicates included) */
++	if (ctx->target == NULL) {
++		ctx->imm_count++;
++		return 0;
++	}
++
++	while ((i < ctx->imm_count) && ctx->imms[i]) {
++		if (ctx->imms[i] == k)
++			break;
++		i++;
++	}
++
++	if (ctx->imms[i] == 0)
++		ctx->imms[i] = k;
++
++	/* constants go just after the epilogue */
++	offset =  ctx->offsets[ctx->skf->len];
++	offset += ctx->prologue_bytes;
++	offset += ctx->epilogue_bytes;
++	offset += i * 4;
++
++	ctx->target[offset / 4] = k;
++
++	/* PC in ARM mode == address of the instruction + 8 */
++	imm = offset - (8 + ctx->idx * 4);
++
++	return imm;
++}
++
++#endif /* __LINUX_ARM_ARCH__ */
++
++/*
++ * Move an immediate that's not an imm8m to a core register.
++ */
++static inline void emit_mov_i_no8m(int rd, u32 val, struct jit_ctx *ctx)
++{
++#if __LINUX_ARM_ARCH__ < 7
++	emit(ARM_LDR_I(rd, ARM_PC, imm_offset(val, ctx)), ctx);
++#else
++	emit(ARM_MOVW(rd, val & 0xffff), ctx);
++	if (val > 0xffff)
++		emit(ARM_MOVT(rd, val >> 16), ctx);
++#endif
++}
++
++static inline void emit_mov_i(int rd, u32 val, struct jit_ctx *ctx)
++{
++	int imm12 = imm8m(val);
++
++	if (imm12 >= 0)
++		emit(ARM_MOV_I(rd, imm12), ctx);
++	else
++		emit_mov_i_no8m(rd, val, ctx);
++}
++
++#if __LINUX_ARM_ARCH__ < 6
++
++static void emit_load_be32(u8 cond, u8 r_res, u8 r_addr, struct jit_ctx *ctx)
++{
++	_emit(cond, ARM_LDRB_I(ARM_R3, r_addr, 1), ctx);
++	_emit(cond, ARM_LDRB_I(ARM_R1, r_addr, 0), ctx);
++	_emit(cond, ARM_LDRB_I(ARM_R2, r_addr, 3), ctx);
++	_emit(cond, ARM_LSL_I(ARM_R3, ARM_R3, 16), ctx);
++	_emit(cond, ARM_LDRB_I(ARM_R0, r_addr, 2), ctx);
++	_emit(cond, ARM_ORR_S(ARM_R3, ARM_R3, ARM_R1, SRTYPE_LSL, 24), ctx);
++	_emit(cond, ARM_ORR_R(ARM_R3, ARM_R3, ARM_R2), ctx);
++	_emit(cond, ARM_ORR_S(r_res, ARM_R3, ARM_R0, SRTYPE_LSL, 8), ctx);
++}
++
++static void emit_load_be16(u8 cond, u8 r_res, u8 r_addr, struct jit_ctx *ctx)
++{
++	_emit(cond, ARM_LDRB_I(ARM_R1, r_addr, 0), ctx);
++	_emit(cond, ARM_LDRB_I(ARM_R2, r_addr, 1), ctx);
++	_emit(cond, ARM_ORR_S(r_res, ARM_R2, ARM_R1, SRTYPE_LSL, 8), ctx);
++}
++
++static inline void emit_swap16(u8 r_dst, u8 r_src, struct jit_ctx *ctx)
++{
++	emit(ARM_LSL_R(ARM_R1, r_src, 8), ctx);
++	emit(ARM_ORR_S(r_dst, ARM_R1, r_src, SRTYPE_LSL, 8), ctx);
++	emit(ARM_LSL_I(r_dst, r_dst, 8), ctx);
++	emit(ARM_LSL_R(r_dst, r_dst, 8), ctx);
++}
++
++#else  /* ARMv6+ */
++
++static void emit_load_be32(u8 cond, u8 r_res, u8 r_addr, struct jit_ctx *ctx)
++{
++	_emit(cond, ARM_LDR_I(r_res, r_addr, 0), ctx);
++#ifdef __LITTLE_ENDIAN
++	_emit(cond, ARM_REV(r_res, r_res), ctx);
++#endif
++}
++
++static void emit_load_be16(u8 cond, u8 r_res, u8 r_addr, struct jit_ctx *ctx)
++{
++	_emit(cond, ARM_LDRH_I(r_res, r_addr, 0), ctx);
++#ifdef __LITTLE_ENDIAN
++	_emit(cond, ARM_REV16(r_res, r_res), ctx);
++#endif
++}
++
++static inline void emit_swap16(u8 r_dst __maybe_unused,
++			       u8 r_src __maybe_unused,
++			       struct jit_ctx *ctx __maybe_unused)
++{
++#ifdef __LITTLE_ENDIAN
++	emit(ARM_REV16(r_dst, r_src), ctx);
++#endif
++}
++
++#endif /* __LINUX_ARM_ARCH__ < 6 */
++
++
++/* Compute the immediate value for a PC-relative branch. */
++static inline u32 b_imm(unsigned tgt, struct jit_ctx *ctx)
++{
++	u32 imm;
++
++	if (ctx->target == NULL)
++		return 0;
++	/*
++	 * BPF allows only forward jumps and the offset of the target is
++	 * still the one computed during the first pass.
++	 */
++	imm  = ctx->offsets[tgt] + ctx->prologue_bytes - (ctx->idx * 4 + 8);
++
++	return imm >> 2;
++}
++
++#define OP_IMM3(op, r1, r2, imm_val, ctx)				\
++	do {								\
++		imm12 = imm8m(imm_val);					\
++		if (imm12 < 0) {					\
++			emit_mov_i_no8m(r_scratch, imm_val, ctx);	\
++			emit(op ## _R((r1), (r2), r_scratch), ctx);	\
++		} else {						\
++			emit(op ## _I((r1), (r2), imm12), ctx);		\
++		}							\
++	} while (0)
++
++static inline void emit_err_ret(u8 cond, struct jit_ctx *ctx)
++{
++	if (ctx->ret0_fp_idx >= 0) {
++		_emit(cond, ARM_B(b_imm(ctx->ret0_fp_idx, ctx)), ctx);
++		/* NOP to keep the size constant between passes */
++		emit(ARM_MOV_R(ARM_R0, ARM_R0), ctx);
++	} else {
++		_emit(cond, ARM_MOV_I(ARM_R0, 0), ctx);
++		_emit(cond, ARM_B(b_imm(ctx->skf->len, ctx)), ctx);
++	}
++}
++
++static inline void emit_blx_r(u8 tgt_reg, struct jit_ctx *ctx)
++{
++#if __LINUX_ARM_ARCH__ < 5
++	emit(ARM_MOV_R(ARM_LR, ARM_PC), ctx);
++
++	if (elf_hwcap & HWCAP_THUMB)
++		emit(ARM_BX(tgt_reg), ctx);
++	else
++		emit(ARM_MOV_R(ARM_PC, tgt_reg), ctx);
++#else
++	emit(ARM_BLX_R(tgt_reg), ctx);
++#endif
++}
++
++static inline void emit_udiv(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx)
++{
++#if __LINUX_ARM_ARCH__ == 7
++	if (elf_hwcap & HWCAP_IDIVA) {
++		emit(ARM_UDIV(rd, rm, rn), ctx);
++		return;
++	}
++#endif
++	if (rm != ARM_R0)
++		emit(ARM_MOV_R(ARM_R0, rm), ctx);
++	if (rn != ARM_R1)
++		emit(ARM_MOV_R(ARM_R1, rn), ctx);
++
++	ctx->seen |= SEEN_CALL;
++	emit_mov_i(ARM_R3, (u32)jit_udiv, ctx);
++	emit_blx_r(ARM_R3, ctx);
++
++	if (rd != ARM_R0)
++		emit(ARM_MOV_R(rd, ARM_R0), ctx);
++}
++
++static inline void update_on_xread(struct jit_ctx *ctx)
++{
++	if (!(ctx->seen & SEEN_X))
++		ctx->flags |= FLAG_NEED_X_RESET;
++
++	ctx->seen |= SEEN_X;
++}
++
++static int build_body(struct jit_ctx *ctx)
++{
++	void *load_func[] = {jit_get_skb_b, jit_get_skb_h, jit_get_skb_w};
++	const struct sk_filter *prog = ctx->skf;
++	const struct sock_filter *inst;
++	unsigned i, load_order, off, condt;
++	int imm12;
++	u32 k;
++
++	for (i = 0; i < prog->len; i++) {
++		inst = &(prog->insns[i]);
++		/* K as an immediate value operand */
++		k = inst->k;
++
++		/* compute offsets only in the fake pass */
++		if (ctx->target == NULL)
++			ctx->offsets[i] = ctx->idx * 4;
++
++		switch (inst->code) {
++		case BPF_S_LD_IMM:
++			emit_mov_i(r_A, k, ctx);
++			break;
++		case BPF_S_LD_W_LEN:
++			ctx->seen |= SEEN_SKB;
++			BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, len) != 4);
++			emit(ARM_LDR_I(r_A, r_skb,
++				       offsetof(struct sk_buff, len)), ctx);
++			break;
++		case BPF_S_LD_MEM:
++			/* A = scratch[k] */
++			ctx->seen |= SEEN_MEM_WORD(k);
++			emit(ARM_LDR_I(r_A, ARM_SP, SCRATCH_OFF(k)), ctx);
++			break;
++		case BPF_S_LD_W_ABS:
++			load_order = 2;
++			goto load;
++		case BPF_S_LD_H_ABS:
++			load_order = 1;
++			goto load;
++		case BPF_S_LD_B_ABS:
++			load_order = 0;
++load:
++			/* the interpreter will deal with the negative K */
++			if ((int)k < 0)
++				return -ENOTSUPP;
++			emit_mov_i(r_off, k, ctx);
++load_common:
++			ctx->seen |= SEEN_DATA | SEEN_CALL;
++
++			if (load_order > 0) {
++				emit(ARM_SUB_I(r_scratch, r_skb_hl,
++					       1 << load_order), ctx);
++				emit(ARM_CMP_R(r_scratch, r_off), ctx);
++				condt = ARM_COND_HS;
++			} else {
++				emit(ARM_CMP_R(r_skb_hl, r_off), ctx);
++				condt = ARM_COND_HI;
++			}
++
++			_emit(condt, ARM_ADD_R(r_scratch, r_off, r_skb_data),
++			      ctx);
++
++			if (load_order == 0)
++				_emit(condt, ARM_LDRB_I(r_A, r_scratch, 0),
++				      ctx);
++			else if (load_order == 1)
++				emit_load_be16(condt, r_A, r_scratch, ctx);
++			else if (load_order == 2)
++				emit_load_be32(condt, r_A, r_scratch, ctx);
++
++			_emit(condt, ARM_B(b_imm(i + 1, ctx)), ctx);
++
++			/* the slowpath */
++			emit_mov_i(ARM_R3, (u32)load_func[load_order], ctx);
++			emit(ARM_MOV_R(ARM_R0, r_skb), ctx);
++			/* the offset is already in R1 */
++			emit_blx_r(ARM_R3, ctx);
++			/* check the result of skb_copy_bits */
++			emit(ARM_CMP_I(ARM_R1, 0), ctx);
++			emit_err_ret(ARM_COND_NE, ctx);
++			emit(ARM_MOV_R(r_A, ARM_R0), ctx);
++			break;
++		case BPF_S_LD_W_IND:
++			load_order = 2;
++			goto load_ind;
++		case BPF_S_LD_H_IND:
++			load_order = 1;
++			goto load_ind;
++		case BPF_S_LD_B_IND:
++			load_order = 0;
++load_ind:
++			OP_IMM3(ARM_ADD, r_off, r_X, k, ctx);
++			goto load_common;
++		case BPF_S_LDX_IMM:
++			ctx->seen |= SEEN_X;
++			emit_mov_i(r_X, k, ctx);
++			break;
++		case BPF_S_LDX_W_LEN:
++			ctx->seen |= SEEN_X | SEEN_SKB;
++			emit(ARM_LDR_I(r_X, r_skb,
++				       offsetof(struct sk_buff, len)), ctx);
++			break;
++		case BPF_S_LDX_MEM:
++			ctx->seen |= SEEN_X | SEEN_MEM_WORD(k);
++			emit(ARM_LDR_I(r_X, ARM_SP, SCRATCH_OFF(k)), ctx);
++			break;
++		case BPF_S_LDX_B_MSH:
++			/* x = ((*(frame + k)) & 0xf) << 2; */
++			ctx->seen |= SEEN_X | SEEN_DATA | SEEN_CALL;
++			/* the interpreter should deal with the negative K */
++			if (k < 0)
++				return -1;
++			/* offset in r1: we might have to take the slow path */
++			emit_mov_i(r_off, k, ctx);
++			emit(ARM_CMP_R(r_skb_hl, r_off), ctx);
++
++			/* load in r0: common with the slowpath */
++			_emit(ARM_COND_HI, ARM_LDRB_R(ARM_R0, r_skb_data,
++						      ARM_R1), ctx);
++			/*
++			 * emit_mov_i() might generate one or two instructions,
++			 * the same holds for emit_blx_r()
++			 */
++			_emit(ARM_COND_HI, ARM_B(b_imm(i + 1, ctx) - 2), ctx);
++
++			emit(ARM_MOV_R(ARM_R0, r_skb), ctx);
++			/* r_off is r1 */
++			emit_mov_i(ARM_R3, (u32)jit_get_skb_b, ctx);
++			emit_blx_r(ARM_R3, ctx);
++			/* check the return value of skb_copy_bits */
++			emit(ARM_CMP_I(ARM_R1, 0), ctx);
++			emit_err_ret(ARM_COND_NE, ctx);
++
++			emit(ARM_AND_I(r_X, ARM_R0, 0x00f), ctx);
++			emit(ARM_LSL_I(r_X, r_X, 2), ctx);
++			break;
++		case BPF_S_ST:
++			ctx->seen |= SEEN_MEM_WORD(k);
++			emit(ARM_STR_I(r_A, ARM_SP, SCRATCH_OFF(k)), ctx);
++			break;
++		case BPF_S_STX:
++			update_on_xread(ctx);
++			ctx->seen |= SEEN_MEM_WORD(k);
++			emit(ARM_STR_I(r_X, ARM_SP, SCRATCH_OFF(k)), ctx);
++			break;
++		case BPF_S_ALU_ADD_K:
++			/* A += K */
++			OP_IMM3(ARM_ADD, r_A, r_A, k, ctx);
++			break;
++		case BPF_S_ALU_ADD_X:
++			update_on_xread(ctx);
++			emit(ARM_ADD_R(r_A, r_A, r_X), ctx);
++			break;
++		case BPF_S_ALU_SUB_K:
++			/* A -= K */
++			OP_IMM3(ARM_SUB, r_A, r_A, k, ctx);
++			break;
++		case BPF_S_ALU_SUB_X:
++			update_on_xread(ctx);
++			emit(ARM_SUB_R(r_A, r_A, r_X), ctx);
++			break;
++		case BPF_S_ALU_MUL_K:
++			/* A *= K */
++			emit_mov_i(r_scratch, k, ctx);
++			emit(ARM_MUL(r_A, r_A, r_scratch), ctx);
++			break;
++		case BPF_S_ALU_MUL_X:
++			update_on_xread(ctx);
++			emit(ARM_MUL(r_A, r_A, r_X), ctx);
++			break;
++		case BPF_S_ALU_DIV_K:
++			/* current k == reciprocal_value(userspace k) */
++			emit_mov_i(r_scratch, k, ctx);
++			/* A = top 32 bits of the product */
++			emit(ARM_UMULL(r_scratch, r_A, r_A, r_scratch), ctx);
++			break;
++		case BPF_S_ALU_DIV_X:
++			update_on_xread(ctx);
++			emit(ARM_CMP_I(r_X, 0), ctx);
++			emit_err_ret(ARM_COND_EQ, ctx);
++			emit_udiv(r_A, r_A, r_X, ctx);
++			break;
++		case BPF_S_ALU_OR_K:
++			/* A |= K */
++			OP_IMM3(ARM_ORR, r_A, r_A, k, ctx);
++			break;
++		case BPF_S_ALU_OR_X:
++			update_on_xread(ctx);
++			emit(ARM_ORR_R(r_A, r_A, r_X), ctx);
++			break;
++		case BPF_S_ALU_AND_K:
++			/* A &= K */
++			OP_IMM3(ARM_AND, r_A, r_A, k, ctx);
++			break;
++		case BPF_S_ALU_AND_X:
++			update_on_xread(ctx);
++			emit(ARM_AND_R(r_A, r_A, r_X), ctx);
++			break;
++		case BPF_S_ALU_LSH_K:
++			if (unlikely(k > 31))
++				return -1;
++			emit(ARM_LSL_I(r_A, r_A, k), ctx);
++			break;
++		case BPF_S_ALU_LSH_X:
++			update_on_xread(ctx);
++			emit(ARM_LSL_R(r_A, r_A, r_X), ctx);
++			break;
++		case BPF_S_ALU_RSH_K:
++			if (unlikely(k > 31))
++				return -1;
++			emit(ARM_LSR_I(r_A, r_A, k), ctx);
++			break;
++		case BPF_S_ALU_RSH_X:
++			update_on_xread(ctx);
++			emit(ARM_LSR_R(r_A, r_A, r_X), ctx);
++			break;
++		case BPF_S_ALU_NEG:
++			/* A = -A */
++			emit(ARM_RSB_I(r_A, r_A, 0), ctx);
++			break;
++		case BPF_S_JMP_JA:
++			/* pc += K */
++			emit(ARM_B(b_imm(i + k + 1, ctx)), ctx);
++			break;
++		case BPF_S_JMP_JEQ_K:
++			/* pc += (A == K) ? pc->jt : pc->jf */
++			condt  = ARM_COND_EQ;
++			goto cmp_imm;
++		case BPF_S_JMP_JGT_K:
++			/* pc += (A > K) ? pc->jt : pc->jf */
++			condt  = ARM_COND_HI;
++			goto cmp_imm;
++		case BPF_S_JMP_JGE_K:
++			/* pc += (A >= K) ? pc->jt : pc->jf */
++			condt  = ARM_COND_HS;
++cmp_imm:
++			imm12 = imm8m(k);
++			if (imm12 < 0) {
++				emit_mov_i_no8m(r_scratch, k, ctx);
++				emit(ARM_CMP_R(r_A, r_scratch), ctx);
++			} else {
++				emit(ARM_CMP_I(r_A, imm12), ctx);
++			}
++cond_jump:
++			if (inst->jt)
++				_emit(condt, ARM_B(b_imm(i + inst->jt + 1,
++						   ctx)), ctx);
++			if (inst->jf)
++				_emit(condt ^ 1, ARM_B(b_imm(i + inst->jf + 1,
++							     ctx)), ctx);
++			break;
++		case BPF_S_JMP_JEQ_X:
++			/* pc += (A == X) ? pc->jt : pc->jf */
++			condt   = ARM_COND_EQ;
++			goto cmp_x;
++		case BPF_S_JMP_JGT_X:
++			/* pc += (A > X) ? pc->jt : pc->jf */
++			condt   = ARM_COND_HI;
++			goto cmp_x;
++		case BPF_S_JMP_JGE_X:
++			/* pc += (A >= X) ? pc->jt : pc->jf */
++			condt   = ARM_COND_CS;
++cmp_x:
++			update_on_xread(ctx);
++			emit(ARM_CMP_R(r_A, r_X), ctx);
++			goto cond_jump;
++		case BPF_S_JMP_JSET_K:
++			/* pc += (A & K) ? pc->jt : pc->jf */
++			condt  = ARM_COND_NE;
++			/* not set iff all zeroes iff Z==1 iff EQ */
++
++			imm12 = imm8m(k);
++			if (imm12 < 0) {
++				emit_mov_i_no8m(r_scratch, k, ctx);
++				emit(ARM_TST_R(r_A, r_scratch), ctx);
++			} else {
++				emit(ARM_TST_I(r_A, imm12), ctx);
++			}
++			goto cond_jump;
++		case BPF_S_JMP_JSET_X:
++			/* pc += (A & X) ? pc->jt : pc->jf */
++			update_on_xread(ctx);
++			condt  = ARM_COND_NE;
++			emit(ARM_TST_R(r_A, r_X), ctx);
++			goto cond_jump;
++		case BPF_S_RET_A:
++			emit(ARM_MOV_R(ARM_R0, r_A), ctx);
++			goto b_epilogue;
++		case BPF_S_RET_K:
++			if ((k == 0) && (ctx->ret0_fp_idx < 0))
++				ctx->ret0_fp_idx = i;
++			emit_mov_i(ARM_R0, k, ctx);
++b_epilogue:
++			if (i != ctx->skf->len - 1)
++				emit(ARM_B(b_imm(prog->len, ctx)), ctx);
++			break;
++		case BPF_S_MISC_TAX:
++			/* X = A */
++			ctx->seen |= SEEN_X;
++			emit(ARM_MOV_R(r_X, r_A), ctx);
++			break;
++		case BPF_S_MISC_TXA:
++			/* A = X */
++			update_on_xread(ctx);
++			emit(ARM_MOV_R(r_A, r_X), ctx);
++			break;
++		case BPF_S_ANC_PROTOCOL:
++			/* A = ntohs(skb->protocol) */
++			ctx->seen |= SEEN_SKB;
++			BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff,
++						  protocol) != 2);
++			off = offsetof(struct sk_buff, protocol);
++			emit(ARM_LDRH_I(r_scratch, r_skb, off), ctx);
++			emit_swap16(r_A, r_scratch, ctx);
++			break;
++		case BPF_S_ANC_CPU:
++			/* r_scratch = current_thread_info() */
++			OP_IMM3(ARM_BIC, r_scratch, ARM_SP, THREAD_SIZE - 1, ctx);
++			/* A = current_thread_info()->cpu */
++			BUILD_BUG_ON(FIELD_SIZEOF(struct thread_info, cpu) != 4);
++			off = offsetof(struct thread_info, cpu);
++			emit(ARM_LDR_I(r_A, r_scratch, off), ctx);
++			break;
++		case BPF_S_ANC_IFINDEX:
++			/* A = skb->dev->ifindex */
++			ctx->seen |= SEEN_SKB;
++			off = offsetof(struct sk_buff, dev);
++			emit(ARM_LDR_I(r_scratch, r_skb, off), ctx);
++
++			emit(ARM_CMP_I(r_scratch, 0), ctx);
++			emit_err_ret(ARM_COND_EQ, ctx);
++
++			BUILD_BUG_ON(FIELD_SIZEOF(struct net_device,
++						  ifindex) != 4);
++			off = offsetof(struct net_device, ifindex);
++			emit(ARM_LDR_I(r_A, r_scratch, off), ctx);
++			break;
++		case BPF_S_ANC_MARK:
++			ctx->seen |= SEEN_SKB;
++			BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, mark) != 4);
++			off = offsetof(struct sk_buff, mark);
++			emit(ARM_LDR_I(r_A, r_skb, off), ctx);
++			break;
++		case BPF_S_ANC_RXHASH:
++			ctx->seen |= SEEN_SKB;
++			BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, rxhash) != 4);
++			off = offsetof(struct sk_buff, rxhash);
++			emit(ARM_LDR_I(r_A, r_skb, off), ctx);
++			break;
++		case BPF_S_ANC_QUEUE:
++			ctx->seen |= SEEN_SKB;
++			BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff,
++						  queue_mapping) != 2);
++			BUILD_BUG_ON(offsetof(struct sk_buff,
++					      queue_mapping) > 0xff);
++			off = offsetof(struct sk_buff, queue_mapping);
++			emit(ARM_LDRH_I(r_A, r_skb, off), ctx);
++			break;
++		default:
++			return -1;
++		}
++	}
++
++	/* compute offsets only during the first pass */
++	if (ctx->target == NULL)
++		ctx->offsets[i] = ctx->idx * 4;
++
++	return 0;
++}
++
++
++void bpf_jit_compile(struct sk_filter *fp)
++{
++	struct jit_ctx ctx;
++	unsigned tmp_idx;
++	unsigned alloc_size;
++
++	if (!bpf_jit_enable)
++		return;
++
++	memset(&ctx, 0, sizeof(ctx));
++	ctx.skf		= fp;
++	ctx.ret0_fp_idx = -1;
++
++	ctx.offsets = kzalloc(4 * (ctx.skf->len + 1), GFP_KERNEL);
++	if (ctx.offsets == NULL)
++		return;
++
++	/* fake pass to fill in the ctx->seen */
++	if (unlikely(build_body(&ctx)))
++		goto out;
++
++	tmp_idx = ctx.idx;
++	build_prologue(&ctx);
++	ctx.prologue_bytes = (ctx.idx - tmp_idx) * 4;
++
++#if __LINUX_ARM_ARCH__ < 7
++	tmp_idx = ctx.idx;
++	build_epilogue(&ctx);
++	ctx.epilogue_bytes = (ctx.idx - tmp_idx) * 4;
++
++	ctx.idx += ctx.imm_count;
++	if (ctx.imm_count) {
++		ctx.imms = kzalloc(4 * ctx.imm_count, GFP_KERNEL);
++		if (ctx.imms == NULL)
++			goto out;
++	}
++#else
++	/* there's nothing after the epilogue on ARMv7 */
++	build_epilogue(&ctx);
++#endif
++
++	alloc_size = 4 * ctx.idx;
++	ctx.target = module_alloc(max(sizeof(struct work_struct),
++				      alloc_size));
++	if (unlikely(ctx.target == NULL))
++		goto out;
++
++	ctx.idx = 0;
++	build_prologue(&ctx);
++	build_body(&ctx);
++	build_epilogue(&ctx);
++
++	flush_icache_range((u32)ctx.target, (u32)(ctx.target + ctx.idx));
++
++#if __LINUX_ARM_ARCH__ < 7
++	if (ctx.imm_count)
++		kfree(ctx.imms);
++#endif
++
++	if (bpf_jit_enable > 1)
++		print_hex_dump(KERN_INFO, "BPF JIT code: ",
++			       DUMP_PREFIX_ADDRESS, 16, 4, ctx.target,
++			       alloc_size, false);
++
++	fp->bpf_func = (void *)ctx.target;
++out:
++	kfree(ctx.offsets);
++	return;
++}
++
++static void bpf_jit_free_worker(struct work_struct *work)
++{
++	module_free(NULL, work);
++}
++
++void bpf_jit_free(struct sk_filter *fp)
++{
++	struct work_struct *work;
++
++	if (fp->bpf_func != sk_run_filter) {
++		work = (struct work_struct *)fp->bpf_func;
++
++		INIT_WORK(work, bpf_jit_free_worker);
++		schedule_work(work);
++	}
++}
+Index: linux/arch/arm/net/bpf_jit_32.h
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ linux/arch/arm/net/bpf_jit_32.h	2012-06-24 23:38:52.000000000 +0200
+@@ -0,0 +1,190 @@
++/*
++ * Just-In-Time compiler for BPF filters on 32bit ARM
++ *
++ * Copyright (c) 2011 Mircea Gherzan <mgherzan at gmail.com>
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License as published by the
++ * Free Software Foundation; version 2 of the License.
++ */
++
++#ifndef PFILTER_OPCODES_ARM_H
++#define PFILTER_OPCODES_ARM_H
++
++#define ARM_R0	0
++#define ARM_R1	1
++#define ARM_R2	2
++#define ARM_R3	3
++#define ARM_R4	4
++#define ARM_R5	5
++#define ARM_R6	6
++#define ARM_R7	7
++#define ARM_R8	8
++#define ARM_R9	9
++#define ARM_R10	10
++#define ARM_FP	11
++#define ARM_IP	12
++#define ARM_SP	13
++#define ARM_LR	14
++#define ARM_PC	15
++
++#define ARM_COND_EQ		0x0
++#define ARM_COND_NE		0x1
++#define ARM_COND_CS		0x2
++#define ARM_COND_HS		ARM_COND_CS
++#define ARM_COND_CC		0x3
++#define ARM_COND_LO		ARM_COND_CC
++#define ARM_COND_MI		0x4
++#define ARM_COND_PL		0x5
++#define ARM_COND_VS		0x6
++#define ARM_COND_VC		0x7
++#define ARM_COND_HI		0x8
++#define ARM_COND_LS		0x9
++#define ARM_COND_GE		0xa
++#define ARM_COND_LT		0xb
++#define ARM_COND_GT		0xc
++#define ARM_COND_LE		0xd
++#define ARM_COND_AL		0xe
++
++/* register shift types */
++#define SRTYPE_LSL		0
++#define SRTYPE_LSR		1
++#define SRTYPE_ASR		2
++#define SRTYPE_ROR		3
++
++#define ARM_INST_ADD_R		0x00800000
++#define ARM_INST_ADD_I		0x02800000
++
++#define ARM_INST_AND_R		0x00000000
++#define ARM_INST_AND_I		0x02000000
++
++#define ARM_INST_BIC_R		0x01c00000
++#define ARM_INST_BIC_I		0x03c00000
++
++#define ARM_INST_B		0x0a000000
++#define ARM_INST_BX		0x012FFF10
++#define ARM_INST_BLX_R		0x012fff30
++
++#define ARM_INST_CMP_R		0x01500000
++#define ARM_INST_CMP_I		0x03500000
++
++#define ARM_INST_LDRB_I		0x05d00000
++#define ARM_INST_LDRB_R		0x07d00000
++#define ARM_INST_LDRH_I		0x01d000b0
++#define ARM_INST_LDR_I		0x05900000
++
++#define ARM_INST_LDM		0x08900000
++
++#define ARM_INST_LSL_I		0x01a00000
++#define ARM_INST_LSL_R		0x01a00010
++
++#define ARM_INST_LSR_I		0x01a00020
++#define ARM_INST_LSR_R		0x01a00030
++
++#define ARM_INST_MOV_R		0x01a00000
++#define ARM_INST_MOV_I		0x03a00000
++#define ARM_INST_MOVW		0x03000000
++#define ARM_INST_MOVT		0x03400000
++
++#define ARM_INST_MUL		0x00000090
++
++#define ARM_INST_POP		0x08bd0000
++#define ARM_INST_PUSH		0x092d0000
++
++#define ARM_INST_ORR_R		0x01800000
++#define ARM_INST_ORR_I		0x03800000
++
++#define ARM_INST_REV		0x06bf0f30
++#define ARM_INST_REV16		0x06bf0fb0
++
++#define ARM_INST_RSB_I		0x02600000
++
++#define ARM_INST_SUB_R		0x00400000
++#define ARM_INST_SUB_I		0x02400000
++
++#define ARM_INST_STR_I		0x05800000
++
++#define ARM_INST_TST_R		0x01100000
++#define ARM_INST_TST_I		0x03100000
++
++#define ARM_INST_UDIV		0x0730f010
++
++#define ARM_INST_UMULL		0x00800090
++
++/* register */
++#define _AL3_R(op, rd, rn, rm)	((op ## _R) | (rd) << 12 | (rn) << 16 | (rm))
++/* immediate */
++#define _AL3_I(op, rd, rn, imm)	((op ## _I) | (rd) << 12 | (rn) << 16 | (imm))
++
++#define ARM_ADD_R(rd, rn, rm)	_AL3_R(ARM_INST_ADD, rd, rn, rm)
++#define ARM_ADD_I(rd, rn, imm)	_AL3_I(ARM_INST_ADD, rd, rn, imm)
++
++#define ARM_AND_R(rd, rn, rm)	_AL3_R(ARM_INST_AND, rd, rn, rm)
++#define ARM_AND_I(rd, rn, imm)	_AL3_I(ARM_INST_AND, rd, rn, imm)
++
++#define ARM_BIC_R(rd, rn, rm)	_AL3_R(ARM_INST_BIC, rd, rn, rm)
++#define ARM_BIC_I(rd, rn, imm)	_AL3_I(ARM_INST_BIC, rd, rn, imm)
++
++#define ARM_B(imm24)		(ARM_INST_B | ((imm24) & 0xffffff))
++#define ARM_BX(rm)		(ARM_INST_BX | (rm))
++#define ARM_BLX_R(rm)		(ARM_INST_BLX_R | (rm))
++
++#define ARM_CMP_R(rn, rm)	_AL3_R(ARM_INST_CMP, 0, rn, rm)
++#define ARM_CMP_I(rn, imm)	_AL3_I(ARM_INST_CMP, 0, rn, imm)
++
++#define ARM_LDR_I(rt, rn, off)	(ARM_INST_LDR_I | (rt) << 12 | (rn) << 16 \
++				 | (off))
++#define ARM_LDRB_I(rt, rn, off)	(ARM_INST_LDRB_I | (rt) << 12 | (rn) << 16 \
++				 | (off))
++#define ARM_LDRB_R(rt, rn, rm)	(ARM_INST_LDRB_R | (rt) << 12 | (rn) << 16 \
++				 | (rm))
++#define ARM_LDRH_I(rt, rn, off)	(ARM_INST_LDRH_I | (rt) << 12 | (rn) << 16 \
++				 | (((off) & 0xf0) << 4) | ((off) & 0xf))
++
++#define ARM_LDM(rn, regs)	(ARM_INST_LDM | (rn) << 16 | (regs))
++
++#define ARM_LSL_R(rd, rn, rm)	(_AL3_R(ARM_INST_LSL, rd, 0, rn) | (rm) << 8)
++#define ARM_LSL_I(rd, rn, imm)	(_AL3_I(ARM_INST_LSL, rd, 0, rn) | (imm) << 7)
++
++#define ARM_LSR_R(rd, rn, rm)	(_AL3_R(ARM_INST_LSR, rd, 0, rn) | (rm) << 8)
++#define ARM_LSR_I(rd, rn, imm)	(_AL3_I(ARM_INST_LSR, rd, 0, rn) | (imm) << 7)
++
++#define ARM_MOV_R(rd, rm)	_AL3_R(ARM_INST_MOV, rd, 0, rm)
++#define ARM_MOV_I(rd, imm)	_AL3_I(ARM_INST_MOV, rd, 0, imm)
++
++#define ARM_MOVW(rd, imm)	\
++	(ARM_INST_MOVW | ((imm) >> 12) << 16 | (rd) << 12 | ((imm) & 0x0fff))
++
++#define ARM_MOVT(rd, imm)	\
++	(ARM_INST_MOVT | ((imm) >> 12) << 16 | (rd) << 12 | ((imm) & 0x0fff))
++
++#define ARM_MUL(rd, rm, rn)	(ARM_INST_MUL | (rd) << 16 | (rm) << 8 | (rn))
++
++#define ARM_POP(regs)		(ARM_INST_POP | (regs))
++#define ARM_PUSH(regs)		(ARM_INST_PUSH | (regs))
++
++#define ARM_ORR_R(rd, rn, rm)	_AL3_R(ARM_INST_ORR, rd, rn, rm)
++#define ARM_ORR_I(rd, rn, imm)	_AL3_I(ARM_INST_ORR, rd, rn, imm)
++#define ARM_ORR_S(rd, rn, rm, type, rs)	\
++	(ARM_ORR_R(rd, rn, rm) | (type) << 5 | (rs) << 7)
++
++#define ARM_REV(rd, rm)		(ARM_INST_REV | (rd) << 12 | (rm))
++#define ARM_REV16(rd, rm)	(ARM_INST_REV16 | (rd) << 12 | (rm))
++
++#define ARM_RSB_I(rd, rn, imm)	_AL3_I(ARM_INST_RSB, rd, rn, imm)
++
++#define ARM_SUB_R(rd, rn, rm)	_AL3_R(ARM_INST_SUB, rd, rn, rm)
++#define ARM_SUB_I(rd, rn, imm)	_AL3_I(ARM_INST_SUB, rd, rn, imm)
++
++#define ARM_STR_I(rt, rn, off)	(ARM_INST_STR_I | (rt) << 12 | (rn) << 16 \
++				 | (off))
++
++#define ARM_TST_R(rn, rm)	_AL3_R(ARM_INST_TST, 0, rn, rm)
++#define ARM_TST_I(rn, imm)	_AL3_I(ARM_INST_TST, 0, rn, imm)
++
++#define ARM_UDIV(rd, rn, rm)	(ARM_INST_UDIV | (rd) << 16 | (rn) | (rm) << 8)
++
++#define ARM_UMULL(rd_lo, rd_hi, rn, rm)	(ARM_INST_UMULL | (rd_hi) << 16 \
++					 | (rd_lo) << 12 | (rm) << 8 | rn)
++
++#endif /* PFILTER_OPCODES_ARM_H */

Copied: dists/squeeze-backports/linux/debian/patches/features/arm/ARM-fix-Kconfig-warning-for-HAVE_BPF_JIT.patch (from r19226, dists/sid/linux/debian/patches/features/arm/ARM-fix-Kconfig-warning-for-HAVE_BPF_JIT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/ARM-fix-Kconfig-warning-for-HAVE_BPF_JIT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/ARM-fix-Kconfig-warning-for-HAVE_BPF_JIT.patch)
@@ -0,0 +1,29 @@
+From fada8dcf2d085f4e2eb1ba760c8d37111977dbec Mon Sep 17 00:00:00 2001
+From: Russell King <rmk+kernel at arm.linux.org.uk>
+Date: Tue, 27 Mar 2012 10:44:23 +0100
+Subject: [PATCH] ARM: fix Kconfig warning for HAVE_BPF_JIT
+
+Last night's randconfig and the allnoconfig builds spat out the
+following warning while building:
+
+warning: (ARM) selects HAVE_BPF_JIT which has unmet direct dependencies (NET)
+
+Acked-by: Mircea Gherzan <mgherzan at gmail.com>
+Signed-off-by: Russell King <rmk+kernel at arm.linux.org.uk>
+---
+ arch/arm/Kconfig |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+Index: linux/arch/arm/Kconfig
+===================================================================
+--- linux.orig/arch/arm/Kconfig	2012-06-24 23:38:52.000000000 +0200
++++ linux/arch/arm/Kconfig	2012-06-24 23:41:24.000000000 +0200
+@@ -30,7 +30,7 @@
+ 	select HAVE_SPARSE_IRQ
+ 	select GENERIC_IRQ_SHOW
+ 	select CPU_PM if (SUSPEND || CPU_IDLE)
+-	select HAVE_BPF_JIT
++	select HAVE_BPF_JIT if NET
+ 	help
+ 	  The ARM series is a line of low-power-consumption RISC chip designs
+ 	  licensed by ARM Ltd and targeted at embedded applications and

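The one-line fix above uses the standard Kconfig idiom for selecting a symbol whose dependencies the selecting symbol does not itself share: `select` ignores the target's `depends on`, so the guard must be repeated on the `select` line. A minimal sketch of the situation (symbol names other than `NET` and `HAVE_BPF_JIT` are illustrative):

```kconfig
config NET
	bool "Networking support"

config HAVE_BPF_JIT
	bool
	depends on NET

config EXAMPLE_ARCH
	bool "Example architecture symbol"
	# An unconditional "select HAVE_BPF_JIT" here reproduces:
	#   warning: (EXAMPLE_ARCH) selects HAVE_BPF_JIT which has unmet
	#   direct dependencies (NET)
	# Repeating the dependency on the select keeps it satisfied:
	select HAVE_BPF_JIT if NET
```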
Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-add-configuration-for-mpp12-as-gpio.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-add-configuration-for-mpp12-as-gpio.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-add-configuration-for-mpp12-as-gpio.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-add-configuration-for-mpp12-as-gpio.patch)
@@ -0,0 +1,27 @@
+commit 527ef0550d79e3b3a0ef8f5061072075afef6aaf
+Author: Arnaud Patard <arnaud.patard at rtp-net.org>
+Date:   Thu Dec 1 11:58:25 2011 +0100
+
+    ARM: Kirkwood: Add configuration for MPP12 as GPIO
+    
+    The MPP12 is listed in the 6281 HW manual as output only but the iconnect
+    board from iomega is using it as GPIO (there's a button connected on it). So,
+    I'm adding a definition for the MPP12 as GPIO. As I've no informations about
+    this and which kirkwood are "affected", I'm adding a new #define instead of
+    modifying the current one for MPP12.
+    
+    Signed-off-by: Arnaud Patard <arnaud.patard at rtp-net.org>
+    Signed-off-by: Nicolas Pitre <nico at fluxnic.net>
+
+Index: sid/arch/arm/mach-kirkwood/mpp.h
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/mpp.h	2012-05-31 01:44:12.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/mpp.h	2012-06-10 10:18:11.502678583 +0200
+@@ -102,6 +102,7 @@
+ #define MPP11_SATA0_ACTn	MPP( 11, 0x5, 0, 0, 0,   1,   1,   1,   1 )
+ 
+ #define MPP12_GPO		MPP( 12, 0x0, 0, 1, 1,   1,   1,   1,   1 )
++#define MPP12_GPIO		MPP( 12, 0x0, 1, 1, 0,   0,   0,   1,   0 )
+ #define MPP12_SD_CLK		MPP( 12, 0x1, 0, 0, 1,   1,   1,   1,   1 )
+ #define MPP12_AU_SPDIF0		MPP( 12, 0xa, 0, 0, 0,   0,   0,   0,   1 )
+ #define MPP12_SPI_MOSI		MPP( 12, 0xb, 0, 0, 0,   0,   0,   0,   1 )

Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-add-dreamplug-fdt-support.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-add-dreamplug-fdt-support.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-add-dreamplug-fdt-support.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-add-dreamplug-fdt-support.patch)
@@ -0,0 +1,291 @@
+commit 3d468b6d6052293ad3b8538b8277077981c28286
+Author: Jason Cooper <jason at lakedaemon.net>
+Date:   Mon Feb 27 16:07:13 2012 +0000
+
+    ARM: kirkwood: add dreamplug (fdt) support.
+    
+    Initially, copied guruplug-setup.c and did s/guruplug/dreamplug/g.
+    Then, switched to SPI based NOR flash.
+    
+    After talking to Arnd Bergmann, I chose an incremental approach to adding
+    devicetree support.  First, we use the dtb to tell us we are on the
+    dreamplug, then we gradually port over drivers.
+    
+    Driver porting will start with the uart (see next patch), and progress
+    from there.  Possibly, spi/flash/partitions will be next.
+    
+    When done, board-dt.c will no longer be dreamplug specific, and dt's can
+    be made for the other kirkwood boards.
+    
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+    Reviewed-by: Arnd Bergmann <arnd at arndb.de>
+    Acked-by: Nicolas Pitre <nico at linaro.org>
+    Signed-off-by: Arnd Bergmann <arnd at arndb.de>
+
+diff --git a/arch/arm/boot/dts/kirkwood-dreamplug.dts b/arch/arm/boot/dts/kirkwood-dreamplug.dts
+new file mode 100644
+index 0000000..0424d99
+--- /dev/null
++++ b/arch/arm/boot/dts/kirkwood-dreamplug.dts
+@@ -0,0 +1,18 @@
++/dts-v1/;
++
++/include/ "kirkwood.dtsi"
++
++/ {
++	model = "Globalscale Technologies Dreamplug";
++	compatible = "globalscale,dreamplug-003-ds2001", "globalscale,dreamplug", "marvell,kirkwood-88f6281", "marvell,kirkwood";
++
++	memory {
++		device_type = "memory";
++		reg = <0x00000000 0x20000000>;
++	};
++
++	chosen {
++		bootargs = "console=ttyS0,115200n8 earlyprintk";
++	};
++
++};
+diff --git a/arch/arm/boot/dts/kirkwood.dtsi b/arch/arm/boot/dts/kirkwood.dtsi
+new file mode 100644
+index 0000000..771c6bb
+--- /dev/null
++++ b/arch/arm/boot/dts/kirkwood.dtsi
+@@ -0,0 +1,6 @@
++/include/ "skeleton.dtsi"
++
++/ {
++	compatible = "marvell,kirkwood";
++};
++
+diff --git a/arch/arm/mach-kirkwood/Kconfig b/arch/arm/mach-kirkwood/Kconfig
+index 7fc603b..90ceab7 100644
+--- a/arch/arm/mach-kirkwood/Kconfig
++++ b/arch/arm/mach-kirkwood/Kconfig
+@@ -44,6 +44,20 @@ config MACH_GURUPLUG
+ 	  Say 'Y' here if you want your kernel to support the
+ 	  Marvell GuruPlug Reference Board.
+ 
++config ARCH_KIRKWOOD_DT
++	bool "Marvell Kirkwood Flattened Device Tree"
++	select USE_OF
++	help
++	  Say 'Y' here if you want your kernel to support the
++	  Marvell Kirkwood using flattened device tree.
++
++config MACH_DREAMPLUG_DT
++	bool "Marvell DreamPlug (Flattened Device Tree)"
++	select ARCH_KIRKWOOD_DT
++	help
++	  Say 'Y' here if you want your kernel to support the
++	  Marvell DreamPlug (Flattened Device Tree).
++
+ config MACH_TS219
+ 	bool "QNAP TS-110, TS-119, TS-119P+, TS-210, TS-219, TS-219P and TS-219P+ Turbo NAS"
+ 	help
+diff --git a/arch/arm/mach-kirkwood/Makefile b/arch/arm/mach-kirkwood/Makefile
+index 5dcaa81..acbc5e1 100644
+--- a/arch/arm/mach-kirkwood/Makefile
++++ b/arch/arm/mach-kirkwood/Makefile
+@@ -20,3 +20,4 @@ obj-$(CONFIG_MACH_NET5BIG_V2)		+= netxbig_v2-setup.o lacie_v2-common.o
+ obj-$(CONFIG_MACH_T5325)		+= t5325-setup.o
+ 
+ obj-$(CONFIG_CPU_IDLE)			+= cpuidle.o
++obj-$(CONFIG_ARCH_KIRKWOOD_DT)		+= board-dt.o
+diff --git a/arch/arm/mach-kirkwood/Makefile.boot b/arch/arm/mach-kirkwood/Makefile.boot
+index 760a0ef..16f9385 100644
+--- a/arch/arm/mach-kirkwood/Makefile.boot
++++ b/arch/arm/mach-kirkwood/Makefile.boot
+@@ -1,3 +1,5 @@
+    zreladdr-y	+= 0x00008000
+ params_phys-y	:= 0x00000100
+ initrd_phys-y	:= 0x00800000
++
++dtb-$(CONFIG_MACH_DREAMPLUG_DT) += kirkwood-dreamplug.dtb
+diff --git a/arch/arm/mach-kirkwood/board-dt.c b/arch/arm/mach-kirkwood/board-dt.c
+new file mode 100644
+index 0000000..76392af
+--- /dev/null
++++ b/arch/arm/mach-kirkwood/board-dt.c
+@@ -0,0 +1,181 @@
++/*
++ * Copyright 2012 (C), Jason Cooper <jason at lakedaemon.net>
++ *
++ * arch/arm/mach-kirkwood/board-dt.c
++ *
++ * Marvell DreamPlug Reference Board Setup
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2.  This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
++ */
++
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/platform_device.h>
++#include <linux/mtd/partitions.h>
++#include <linux/ata_platform.h>
++#include <linux/mv643xx_eth.h>
++#include <linux/of.h>
++#include <linux/of_address.h>
++#include <linux/of_fdt.h>
++#include <linux/of_irq.h>
++#include <linux/of_platform.h>
++#include <linux/gpio.h>
++#include <linux/leds.h>
++#include <linux/mtd/physmap.h>
++#include <linux/spi/flash.h>
++#include <linux/spi/spi.h>
++#include <linux/spi/orion_spi.h>
++#include <asm/mach-types.h>
++#include <asm/mach/arch.h>
++#include <mach/kirkwood.h>
++#include <plat/mvsdio.h>
++#include "common.h"
++#include "mpp.h"
++
++static struct of_device_id kirkwood_dt_match_table[] __initdata = {
++	{ .compatible = "simple-bus", },
++	{ }
++};
++
++struct mtd_partition dreamplug_partitions[] = {
++	{
++		.name	= "u-boot",
++		.size	= SZ_512K,
++		.offset = 0,
++	},
++	{
++		.name	= "u-boot env",
++		.size	= SZ_64K,
++		.offset = SZ_512K + SZ_512K,
++	},
++	{
++		.name	= "dtb",
++		.size	= SZ_64K,
++		.offset = SZ_512K + SZ_512K + SZ_512K,
++	},
++};
++
++static const struct flash_platform_data dreamplug_spi_slave_data = {
++	.type		= "mx25l1606e",
++	.name		= "spi_flash",
++	.parts		= dreamplug_partitions,
++	.nr_parts	= ARRAY_SIZE(dreamplug_partitions),
++};
++
++static struct spi_board_info __initdata dreamplug_spi_slave_info[] = {
++	{
++		.modalias	= "m25p80",
++		.platform_data	= &dreamplug_spi_slave_data,
++		.irq		= -1,
++		.max_speed_hz	= 50000000,
++		.bus_num	= 0,
++		.chip_select	= 0,
++	},
++};
++
++static struct mv643xx_eth_platform_data dreamplug_ge00_data = {
++	.phy_addr	= MV643XX_ETH_PHY_ADDR(0),
++};
++
++static struct mv643xx_eth_platform_data dreamplug_ge01_data = {
++	.phy_addr	= MV643XX_ETH_PHY_ADDR(1),
++};
++
++static struct mv_sata_platform_data dreamplug_sata_data = {
++	.n_ports	= 1,
++};
++
++static struct mvsdio_platform_data dreamplug_mvsdio_data = {
++	/* unfortunately the CD signal has not been connected */
++};
++
++static struct gpio_led dreamplug_led_pins[] = {
++	{
++		.name			= "dreamplug:blue:bluetooth",
++		.gpio			= 47,
++		.active_low		= 1,
++	},
++	{
++		.name			= "dreamplug:green:wifi",
++		.gpio			= 48,
++		.active_low		= 1,
++	},
++	{
++		.name			= "dreamplug:green:wifi_ap",
++		.gpio			= 49,
++		.active_low		= 1,
++	},
++};
++
++static struct gpio_led_platform_data dreamplug_led_data = {
++	.leds		= dreamplug_led_pins,
++	.num_leds	= ARRAY_SIZE(dreamplug_led_pins),
++};
++
++static struct platform_device dreamplug_leds = {
++	.name	= "leds-gpio",
++	.id	= -1,
++	.dev	= {
++		.platform_data	= &dreamplug_led_data,
++	}
++};
++
++static unsigned int dreamplug_mpp_config[] __initdata = {
++	MPP0_SPI_SCn,
++	MPP1_SPI_MOSI,
++	MPP2_SPI_SCK,
++	MPP3_SPI_MISO,
++	MPP47_GPIO,	/* Bluetooth LED */
++	MPP48_GPIO,	/* Wifi LED */
++	MPP49_GPIO,	/* Wifi AP LED */
++	0
++};
++
++static void __init dreamplug_init(void)
++{
++	/*
++	 * Basic setup. Needs to be called early.
++	 */
++	kirkwood_mpp_conf(dreamplug_mpp_config);
++
++	kirkwood_uart0_init();
++
++	spi_register_board_info(dreamplug_spi_slave_info,
++				ARRAY_SIZE(dreamplug_spi_slave_info));
++	kirkwood_spi_init();
++
++	kirkwood_ehci_init();
++	kirkwood_ge00_init(&dreamplug_ge00_data);
++	kirkwood_ge01_init(&dreamplug_ge01_data);
++	kirkwood_sata_init(&dreamplug_sata_data);
++	kirkwood_sdio_init(&dreamplug_mvsdio_data);
++
++	platform_device_register(&dreamplug_leds);
++}
++
++static void __init kirkwood_dt_init(void)
++{
++	kirkwood_init();
++
++	if (of_machine_is_compatible("globalscale,dreamplug"))
++		dreamplug_init();
++
++	of_platform_populate(NULL, kirkwood_dt_match_table, NULL, NULL);
++}
++
++static const char *kirkwood_dt_board_compat[] = {
++	"globalscale,dreamplug",
++	NULL
++};
++
++DT_MACHINE_START(KIRKWOOD_DT, "Marvell Kirkwood (Flattened Device Tree)")
++	/* Maintainer: Jason Cooper <jason at lakedaemon.net> */
++	.map_io		= kirkwood_map_io,
++	.init_early	= kirkwood_init_early,
++	.init_irq	= kirkwood_init_irq,
++	.timer		= &kirkwood_timer,
++	.init_machine	= kirkwood_dt_init,
++	.dt_compat	= kirkwood_dt_board_compat,
++MACHINE_END

Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-add-iconnect-support.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-add-iconnect-support.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-add-iconnect-support.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-add-iconnect-support.patch)
@@ -0,0 +1,285 @@
+commit c06cd9bfcad4960023bac1f052da748824e24961
+Author: Arnaud Patard (Rtp) <arnaud.patard at rtp-net.org>
+Date:   Wed Apr 18 23:16:41 2012 +0200
+
+    kirkwood: Add iconnect support
+    
+    Add support for Iomega Iconnect system.
+    
+    Signed-off-by: Arnaud Patard <arnaud.patard at rtp-net.org>
+    Tested-By: Adam Baker <linux at baker-net.org.uk>
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+
+Index: sid/arch/arm/boot/dts/kirkwood-iconnect.dts
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ sid/arch/arm/boot/dts/kirkwood-iconnect.dts	2012-06-10 01:24:28.300087489 +0200
+@@ -0,0 +1,26 @@
++/dts-v1/;
++
++/include/ "kirkwood.dtsi"
++
++/ {
++	model = "Iomega Iconnect";
++	compatible = "iom,iconnect-1.1", "iom,iconnect", "mrvl,kirkwood-88f6281", "mrvl,kirkwood";
++
++	memory {
++		device_type = "memory";
++		reg = <0x00000000 0x10000000>;
++	};
++
++	chosen {
++		bootargs = "console=ttyS0,115200n8 earlyprintk mtdparts=orion_nand:0xc0000 at 0x0(uboot),0x20000 at 0xa0000(env),0x300000 at 0x100000(zImage),0x300000 at 0x540000(initrd),0x1f400000 at 0x980000(boot)";
++		linux,initrd-start = <0x4500040>;
++		linux,initrd-end   = <0x4800000>;
++	};
++
++	ocp at f1000000 {
++		serial at 12000 {
++			clock-frequency = <200000000>;
++			status = "ok";
++		};
++	};
++};
+Index: sid/arch/arm/mach-kirkwood/Kconfig
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/Kconfig	2012-06-10 01:13:01.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/Kconfig	2012-06-10 01:24:28.300087489 +0200
+@@ -58,6 +58,12 @@ config MACH_DREAMPLUG_DT
+ 	  Say 'Y' here if you want your kernel to support the
+ 	  Marvell DreamPlug (Flattened Device Tree).
+ 
++config MACH_ICONNECT_DT
++	bool "Iomega Iconnect (Flattened Device Tree)"
++	select ARCH_KIRKWOOD_DT
++	help
++	  Say 'Y' here to enable Iomega Iconnect support.
++
+ config MACH_TS219
+ 	bool "QNAP TS-110, TS-119, TS-119P+, TS-210, TS-219, TS-219P and TS-219P+ Turbo NAS"
+ 	help
+Index: sid/arch/arm/mach-kirkwood/Makefile
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/Makefile	2012-06-10 01:13:58.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/Makefile	2012-06-10 01:24:28.300087489 +0200
+@@ -22,3 +22,4 @@ obj-$(CONFIG_MACH_T5325)		+= t5325-setup
+ obj-$(CONFIG_CPU_IDLE)			+= cpuidle.o
+ obj-$(CONFIG_ARCH_KIRKWOOD_DT)		+= board-dt.o
+ obj-$(CONFIG_MACH_DREAMPLUG_DT)		+= board-dreamplug.o
++obj-$(CONFIG_MACH_ICONNECT_DT)		+= board-iconnect.o
+Index: sid/arch/arm/mach-kirkwood/Makefile.boot
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/Makefile.boot	2012-06-10 01:13:01.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/Makefile.boot	2012-06-10 01:24:28.300087489 +0200
+@@ -3,3 +3,4 @@ params_phys-y	:= 0x00000100
+ initrd_phys-y	:= 0x00800000
+ 
+ dtb-$(CONFIG_MACH_DREAMPLUG_DT) += kirkwood-dreamplug.dtb
++dtb-$(CONFIG_MACH_ICONNECT_DT) += kirkwood-iconnect.dtb
+Index: sid/arch/arm/mach-kirkwood/board-dt.c
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/board-dt.c	2012-06-10 01:14:30.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/board-dt.c	2012-06-10 01:24:28.300087489 +0200
+@@ -56,11 +56,15 @@ static void __init kirkwood_dt_init(void
+ 	if (of_machine_is_compatible("globalscale,dreamplug"))
+ 		dreamplug_init();
+ 
++	if (of_machine_is_compatible("iom,iconnect"))
++		iconnect_init();
++
+ 	of_platform_populate(NULL, kirkwood_dt_match_table, NULL, NULL);
+ }
+ 
+ static const char *kirkwood_dt_board_compat[] = {
+ 	"globalscale,dreamplug",
++	"iom,iconnect",
+ 	NULL
+ };
+ 
+Index: sid/arch/arm/mach-kirkwood/board-iconnect.c
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ sid/arch/arm/mach-kirkwood/board-iconnect.c	2012-06-10 01:24:28.300087489 +0200
+@@ -0,0 +1,165 @@
++/*
++ * arch/arm/mach-kirkwood/board-iconnect.c
++ *
++ * Iomega i-connect Board Setup
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2.  This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
++ */
++
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/platform_device.h>
++#include <linux/of.h>
++#include <linux/of_address.h>
++#include <linux/of_fdt.h>
++#include <linux/of_irq.h>
++#include <linux/of_platform.h>
++#include <linux/mtd/partitions.h>
++#include <linux/mv643xx_eth.h>
++#include <linux/gpio.h>
++#include <linux/leds.h>
++#include <linux/spi/flash.h>
++#include <linux/spi/spi.h>
++#include <linux/spi/orion_spi.h>
++#include <linux/i2c.h>
++#include <linux/input.h>
++#include <linux/gpio_keys.h>
++#include <asm/mach/arch.h>
++#include <mach/kirkwood.h>
++#include "common.h"
++#include "mpp.h"
++
++static struct mv643xx_eth_platform_data iconnect_ge00_data = {
++	.phy_addr	= MV643XX_ETH_PHY_ADDR(11),
++};
++
++static struct gpio_led iconnect_led_pins[] = {
++	{
++		.name		= "led_level",
++		.gpio		= 41,
++		.default_trigger = "default-on",
++	}, {
++		.name		= "power:blue",
++		.gpio		= 42,
++		.default_trigger = "timer",
++	}, {
++		.name		= "power:red",
++		.gpio		= 43,
++	}, {
++		.name		= "usb1:blue",
++		.gpio		= 44,
++	}, {
++		.name		= "usb2:blue",
++		.gpio		= 45,
++	}, {
++		.name		= "usb3:blue",
++		.gpio		= 46,
++	}, {
++		.name		= "usb4:blue",
++		.gpio		= 47,
++	}, {
++		.name		= "otb:blue",
++		.gpio		= 48,
++	},
++};
++
++static struct gpio_led_platform_data iconnect_led_data = {
++	.leds		= iconnect_led_pins,
++	.num_leds	= ARRAY_SIZE(iconnect_led_pins),
++	.gpio_blink_set	= orion_gpio_led_blink_set,
++};
++
++static struct platform_device iconnect_leds = {
++	.name	= "leds-gpio",
++	.id	= -1,
++	.dev	= {
++		.platform_data	= &iconnect_led_data,
++	}
++};
++
++static unsigned int iconnect_mpp_config[] __initdata = {
++	MPP12_GPIO,
++	MPP35_GPIO,
++	MPP41_GPIO,
++	MPP42_GPIO,
++	MPP43_GPIO,
++	MPP44_GPIO,
++	MPP45_GPIO,
++	MPP46_GPIO,
++	MPP47_GPIO,
++	MPP48_GPIO,
++	0
++};
++
++static struct i2c_board_info __initdata iconnect_board_info[] = {
++	{
++		I2C_BOARD_INFO("lm63", 0x4c),
++	},
++};
++
++static struct mtd_partition iconnect_nand_parts[] = {
++	{
++		.name = "flash",
++		.offset = 0,
++		.size = MTDPART_SIZ_FULL,
++	},
++};
++
++/* yikes... these are the original input buttons */
++/* but I'm not convinced by the sw event choices  */
++static struct gpio_keys_button iconnect_buttons[] = {
++	{
++		.type		= EV_SW,
++		.code		= SW_LID,
++		.gpio		= 12,
++		.desc		= "Reset Button",
++		.active_low	= 1,
++		.debounce_interval = 100,
++	}, {
++		.type		= EV_SW,
++		.code		= SW_TABLET_MODE,
++		.gpio		= 35,
++		.desc		= "OTB Button",
++		.active_low	= 1,
++		.debounce_interval = 100,
++	},
++};
++
++static struct gpio_keys_platform_data iconnect_button_data = {
++	.buttons	= iconnect_buttons,
++	.nbuttons	= ARRAY_SIZE(iconnect_buttons),
++};
++
++static struct platform_device iconnect_button_device = {
++	.name		= "gpio-keys",
++	.id		= -1,
++	.num_resources	= 0,
++	.dev        = {
++		.platform_data  = &iconnect_button_data,
++	},
++};
++
++void __init iconnect_init(void)
++{
++	kirkwood_mpp_conf(iconnect_mpp_config);
++	kirkwood_nand_init(ARRAY_AND_SIZE(iconnect_nand_parts), 25);
++	kirkwood_i2c_init();
++	i2c_register_board_info(0, iconnect_board_info,
++		ARRAY_SIZE(iconnect_board_info));
++
++	kirkwood_ehci_init();
++	kirkwood_ge00_init(&iconnect_ge00_data);
++
++	platform_device_register(&iconnect_button_device);
++	platform_device_register(&iconnect_leds);
++}
++
++static int __init iconnect_pci_init(void)
++{
++	if (of_machine_is_compatible("iom,iconnect"))
++		kirkwood_pcie_init(KW_PCIE0);
++	return 0;
++}
++subsys_initcall(iconnect_pci_init);
+Index: sid/arch/arm/mach-kirkwood/common.h
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/common.h	2012-06-10 01:14:15.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/common.h	2012-06-10 01:24:28.300087489 +0200
+@@ -58,6 +58,12 @@ void dreamplug_init(void);
+ static inline void dreamplug_init(void) {};
+ #endif
+ 
++#ifdef CONFIG_MACH_ICONNECT_DT
++void iconnect_init(void);
++#else
++static inline void iconnect_init(void) {};
++#endif
++
+ /* early init functions not converted to fdt yet */
+ char *kirkwood_id(void);
+ void kirkwood_l2_init(void);

Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-create-a-generic-function-for-gpio-led-blinking.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-create-a-generic-function-for-gpio-led-blinking.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-create-a-generic-function-for-gpio-led-blinking.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-create-a-generic-function-for-gpio-led-blinking.patch)
@@ -0,0 +1,116 @@
+commit ff3e660b5a881b401b2b6735aa5334f433237dcb
+Author: Arnaud Patard (Rtp) <arnaud.patard at rtp-net.org>
+Date:   Wed Apr 18 23:16:40 2012 +0200
+
+    orion/kirkwood: create a generic function for gpio led blinking
+    
+    dns323 and (at least) iconnect platforms are using hw led blinking, so,
+    instead of having 2 identicals .gpio_blink_set gpio-led hooks, move
+    dns323 code into gpio.c
+    
+    Signed-off-by: Arnaud Patard <arnaud.patard at rtp-net.org>
+    Tested-By: Adam Baker <linux at baker-net.org.uk>
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+
+diff --git a/arch/arm/mach-orion5x/dns323-setup.c b/arch/arm/mach-orion5x/dns323-setup.c
+index c3ed15b..13d2bec 100644
+--- a/arch/arm/mach-orion5x/dns323-setup.c
++++ b/arch/arm/mach-orion5x/dns323-setup.c
+@@ -253,27 +253,6 @@ error_fail:
+  * GPIO LEDs (simple - doesn't use hardware blinking support)
+  */
+ 
+-#define ORION_BLINK_HALF_PERIOD 100 /* ms */
+-
+-static int dns323_gpio_blink_set(unsigned gpio, int state,
+-	unsigned long *delay_on, unsigned long *delay_off)
+-{
+-
+-	if (delay_on && delay_off && !*delay_on && !*delay_off)
+-		*delay_on = *delay_off = ORION_BLINK_HALF_PERIOD;
+-
+-	switch(state) {
+-	case GPIO_LED_NO_BLINK_LOW:
+-	case GPIO_LED_NO_BLINK_HIGH:
+-		orion_gpio_set_blink(gpio, 0);
+-		gpio_set_value(gpio, state);
+-		break;
+-	case GPIO_LED_BLINK:
+-		orion_gpio_set_blink(gpio, 1);
+-	}
+-	return 0;
+-}
+-
+ static struct gpio_led dns323ab_leds[] = {
+ 	{
+ 		.name = "power:blue",
+@@ -312,13 +291,13 @@ static struct gpio_led dns323c_leds[] = {
+ static struct gpio_led_platform_data dns323ab_led_data = {
+ 	.num_leds	= ARRAY_SIZE(dns323ab_leds),
+ 	.leds		= dns323ab_leds,
+-	.gpio_blink_set = dns323_gpio_blink_set,
++	.gpio_blink_set = orion_gpio_led_blink_set,
+ };
+ 
+ static struct gpio_led_platform_data dns323c_led_data = {
+ 	.num_leds	= ARRAY_SIZE(dns323c_leds),
+ 	.leds		= dns323c_leds,
+-	.gpio_blink_set = dns323_gpio_blink_set,
++	.gpio_blink_set = orion_gpio_led_blink_set,
+ };
+ 
+ static struct platform_device dns323_gpio_leds = {
+diff --git a/arch/arm/plat-orion/gpio.c b/arch/arm/plat-orion/gpio.c
+index d3401e7..af95af2 100644
+--- a/arch/arm/plat-orion/gpio.c
++++ b/arch/arm/plat-orion/gpio.c
+@@ -16,6 +16,7 @@
+ #include <linux/bitops.h>
+ #include <linux/io.h>
+ #include <linux/gpio.h>
++#include <linux/leds.h>
+ 
+ /*
+  * GPIO unit register offsets.
+@@ -295,6 +296,28 @@ void orion_gpio_set_blink(unsigned pin, int blink)
+ }
+ EXPORT_SYMBOL(orion_gpio_set_blink);
+ 
++#define ORION_BLINK_HALF_PERIOD 100 /* ms */
++
++int orion_gpio_led_blink_set(unsigned gpio, int state,
++	unsigned long *delay_on, unsigned long *delay_off)
++{
++
++	if (delay_on && delay_off && !*delay_on && !*delay_off)
++		*delay_on = *delay_off = ORION_BLINK_HALF_PERIOD;
++
++	switch (state) {
++	case GPIO_LED_NO_BLINK_LOW:
++	case GPIO_LED_NO_BLINK_HIGH:
++		orion_gpio_set_blink(gpio, 0);
++		gpio_set_value(gpio, state);
++		break;
++	case GPIO_LED_BLINK:
++		orion_gpio_set_blink(gpio, 1);
++	}
++	return 0;
++}
++EXPORT_SYMBOL_GPL(orion_gpio_led_blink_set);
++
+ 
+ /*****************************************************************************
+  * Orion GPIO IRQ
+diff --git a/arch/arm/plat-orion/include/plat/gpio.h b/arch/arm/plat-orion/include/plat/gpio.h
+index 3abf304..bec0c98 100644
+--- a/arch/arm/plat-orion/include/plat/gpio.h
++++ b/arch/arm/plat-orion/include/plat/gpio.h
+@@ -19,6 +19,8 @@
+  */
+ void orion_gpio_set_unused(unsigned pin);
+ void orion_gpio_set_blink(unsigned pin, int blink);
++int orion_gpio_led_blink_set(unsigned gpio, int state,
++	unsigned long *delay_on, unsigned long *delay_off);
+ 
+ #define GPIO_INPUT_OK		(1 << 0)
+ #define GPIO_OUTPUT_OK		(1 << 1)

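The helper the patch above factors out has two jobs: fill in a default blink period when the caller passes zero delays, and switch the pin between hardware blinking and a steady level. A user-space sketch of that state machine, assuming stub recorders `set_blink()`/`set_value()` in place of the real `orion_gpio_set_blink()`/`gpio_set_value()`:

```c
#include <stddef.h>

#define ORION_BLINK_HALF_PERIOD 100 /* ms */

/* Mirror of the GPIO_LED_* states from <linux/leds.h>. */
enum { GPIO_LED_NO_BLINK_LOW = 0, GPIO_LED_NO_BLINK_HIGH = 1, GPIO_LED_BLINK = 2 };

/* Stand-ins for the register helpers; they just record the last call. */
static int last_blink = -1, last_value = -1;
static void set_blink(unsigned gpio, int blink) { (void)gpio; last_blink = blink; }
static void set_value(unsigned gpio, int value) { (void)gpio; last_value = value; }

int led_blink_set(unsigned gpio, int state,
                  unsigned long *delay_on, unsigned long *delay_off)
{
    /* Caller gave no period: advertise the hardware's fixed half-period. */
    if (delay_on && delay_off && !*delay_on && !*delay_off)
        *delay_on = *delay_off = ORION_BLINK_HALF_PERIOD;

    switch (state) {
    case GPIO_LED_NO_BLINK_LOW:
    case GPIO_LED_NO_BLINK_HIGH:
        set_blink(gpio, 0);     /* stop hardware blinking first ...      */
        set_value(gpio, state); /* ... then drive a steady 0 or 1 level  */
        break;
    case GPIO_LED_BLINK:
        set_blink(gpio, 1);     /* let the GPIO block toggle the pin     */
    }
    return 0;
}
```

Writing back the delays matters: leds-gpio reports them to the LED trigger core, so userspace sees the real 100 ms period instead of the zero it requested.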
Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-absorb-kirkwood_init.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-absorb-kirkwood_init.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-absorb-kirkwood_init.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-absorb-kirkwood_init.patch)
@@ -0,0 +1,141 @@
+commit 2b45e05f51a79c2818523c923dfe008b8b2f4227
+Author: Jason Cooper <jason at lakedaemon.net>
+Date:   Wed Feb 29 17:39:08 2012 +0000
+
+    ARM: kirkwood: fdt: absorb kirkwood_init()
+    
+    We need to absorb kirkwood_init() into kirkwood_dt_init() so that as we
+    convert drivers, we can remove the platform call, eg
+    kirkwood_rtc_init().  This maintains compatibility with non-fdt
+    configurations because they still call kirkwood_init() in common.c.
+    
+    As drivers are converted, we will reinstate the 'static' qualifier in
+    common.c.
+    
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+
+Index: sid/arch/arm/mach-kirkwood/board-dt.c
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/board-dt.c	2012-06-10 20:02:17.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/board-dt.c	2012-06-10 20:02:22.677136456 +0200
+@@ -29,7 +29,9 @@
+ #include <linux/spi/orion_spi.h>
+ #include <asm/mach-types.h>
+ #include <asm/mach/arch.h>
++#include <asm/mach/map.h>
+ #include <mach/kirkwood.h>
++#include <mach/bridge-regs.h>
+ #include <plat/mvsdio.h>
+ #include "common.h"
+ #include "mpp.h"
+@@ -155,7 +157,32 @@ static void __init dreamplug_init(void)
+ 
+ static void __init kirkwood_dt_init(void)
+ {
+-	kirkwood_init();
++	pr_info("Kirkwood: %s, TCLK=%d.\n", kirkwood_id(), kirkwood_tclk);
++
++	/*
++	 * Disable propagation of mbus errors to the CPU local bus,
++	 * as this causes mbus errors (which can occur for example
++	 * for PCI aborts) to throw CPU aborts, which we're not set
++	 * up to deal with.
++	 */
++	writel(readl(CPU_CONFIG) & ~CPU_CONFIG_ERROR_PROP, CPU_CONFIG);
++
++	kirkwood_setup_cpu_mbus();
++
++#ifdef CONFIG_CACHE_FEROCEON_L2
++	kirkwood_l2_init();
++#endif
++
++	/* internal devices that every board has */
++	kirkwood_rtc_init();
++	kirkwood_wdt_init();
++	kirkwood_xor0_init();
++	kirkwood_xor1_init();
++	kirkwood_crypto_init();
++
++#ifdef CONFIG_KEXEC
++	kexec_reinit = kirkwood_enable_pcie;
++#endif
+ 
+ 	if (of_machine_is_compatible("globalscale,dreamplug"))
+ 		dreamplug_init();
+Index: sid/arch/arm/mach-kirkwood/common.c
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/common.c	2012-06-10 19:52:45.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/common.c	2012-06-10 20:02:22.677136456 +0200
+@@ -164,7 +164,7 @@ void __init kirkwood_nand_init_rnb(struc
+ /*****************************************************************************
+  * SoC RTC
+  ****************************************************************************/
+-static void __init kirkwood_rtc_init(void)
++void __init kirkwood_rtc_init(void)
+ {
+ 	orion_rtc_init(RTC_PHYS_BASE, IRQ_KIRKWOOD_RTC);
+ }
+@@ -282,7 +282,7 @@ void __init kirkwood_crypto_init(void)
+ /*****************************************************************************
+  * XOR0
+  ****************************************************************************/
+-static void __init kirkwood_xor0_init(void)
++void __init kirkwood_xor0_init(void)
+ {
+ 	kirkwood_clk_ctrl |= CGC_XOR0;
+ 
+@@ -295,7 +295,7 @@ static void __init kirkwood_xor0_init(vo
+ /*****************************************************************************
+  * XOR1
+  ****************************************************************************/
+-static void __init kirkwood_xor1_init(void)
++void __init kirkwood_xor1_init(void)
+ {
+ 	kirkwood_clk_ctrl |= CGC_XOR1;
+ 
+@@ -307,7 +307,7 @@ static void __init kirkwood_xor1_init(vo
+ /*****************************************************************************
+  * Watchdog
+  ****************************************************************************/
+-static void __init kirkwood_wdt_init(void)
++void __init kirkwood_wdt_init(void)
+ {
+ 	orion_wdt_init(kirkwood_tclk);
+ }
+@@ -397,7 +397,7 @@ void __init kirkwood_audio_init(void)
+ /*
+  * Identify device ID and revision.
+  */
+-static char * __init kirkwood_id(void)
++char * __init kirkwood_id(void)
+ {
+ 	u32 dev, rev;
+ 
+@@ -440,7 +440,7 @@ static char * __init kirkwood_id(void)
+ 	}
+ }
+ 
+-static void __init kirkwood_l2_init(void)
++void __init kirkwood_l2_init(void)
+ {
+ #ifdef CONFIG_CACHE_FEROCEON_L2_WRITETHROUGH
+ 	writel(readl(L2_CONFIG_REG) | L2_WRITETHROUGH, L2_CONFIG_REG);
+Index: sid/arch/arm/mach-kirkwood/common.h
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/common.h	2012-06-10 19:52:45.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/common.h	2012-06-10 20:02:22.677136456 +0200
+@@ -51,6 +51,14 @@ void kirkwood_nand_init(struct mtd_parti
+ void kirkwood_nand_init_rnb(struct mtd_partition *parts, int nr_parts, int (*dev_ready)(struct mtd_info *));
+ void kirkwood_audio_init(void);
+ 
++char *kirkwood_id(void);
++void kirkwood_l2_init(void);
++void kirkwood_rtc_init(void);
++void kirkwood_wdt_init(void);
++void kirkwood_xor0_init(void);
++void kirkwood_xor1_init(void);
++void kirkwood_crypto_init(void);
++
+ extern int kirkwood_tclk;
+ extern struct sys_timer kirkwood_timer;
+ 

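The `kirkwood_dt_init()` body absorbed above disables mbus error propagation with the classic MMIO read-modify-write: `writel(readl(CPU_CONFIG) & ~CPU_CONFIG_ERROR_PROP, CPU_CONFIG)`. A sketch of the same clear-one-bit operation over a plain variable standing in for the register (the bit position chosen here is illustrative, not the real Kirkwood layout):

```c
#include <stdint.h>

#define CPU_CONFIG_ERROR_PROP (1u << 4)  /* illustrative bit position */

/* Stand-ins for readl()/writel() over a fake register cell. */
static uint32_t readl_sim(volatile uint32_t *reg) { return *reg; }
static void writel_sim(uint32_t v, volatile uint32_t *reg) { *reg = v; }

/* Clear one bit without disturbing the rest of the register. */
void disable_error_prop(volatile uint32_t *reg)
{
    writel_sim(readl_sim(reg) & ~CPU_CONFIG_ERROR_PROP, reg);
}
```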
Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-convert-uart0-to-devicetree.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-convert-uart0-to-devicetree.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-convert-uart0-to-devicetree.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-convert-uart0-to-devicetree.patch)
@@ -0,0 +1,43 @@
+commit 759a45185ac0e4dfaf8bbfcb390ec73aca4b7a34
+Author: Jason Cooper <jason at lakedaemon.net>
+Date:   Mon Feb 27 16:07:14 2012 +0000
+
+    ARM: kirkwood: convert uart0 to devicetree.
+    
+    This uart is the primary console for the dreamplug.  Removed
+    kirkwood_uart0_init() call from board-dt.c.
+    
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+    Reviewed-by: Arnd Bergmann <arnd at arndb.de>
+    Acked-by: Nicolas Pitre <nico at linaro.org>
+    Signed-off-by: Arnd Bergmann <arnd at arndb.de>
+
+diff --git a/arch/arm/boot/dts/kirkwood-dreamplug.dts b/arch/arm/boot/dts/kirkwood-dreamplug.dts
+index 0424d99..8a5dff8 100644
+--- a/arch/arm/boot/dts/kirkwood-dreamplug.dts
++++ b/arch/arm/boot/dts/kirkwood-dreamplug.dts
+@@ -15,4 +15,11 @@
+ 		bootargs = "console=ttyS0,115200n8 earlyprintk";
+ 	};
+ 
++	serial at f1012000 {
++		compatible = "ns16550a";
++		reg = <0xf1012000 0xff>;
++		reg-shift = <2>;
++		interrupts = <33>;
++		clock-frequency = <200000000>;
++	};
+ };
+diff --git a/arch/arm/mach-kirkwood/board-dt.c b/arch/arm/mach-kirkwood/board-dt.c
+index 76392af..fbe6405 100644
+--- a/arch/arm/mach-kirkwood/board-dt.c
++++ b/arch/arm/mach-kirkwood/board-dt.c
+@@ -140,8 +140,6 @@ static void __init dreamplug_init(void)
+ 	 */
+ 	kirkwood_mpp_conf(dreamplug_mpp_config);
+ 
+-	kirkwood_uart0_init();
+-
+ 	spi_register_board_info(dreamplug_spi_slave_info,
+ 				ARRAY_SIZE(dreamplug_spi_slave_info));
+ 	kirkwood_spi_init();

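The `serial@f1012000` node above sets `reg-shift = <2>` because the 8250-style registers on this SoC sit on 32-bit strides rather than byte strides. A sketch of how a driver turns a logical register number into a bus address under that binding:

```c
#include <stdint.h>

/* With reg-shift = S, 8250 register N lives at base + (N << S). */
static uintptr_t ns16550_reg_addr(uintptr_t base, unsigned reg, unsigned reg_shift)
{
    return base + ((uintptr_t)reg << reg_shift);
}
```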
Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-define-uart01-as-disabled.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-define-uart01-as-disabled.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-define-uart01-as-disabled.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-define-uart01-as-disabled.patch)
@@ -0,0 +1,71 @@
+commit 163f2cea673a4ae831ad2cd26d8f01977c3add93
+Author: Jason Cooper <jason at lakedaemon.net>
+Date:   Thu Mar 15 01:00:27 2012 +0000
+
+    ARM: kirkwood: fdt: define uart[01] as disabled, enable uart0
+    
+    Define both uarts in kirkwood.dtsi as they are common to all kirkwood
+    SoCs.  Each board may enable all or none of them, so they are disabled
+    by default.  uart0 is enabled for the dreamplug.
+    
+    tclk can vary for each board, so we leave it undefined in the kirkwood
+    dtsi.  Each board can then set it as appropriate when enabling the uart.
+    
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+
+diff --git a/arch/arm/boot/dts/kirkwood-dreamplug.dts b/arch/arm/boot/dts/kirkwood-dreamplug.dts
+index 333f11b..a5376b8 100644
+--- a/arch/arm/boot/dts/kirkwood-dreamplug.dts
++++ b/arch/arm/boot/dts/kirkwood-dreamplug.dts
+@@ -15,11 +15,10 @@
+ 		bootargs = "console=ttyS0,115200n8 earlyprintk";
+ 	};
+ 
+-	serial at f1012000 {
+-		compatible = "ns16550a";
+-		reg = <0xf1012000 0x100>;
+-		reg-shift = <2>;
+-		interrupts = <33>;
+-		clock-frequency = <200000000>;
++	ocp at f1000000 {
++		serial at 12000 {
++			clock-frequency = <200000000>;
++			status = "ok";
++		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/kirkwood.dtsi b/arch/arm/boot/dts/kirkwood.dtsi
+index 702b955..825310b 100644
+--- a/arch/arm/boot/dts/kirkwood.dtsi
++++ b/arch/arm/boot/dts/kirkwood.dtsi
+@@ -2,5 +2,29 @@
+ 
+ / {
+ 	compatible = "mrvl,kirkwood";
+-};
+ 
++	ocp at f1000000 {
++		compatible = "simple-bus";
++		ranges = <0 0xf1000000 0x1000000>;
++		#address-cells = <1>;
++		#size-cells = <1>;
++
++		serial at 12000 {
++			compatible = "ns16550a";
++			reg = <0x12000 0x100>;
++			reg-shift = <2>;
++			interrupts = <33>;
++			/* set clock-frequency in board dts */
++			status = "disabled";
++		};
++
++		serial at 12100 {
++			compatible = "ns16550a";
++			reg = <0x12100 0x100>;
++			reg-shift = <2>;
++			interrupts = <34>;
++			/* set clock-frequency in board dts */
++			status = "disabled";
++		};
++	};
++};

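The `ocp@f1000000` node introduced above declares `ranges = <0 0xf1000000 0x1000000>`, which is how a child address like `serial@12000` resolves to the CPU-visible 0xf1012000. A sketch of single-entry `ranges` translation (a simplification — real device trees allow multiple entries and multi-cell addresses):

```c
#include <stdint.h>

struct dt_range { uint32_t child_base, parent_base, size; };

/* Translate a child bus address through one ranges entry.
 * Returns the parent address, or 0 as an "out of range" marker
 * (safe here because the parent window never starts at 0). */
static uint32_t ranges_translate(const struct dt_range *r, uint32_t child_addr)
{
    if (child_addr < r->child_base || child_addr - r->child_base >= r->size)
        return 0;
    return r->parent_base + (child_addr - r->child_base);
}
```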
Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-facilitate-new-boards-during-fdt-migration.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-facilitate-new-boards-during-fdt-migration.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-facilitate-new-boards-during-fdt-migration.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-facilitate-new-boards-during-fdt-migration.patch)
@@ -0,0 +1,360 @@
+commit 6fa6b8781fbd5e6cd5e313c5e3bdd73b426d8f30
+Author: Jason Cooper <jason at lakedaemon.net>
+Date:   Thu Mar 15 00:52:31 2012 +0000
+
+    ARM: kirkwood: fdt: facilitate new boards during fdt migration
+    
+    Move all dreamplug-specific code out of board-dt.c and into
+    board-dreamplug.c.  This way new boards that are added during the
+    conversion to fdt don't clutter up board-dt.c.
+    
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+
+Index: sid/arch/arm/mach-kirkwood/Makefile
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/Makefile	2012-06-10 20:02:17.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/Makefile	2012-06-10 20:02:33.717135970 +0200
+@@ -21,3 +21,4 @@ obj-$(CONFIG_MACH_T5325)		+= t5325-setup
+ 
+ obj-$(CONFIG_CPU_IDLE)			+= cpuidle.o
+ obj-$(CONFIG_ARCH_KIRKWOOD_DT)		+= board-dt.o
++obj-$(CONFIG_MACH_DREAMPLUG_DT)		+= board-dreamplug.o
+Index: sid/arch/arm/mach-kirkwood/board-dreamplug.c
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ sid/arch/arm/mach-kirkwood/board-dreamplug.c	2012-06-10 20:02:33.717135970 +0200
+@@ -0,0 +1,152 @@
++/*
++ * Copyright 2012 (C), Jason Cooper <jason at lakedaemon.net>
++ *
++ * arch/arm/mach-kirkwood/board-dreamplug.c
++ *
++ * Marvell DreamPlug Reference Board Init for drivers not converted to
++ * flattened device tree yet.
++ *
++ * This file is licensed under the terms of the GNU General Public
++ * License version 2.  This program is licensed "as is" without any
++ * warranty of any kind, whether express or implied.
++ */
++
++#include <linux/kernel.h>
++#include <linux/init.h>
++#include <linux/platform_device.h>
++#include <linux/mtd/partitions.h>
++#include <linux/ata_platform.h>
++#include <linux/mv643xx_eth.h>
++#include <linux/of.h>
++#include <linux/of_address.h>
++#include <linux/of_fdt.h>
++#include <linux/of_irq.h>
++#include <linux/of_platform.h>
++#include <linux/gpio.h>
++#include <linux/leds.h>
++#include <linux/mtd/physmap.h>
++#include <linux/spi/flash.h>
++#include <linux/spi/spi.h>
++#include <linux/spi/orion_spi.h>
++#include <asm/mach-types.h>
++#include <asm/mach/arch.h>
++#include <asm/mach/map.h>
++#include <mach/kirkwood.h>
++#include <mach/bridge-regs.h>
++#include <plat/mvsdio.h>
++#include "common.h"
++#include "mpp.h"
++
++struct mtd_partition dreamplug_partitions[] = {
++	{
++		.name	= "u-boot",
++		.size	= SZ_512K,
++		.offset = 0,
++	},
++	{
++		.name	= "u-boot env",
++		.size	= SZ_64K,
++		.offset = SZ_512K + SZ_512K,
++	},
++	{
++		.name	= "dtb",
++		.size	= SZ_64K,
++		.offset = SZ_512K + SZ_512K + SZ_512K,
++	},
++};
++
++static const struct flash_platform_data dreamplug_spi_slave_data = {
++	.type		= "mx25l1606e",
++	.name		= "spi_flash",
++	.parts		= dreamplug_partitions,
++	.nr_parts	= ARRAY_SIZE(dreamplug_partitions),
++};
++
++static struct spi_board_info __initdata dreamplug_spi_slave_info[] = {
++	{
++		.modalias	= "m25p80",
++		.platform_data	= &dreamplug_spi_slave_data,
++		.irq		= -1,
++		.max_speed_hz	= 50000000,
++		.bus_num	= 0,
++		.chip_select	= 0,
++	},
++};
++
++static struct mv643xx_eth_platform_data dreamplug_ge00_data = {
++	.phy_addr	= MV643XX_ETH_PHY_ADDR(0),
++};
++
++static struct mv643xx_eth_platform_data dreamplug_ge01_data = {
++	.phy_addr	= MV643XX_ETH_PHY_ADDR(1),
++};
++
++static struct mv_sata_platform_data dreamplug_sata_data = {
++	.n_ports	= 1,
++};
++
++static struct mvsdio_platform_data dreamplug_mvsdio_data = {
++	/* unfortunately the CD signal has not been connected */
++};
++
++static struct gpio_led dreamplug_led_pins[] = {
++	{
++		.name			= "dreamplug:blue:bluetooth",
++		.gpio			= 47,
++		.active_low		= 1,
++	},
++	{
++		.name			= "dreamplug:green:wifi",
++		.gpio			= 48,
++		.active_low		= 1,
++	},
++	{
++		.name			= "dreamplug:green:wifi_ap",
++		.gpio			= 49,
++		.active_low		= 1,
++	},
++};
++
++static struct gpio_led_platform_data dreamplug_led_data = {
++	.leds		= dreamplug_led_pins,
++	.num_leds	= ARRAY_SIZE(dreamplug_led_pins),
++};
++
++static struct platform_device dreamplug_leds = {
++	.name	= "leds-gpio",
++	.id	= -1,
++	.dev	= {
++		.platform_data	= &dreamplug_led_data,
++	}
++};
++
++static unsigned int dreamplug_mpp_config[] __initdata = {
++	MPP0_SPI_SCn,
++	MPP1_SPI_MOSI,
++	MPP2_SPI_SCK,
++	MPP3_SPI_MISO,
++	MPP47_GPIO,	/* Bluetooth LED */
++	MPP48_GPIO,	/* Wifi LED */
++	MPP49_GPIO,	/* Wifi AP LED */
++	0
++};
++
++void __init dreamplug_init(void)
++{
++	/*
++	 * Basic setup. Needs to be called early.
++	 */
++	kirkwood_mpp_conf(dreamplug_mpp_config);
++
++	spi_register_board_info(dreamplug_spi_slave_info,
++				ARRAY_SIZE(dreamplug_spi_slave_info));
++	kirkwood_spi_init();
++
++	kirkwood_ehci_init();
++	kirkwood_ge00_init(&dreamplug_ge00_data);
++	kirkwood_ge01_init(&dreamplug_ge01_data);
++	kirkwood_sata_init(&dreamplug_sata_data);
++	kirkwood_sdio_init(&dreamplug_mvsdio_data);
++
++	platform_device_register(&dreamplug_leds);
++}
+Index: sid/arch/arm/mach-kirkwood/board-dt.c
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/board-dt.c	2012-06-10 20:02:22.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/board-dt.c	2012-06-10 20:02:33.721135970 +0200
+@@ -3,7 +3,7 @@
+  *
+  * arch/arm/mach-kirkwood/board-dt.c
+  *
+- * Marvell DreamPlug Reference Board Setup
++ * Flattened Device Tree board initialization
+  *
+  * This file is licensed under the terms of the GNU General Public
+  * License version 2.  This program is licensed "as is" without any
+@@ -12,149 +12,18 @@
+ 
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/platform_device.h>
+-#include <linux/mtd/partitions.h>
+-#include <linux/ata_platform.h>
+-#include <linux/mv643xx_eth.h>
+ #include <linux/of.h>
+-#include <linux/of_address.h>
+-#include <linux/of_fdt.h>
+-#include <linux/of_irq.h>
+ #include <linux/of_platform.h>
+-#include <linux/gpio.h>
+-#include <linux/leds.h>
+-#include <linux/mtd/physmap.h>
+-#include <linux/spi/flash.h>
+-#include <linux/spi/spi.h>
+-#include <linux/spi/orion_spi.h>
+-#include <asm/mach-types.h>
+ #include <asm/mach/arch.h>
+ #include <asm/mach/map.h>
+-#include <mach/kirkwood.h>
+ #include <mach/bridge-regs.h>
+-#include <plat/mvsdio.h>
+ #include "common.h"
+-#include "mpp.h"
+ 
+ static struct of_device_id kirkwood_dt_match_table[] __initdata = {
+ 	{ .compatible = "simple-bus", },
+ 	{ }
+ };
+ 
+-struct mtd_partition dreamplug_partitions[] = {
+-	{
+-		.name	= "u-boot",
+-		.size	= SZ_512K,
+-		.offset = 0,
+-	},
+-	{
+-		.name	= "u-boot env",
+-		.size	= SZ_64K,
+-		.offset = SZ_512K + SZ_512K,
+-	},
+-	{
+-		.name	= "dtb",
+-		.size	= SZ_64K,
+-		.offset = SZ_512K + SZ_512K + SZ_512K,
+-	},
+-};
+-
+-static const struct flash_platform_data dreamplug_spi_slave_data = {
+-	.type		= "mx25l1606e",
+-	.name		= "spi_flash",
+-	.parts		= dreamplug_partitions,
+-	.nr_parts	= ARRAY_SIZE(dreamplug_partitions),
+-};
+-
+-static struct spi_board_info __initdata dreamplug_spi_slave_info[] = {
+-	{
+-		.modalias	= "m25p80",
+-		.platform_data	= &dreamplug_spi_slave_data,
+-		.irq		= -1,
+-		.max_speed_hz	= 50000000,
+-		.bus_num	= 0,
+-		.chip_select	= 0,
+-	},
+-};
+-
+-static struct mv643xx_eth_platform_data dreamplug_ge00_data = {
+-	.phy_addr	= MV643XX_ETH_PHY_ADDR(0),
+-};
+-
+-static struct mv643xx_eth_platform_data dreamplug_ge01_data = {
+-	.phy_addr	= MV643XX_ETH_PHY_ADDR(1),
+-};
+-
+-static struct mv_sata_platform_data dreamplug_sata_data = {
+-	.n_ports	= 1,
+-};
+-
+-static struct mvsdio_platform_data dreamplug_mvsdio_data = {
+-	/* unfortunately the CD signal has not been connected */
+-};
+-
+-static struct gpio_led dreamplug_led_pins[] = {
+-	{
+-		.name			= "dreamplug:blue:bluetooth",
+-		.gpio			= 47,
+-		.active_low		= 1,
+-	},
+-	{
+-		.name			= "dreamplug:green:wifi",
+-		.gpio			= 48,
+-		.active_low		= 1,
+-	},
+-	{
+-		.name			= "dreamplug:green:wifi_ap",
+-		.gpio			= 49,
+-		.active_low		= 1,
+-	},
+-};
+-
+-static struct gpio_led_platform_data dreamplug_led_data = {
+-	.leds		= dreamplug_led_pins,
+-	.num_leds	= ARRAY_SIZE(dreamplug_led_pins),
+-};
+-
+-static struct platform_device dreamplug_leds = {
+-	.name	= "leds-gpio",
+-	.id	= -1,
+-	.dev	= {
+-		.platform_data	= &dreamplug_led_data,
+-	}
+-};
+-
+-static unsigned int dreamplug_mpp_config[] __initdata = {
+-	MPP0_SPI_SCn,
+-	MPP1_SPI_MOSI,
+-	MPP2_SPI_SCK,
+-	MPP3_SPI_MISO,
+-	MPP47_GPIO,	/* Bluetooth LED */
+-	MPP48_GPIO,	/* Wifi LED */
+-	MPP49_GPIO,	/* Wifi AP LED */
+-	0
+-};
+-
+-static void __init dreamplug_init(void)
+-{
+-	/*
+-	 * Basic setup. Needs to be called early.
+-	 */
+-	kirkwood_mpp_conf(dreamplug_mpp_config);
+-
+-	spi_register_board_info(dreamplug_spi_slave_info,
+-				ARRAY_SIZE(dreamplug_spi_slave_info));
+-	kirkwood_spi_init();
+-
+-	kirkwood_ehci_init();
+-	kirkwood_ge00_init(&dreamplug_ge00_data);
+-	kirkwood_ge01_init(&dreamplug_ge01_data);
+-	kirkwood_sata_init(&dreamplug_sata_data);
+-	kirkwood_sdio_init(&dreamplug_mvsdio_data);
+-
+-	platform_device_register(&dreamplug_leds);
+-}
+-
+ static void __init kirkwood_dt_init(void)
+ {
+ 	pr_info("Kirkwood: %s, TCLK=%d.\n", kirkwood_id(), kirkwood_tclk);
+Index: sid/arch/arm/mach-kirkwood/common.h
+===================================================================
+--- sid.orig/arch/arm/mach-kirkwood/common.h	2012-06-10 20:02:22.000000000 +0200
++++ sid/arch/arm/mach-kirkwood/common.h	2012-06-10 20:02:33.721135970 +0200
+@@ -51,6 +51,14 @@ void kirkwood_nand_init(struct mtd_parti
+ void kirkwood_nand_init_rnb(struct mtd_partition *parts, int nr_parts, int (*dev_ready)(struct mtd_info *));
+ void kirkwood_audio_init(void);
+ 
++/* board init functions for boards not fully converted to fdt */
++#ifdef CONFIG_MACH_DREAMPLUG_DT
++void dreamplug_init(void);
++#else
++static inline void dreamplug_init(void) {};
++#endif
++
++/* early init functions not converted to fdt yet */
+ char *kirkwood_id(void);
+ void kirkwood_l2_init(void);
+ void kirkwood_rtc_init(void);

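After this split, `board-dt.c` selects per-board setup with `of_machine_is_compatible()` calls. The same dispatch can be pictured as a table keyed by compatible string — a user-space sketch, with counters in place of the real `dreamplug_init()`/`iconnect_init()`:

```c
#include <string.h>

static int dreamplug_calls, iconnect_calls;
static void dreamplug_init(void) { dreamplug_calls++; }
static void iconnect_init(void)  { iconnect_calls++; }

struct board { const char *compatible; void (*init)(void); };

static const struct board boards[] = {
    { "globalscale,dreamplug", dreamplug_init },
    { "iom,iconnect",          iconnect_init  },
};

/* Mimic the of_machine_is_compatible() chain in kirkwood_dt_init(). */
static int board_init(const char *machine_compatible)
{
    for (size_t i = 0; i < sizeof(boards) / sizeof(boards[0]); i++)
        if (strcmp(boards[i].compatible, machine_compatible) == 0) {
            boards[i].init();
            return 0;
        }
    return -1; /* unknown board: only the common SoC devices get set up */
}
```

The point of the file move is exactly this separation: the table (and generic SoC bring-up) stays in `board-dt.c`, while each board's init body lives in its own file that only builds when its Kconfig symbol is enabled.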
Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-use-mrvl-ticker-symbol.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-use-mrvl-ticker-symbol.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fdt-use-mrvl-ticker-symbol.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fdt-use-mrvl-ticker-symbol.patch)
@@ -0,0 +1,44 @@
+commit b77816dea3e4c0f815510dea2a0ca9bcda6644dc
+Author: Jason Cooper <jason at lakedaemon.net>
+Date:   Wed Mar 7 15:03:57 2012 +0000
+
+    ARM: kirkwood: fdt: use mrvl ticker symbol
+    
+    Also, use inclusive register size for uart0.
+    
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+
+diff --git a/arch/arm/boot/dts/kirkwood-dreamplug.dts b/arch/arm/boot/dts/kirkwood-dreamplug.dts
+index 8a5dff8..333f11b 100644
+--- a/arch/arm/boot/dts/kirkwood-dreamplug.dts
++++ b/arch/arm/boot/dts/kirkwood-dreamplug.dts
+@@ -4,7 +4,7 @@
+ 
+ / {
+ 	model = "Globalscale Technologies Dreamplug";
+-	compatible = "globalscale,dreamplug-003-ds2001", "globalscale,dreamplug", "marvell,kirkwood-88f6281", "marvell,kirkwood";
++	compatible = "globalscale,dreamplug-003-ds2001", "globalscale,dreamplug", "mrvl,kirkwood-88f6281", "mrvl,kirkwood";
+ 
+ 	memory {
+ 		device_type = "memory";
+@@ -17,7 +17,7 @@
+ 
+ 	serial at f1012000 {
+ 		compatible = "ns16550a";
+-		reg = <0xf1012000 0xff>;
++		reg = <0xf1012000 0x100>;
+ 		reg-shift = <2>;
+ 		interrupts = <33>;
+ 		clock-frequency = <200000000>;
+diff --git a/arch/arm/boot/dts/kirkwood.dtsi b/arch/arm/boot/dts/kirkwood.dtsi
+index 771c6bb..702b955 100644
+--- a/arch/arm/boot/dts/kirkwood.dtsi
++++ b/arch/arm/boot/dts/kirkwood.dtsi
+@@ -1,6 +1,6 @@
+ /include/ "skeleton.dtsi"
+ 
+ / {
+-	compatible = "marvell,kirkwood";
++	compatible = "mrvl,kirkwood";
+ };
+ 

Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fix-orion_gpio_set_blink.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fix-orion_gpio_set_blink.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-fix-orion_gpio_set_blink.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-fix-orion_gpio_set_blink.patch)
@@ -0,0 +1,29 @@
+commit 92a486eabefadca1169fbf15d737feeaf2bda844
+Author: Arnaud Patard (Rtp) <arnaud.patard at rtp-net.org>
+Date:   Wed Apr 18 23:16:39 2012 +0200
+
+    kirkwood/orion: fix orion_gpio_set_blink
+    
+    gpio registers are for 32 gpios. Given that orion_gpio_set_blink is called
+    directly and not through gpiolib, it needs to make sure that the pin value
+    given to the internal functions are between 0 and 31.
+    
+    Signed-off-by: Arnaud Patard <arnaud.patard at rtp-net.org>
+    Tested-By: Adam Baker <linux at baker-net.org.uk>
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+
+diff --git a/arch/arm/plat-orion/gpio.c b/arch/arm/plat-orion/gpio.c
+index 10d1608..d3401e7 100644
+--- a/arch/arm/plat-orion/gpio.c
++++ b/arch/arm/plat-orion/gpio.c
+@@ -289,8 +289,8 @@ void orion_gpio_set_blink(unsigned pin, int blink)
+ 		return;
+ 
+ 	spin_lock_irqsave(&ochip->lock, flags);
+-	__set_level(ochip, pin, 0);
+-	__set_blinking(ochip, pin, blink);
++	__set_level(ochip, pin & 31, 0);
++	__set_blinking(ochip, pin & 31, blink);
+ 	spin_unlock_irqrestore(&ochip->lock, flags);
+ }
+ EXPORT_SYMBOL(orion_gpio_set_blink);

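The fix above masks the pin with `& 31` because each Orion GPIO bank's registers cover exactly 32 pins, while callers pass a chip-global pin number. The split looks like this:

```c
/* A global pin number decomposes into (bank, offset-within-bank):
 * bank   = pin / 32  (which register file)
 * offset = pin % 32  (which bit inside it) */
static unsigned gpio_bank(unsigned pin)   { return pin >> 5; }
static unsigned gpio_offset(unsigned pin) { return pin & 31; }
```

Without the mask, a pin in the second bank (e.g. 47 on a DreamPlug LED) would index bit 47 of a 32-bit register — undefined shifts and writes to the wrong bits.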
Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-rtc-mv-devicetree-bindings.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-rtc-mv-devicetree-bindings.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-rtc-mv-devicetree-bindings.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-rtc-mv-devicetree-bindings.patch)
@@ -0,0 +1,42 @@
+commit ea983ede1195982c64220e9030c28ff111c8655c
+Author: Jason Cooper <jason at lakedaemon.net>
+Date:   Tue Mar 6 23:53:57 2012 +0000
+
+    ARM: kirkwood: rtc-mv devicetree bindings
+    
+    Trivial conversion to devicetree.
+    
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+
+diff --git a/drivers/rtc/rtc-mv.c b/drivers/rtc/rtc-mv.c
+index 768e2ed..0dd8421 100644
+--- a/drivers/rtc/rtc-mv.c
++++ b/drivers/rtc/rtc-mv.c
+@@ -12,6 +12,7 @@
+ #include <linux/bcd.h>
+ #include <linux/io.h>
+ #include <linux/platform_device.h>
++#include <linux/of.h>
+ #include <linux/delay.h>
+ #include <linux/gfp.h>
+ #include <linux/module.h>
+@@ -294,11 +295,19 @@ static int __exit mv_rtc_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_OF
++static struct of_device_id rtc_mv_of_match_table[] = {
++	{ .compatible = "mrvl,orion-rtc", },
++	{}
++};
++#endif
++
+ static struct platform_driver mv_rtc_driver = {
+ 	.remove		= __exit_p(mv_rtc_remove),
+ 	.driver		= {
+ 		.name	= "rtc-mv",
+ 		.owner	= THIS_MODULE,
++		.of_match_table = of_match_ptr(rtc_mv_of_match_table),
+ 	},
+ };
+ 

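The rtc-mv hunk guards its match table with `CONFIG_OF` and points `.of_match_table` at it through `of_match_ptr()`, which resolves to `NULL` when `CONFIG_OF` is off so non-OF builds carry no dead table. A user-space mock of that macro (the `of_device_id` struct here is reduced to one field for illustration):

```c
#include <stddef.h>

/* Mock of the kernel's of_match_ptr(): the table when CONFIG_OF is
 * set, NULL otherwise, so the identifier need not even exist in
 * non-OF builds. */
#ifdef CONFIG_OF
#define of_match_ptr(p) (p)
#else
#define of_match_ptr(p) NULL
#endif

struct of_device_id { const char *compatible; };

#ifdef CONFIG_OF
static const struct of_device_id rtc_mv_of_match_table[] = {
    { .compatible = "mrvl,orion-rtc" },
    { }
};
#endif

static const struct of_device_id *match_table = of_match_ptr(rtc_mv_of_match_table);
```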
Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-use-devicetree-for-rtc-mv.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-use-devicetree-for-rtc-mv.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood-use-devicetree-for-rtc-mv.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood-use-devicetree-for-rtc-mv.patch)
@@ -0,0 +1,62 @@
+commit e871b87a1e978e618c75acd4ceb6cd4699728691
+Author: Jason Cooper <jason at lakedaemon.net>
+Date:   Tue Mar 6 23:55:04 2012 +0000
+
+    ARM: kirkwood: use devicetree for rtc-mv
+    
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+    Acked-by: Arnd Bergmann <arnd at arndb.de>
+
+diff --git a/arch/arm/boot/dts/kirkwood.dtsi b/arch/arm/boot/dts/kirkwood.dtsi
+index 825310b..3474ef8 100644
+--- a/arch/arm/boot/dts/kirkwood.dtsi
++++ b/arch/arm/boot/dts/kirkwood.dtsi
+@@ -26,5 +26,11 @@
+ 			/* set clock-frequency in board dts */
+ 			status = "disabled";
+ 		};
++
++		rtc at 10300 {
++			compatible = "mrvl,kirkwood-rtc", "mrvl,orion-rtc";
++			reg = <0x10300 0x20>;
++			interrupts = <53>;
++		};
+ 	};
+ };
+diff --git a/arch/arm/mach-kirkwood/board-dt.c b/arch/arm/mach-kirkwood/board-dt.c
+index 975ad01..1c672d9 100644
+--- a/arch/arm/mach-kirkwood/board-dt.c
++++ b/arch/arm/mach-kirkwood/board-dt.c
+@@ -43,7 +43,6 @@ static void __init kirkwood_dt_init(void)
+ #endif
+ 
+ 	/* internal devices that every board has */
+-	kirkwood_rtc_init();
+ 	kirkwood_wdt_init();
+ 	kirkwood_xor0_init();
+ 	kirkwood_xor1_init();
+diff --git a/arch/arm/mach-kirkwood/common.c b/arch/arm/mach-kirkwood/common.c
+index 04a7eb9..a02cae8 100644
+--- a/arch/arm/mach-kirkwood/common.c
++++ b/arch/arm/mach-kirkwood/common.c
+@@ -163,7 +163,7 @@ void __init kirkwood_nand_init_rnb(struct mtd_partition *parts, int nr_parts,
+ /*****************************************************************************
+  * SoC RTC
+  ****************************************************************************/
+-void __init kirkwood_rtc_init(void)
++static void __init kirkwood_rtc_init(void)
+ {
+ 	orion_rtc_init(RTC_PHYS_BASE, IRQ_KIRKWOOD_RTC);
+ }
+diff --git a/arch/arm/mach-kirkwood/common.h b/arch/arm/mach-kirkwood/common.h
+index 4737578..fa8e768 100644
+--- a/arch/arm/mach-kirkwood/common.h
++++ b/arch/arm/mach-kirkwood/common.h
+@@ -61,7 +61,6 @@ static inline void dreamplug_init(void) {};
+ /* early init functions not converted to fdt yet */
+ char *kirkwood_id(void);
+ void kirkwood_l2_init(void);
+-void kirkwood_rtc_init(void);
+ void kirkwood_wdt_init(void);
+ void kirkwood_xor0_init(void);
+ void kirkwood_xor1_init(void);

Copied: dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood_add_missing_kexec_h.patch (from r19226, dists/sid/linux/debian/patches/features/arm/kirkwood_add_missing_kexec_h.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/kirkwood_add_missing_kexec_h.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/kirkwood_add_missing_kexec_h.patch)
@@ -0,0 +1,30 @@
+commit a7ac56de8316c0eb1111824c9add045cac2bd7a2
+Author: Ian Campbell <ijc at hellion.org.uk>
+Date:   Sun Apr 29 14:40:42 2012 +0100
+
+    ARM: kirkwood: add missing kexec.h include
+    
+    Fixes the following build error when CONFIG_KEXEC is enabled:
+      CC      arch/arm/mach-kirkwood/board-dt.o
+    arch/arm/mach-kirkwood/board-dt.c: In function 'kirkwood_dt_init':
+    arch/arm/mach-kirkwood/board-dt.c:52:2: error: 'kexec_reinit' undeclared (first use in this function)
+    arch/arm/mach-kirkwood/board-dt.c:52:2: note: each undeclared identifier is reported only once for each function it appears in
+    
+    Signed-off-by: Ian Campbell <ijc at hellion.org.uk>
+    [v4, rebase onto recent Linus for repost]
+    [v3, speak actual English in the commit message, thanks Sergei Shtylyov]
+    [v2, using linux/kexec.h not asm/kexec.h]
+    Signed-off-by: Jason Cooper <jason at lakedaemon.net>
+
+diff --git a/arch/arm/mach-kirkwood/board-dt.c b/arch/arm/mach-kirkwood/board-dt.c
+index 1c672d9..f7fe1b9 100644
+--- a/arch/arm/mach-kirkwood/board-dt.c
++++ b/arch/arm/mach-kirkwood/board-dt.c
+@@ -14,6 +14,7 @@
+ #include <linux/init.h>
+ #include <linux/of.h>
+ #include <linux/of_platform.h>
++#include <linux/kexec.h>
+ #include <asm/mach/arch.h>
+ #include <asm/mach/map.h>
+ #include <mach/bridge-regs.h>

Copied: dists/squeeze-backports/linux/debian/patches/features/arm/net-drop-NET-dependency-from-HAVE_BPF_JIT.patch (from r19226, dists/sid/linux/debian/patches/features/arm/net-drop-NET-dependency-from-HAVE_BPF_JIT.patch)
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ dists/squeeze-backports/linux/debian/patches/features/arm/net-drop-NET-dependency-from-HAVE_BPF_JIT.patch	Fri Aug 17 02:04:57 2012	(r19325, copy of r19226, dists/sid/linux/debian/patches/features/arm/net-drop-NET-dependency-from-HAVE_BPF_JIT.patch)
@@ -0,0 +1,86 @@
+From e47b65b032f2997aa0a7392ecdf656c86d4d7561 Mon Sep 17 00:00:00 2001
+From: Sam Ravnborg <sam at ravnborg.org>
+Date: Mon, 21 May 2012 20:45:37 +0200
+Subject: [PATCH] net: drop NET dependency from HAVE_BPF_JIT
+
+There is no point having the NET dependency on the select target, as it
+forces all users to depend on NET to tell they support BPF_JIT.  Move
+the config option to the bottom of the file - this could be a nice place
+also for future "selectable" config symbols.
+
+Fix up all users to drop the dependency on NET now that it is not
+required to supress warnings for non-NET builds.
+
+Reported-by: Linus Torvalds <torvalds at linux-foundation.org>
+Signed-off-by: Sam Ravnborg <sam at ravnborg.org>
+Acked-by: David Miller <davem at davemloft.net>
+Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
+---
+ arch/arm/Kconfig     |    2 +-
+ arch/powerpc/Kconfig |    2 +-
+ arch/sparc/Kconfig   |    2 +-
+ arch/x86/Kconfig     |    2 +-
+ net/Kconfig          |    7 ++++---
+ 5 files changed, 8 insertions(+), 7 deletions(-)
+
+Index: linux/arch/arm/Kconfig
+===================================================================
+--- linux.orig/arch/arm/Kconfig	2012-06-24 23:41:24.000000000 +0200
++++ linux/arch/arm/Kconfig	2012-06-24 23:49:03.000000000 +0200
+@@ -30,7 +30,7 @@
+ 	select HAVE_SPARSE_IRQ
+ 	select GENERIC_IRQ_SHOW
+ 	select CPU_PM if (SUSPEND || CPU_IDLE)
+-	select HAVE_BPF_JIT if NET
++	select HAVE_BPF_JIT
+ 	help
+ 	  The ARM series is a line of low-power-consumption RISC chip designs
+ 	  licensed by ARM Ltd and targeted at embedded applications and
+Index: linux/arch/powerpc/Kconfig
+===================================================================
+--- linux.orig/arch/powerpc/Kconfig	2012-06-20 00:18:30.000000000 +0200
++++ linux/arch/powerpc/Kconfig	2012-06-24 23:49:03.000000000 +0200
+@@ -134,7 +134,7 @@
+ 	select GENERIC_IRQ_SHOW_LEVEL
+ 	select HAVE_RCU_TABLE_FREE if SMP
+ 	select HAVE_SYSCALL_TRACEPOINTS
+-	select HAVE_BPF_JIT if (PPC64 && NET)
++	select HAVE_BPF_JIT if PPC64
+ 	select HAVE_ARCH_JUMP_LABEL
+ 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
+ 
+Index: linux/arch/x86/Kconfig
+===================================================================
+--- linux.orig/arch/x86/Kconfig	2012-06-23 17:09:51.000000000 +0200
++++ linux/arch/x86/Kconfig	2012-06-24 23:49:03.000000000 +0200
+@@ -72,7 +72,7 @@
+ 	select GENERIC_CLOCKEVENTS_MIN_ADJUST
+ 	select IRQ_FORCED_THREADING
+ 	select USE_GENERIC_SMP_HELPERS if SMP
+-	select HAVE_BPF_JIT if (X86_64 && NET)
++	select HAVE_BPF_JIT if X86_64
+ 	select CLKEVT_I8253
+ 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
+ 
+Index: linux/net/Kconfig
+===================================================================
+--- linux.orig/net/Kconfig	2012-06-20 00:18:30.000000000 +0200
++++ linux/net/Kconfig	2012-06-24 23:49:03.000000000 +0200
+@@ -232,9 +232,6 @@
+ 	depends on SMP && SYSFS && USE_GENERIC_SMP_HELPERS
+ 	default y
+ 
+-config HAVE_BPF_JIT
+-	bool
+-
+ config BPF_JIT
+ 	bool "enable BPF Just In Time compiler"
+ 	depends on HAVE_BPF_JIT
+@@ -326,3 +323,7 @@
+ 
+ 
+ endif   # if NET
++
++# Used by archs to tell that they support BPF_JIT
++config HAVE_BPF_JIT
++	bool

Modified: dists/squeeze-backports/linux/debian/patches/series
==============================================================================
--- dists/squeeze-backports/linux/debian/patches/series	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/patches/series	Fri Aug 17 02:04:57 2012	(r19325)
@@ -47,9 +47,6 @@
 
 bugfix/all/0004-media-staging-lirc_serial-Fix-bogus-error-codes.patch
 
-features/all/topology-Provide-CPU-topology-in-sysfs-in-SMP-configura.patch
-bugfix/all/cpu-Do-not-return-errors-from-cpu_dev_init-which-wil.patch
-bugfix/all/cpu-Register-a-generic-CPU-device-on-architectures-t.patch
 debian/x86-memtest-WARN-if-bad-RAM-found.patch
 bugfix/all/snapshot-Implement-compat_ioctl.patch
 debian/ARM-Remove-use-of-possibly-undefined-BUILD_BUG_ON-in.patch
@@ -181,16 +178,6 @@
 bugfix/all/brcmsmac-INTERMEDIATE-but-not-AMPDU-only-when-tracin.patch
 bugfix/all/NFSv4-Rate-limit-the-state-manager-for-lock-reclaim-.patch
 
-# Temporary, until the next ABI bump
-debian/revert-rtc-Provide-flag-for-rtc-devices-that-don-t-s.patch
-debian/nls-Avoid-ABI-change-from-improvement-to-utf8s_to_ut.patch
-debian/efi-avoid-ABI-change.patch
-debian/skbuff-avoid-ABI-change-in-3.2.17.patch
-debian/usb-hcd-avoid-ABI-change-in-3.2.17.patch
-debian/fork-avoid-ABI-change-in-3.2.18.patch
-debian/mmc-Avoid-ABI-change-in-3.2.19.patch
-debian/net-restore-skb_set_dev-removed-in-3.2.20.patch
-
 bugfix/all/ext4-Report-max_batch_time-option-correctly.patch
 
 # Update wacom driver to 3.5ish
@@ -219,6 +206,10 @@
 features/all/wacom/0024-Input-wacom-retrieve-maximum-number-of-touch-points.patch
 features/all/wacom/0025-Input-wacom-add-0xE5-MT-device-support.patch
 features/all/wacom/0026-Input-wacom-return-proper-error-if-usb_get_extra_des.patch
+features/all/wacom/0027-wacom-do-not-crash-when-retrieving-touch_max.patch
+features/all/wacom/0028-wacom-leave-touch_max-as-is-if-predefined.patch
+features/all/wacom/0029-wacom-do-not-request-tablet-data-on-MT-Tablet-PC-pen.patch
+features/all/wacom/0030-wacom-ignore-new-style-Wacom-multi-touch-packets-on-.patch
 
 # Add support for Ralink RT5392/RF5372 chipset
 features/all/rt2x00-add-debug-message-for-new-chipset.patch
@@ -298,6 +289,7 @@
 
 # AppArmor userland compatibility.  This had better be gone in wheezy+1!
 features/all/AppArmor-compatibility-patch-for-v5-interface.patch
+bugfix/all/apparmor-remove-advertising-the-support-of-network-r.patch
 
 bugfix/x86/mm-pmd_read_atomic-fix-32bit-pae-pmd-walk-vs-pmd_populate-smp-race.patch
 bugfix/x86/thp-avoid-atomic64_read-in-pmd_read_atomic-for-32bit-pae.patch
@@ -312,8 +304,6 @@
 features/all/hidepid/0002-procfs-add-hidepid-and-gid-mount-options.patch
 features/all/hidepid/0003-proc-fix-null-pointer-deref-in-proc_pid_permission.patch
 features/all/hidepid/0004-proc-fix-mount-t-proc-o-AAA.patch
-# Temporary, until the next ABI bump
-debian/avoid-ABI-change-for-hidepid.patch
 
 bugfix/all/NFSv4-Reduce-the-footprint-of-the-idmapper.patch
 bugfix/all/NFSv4-Further-reduce-the-footprint-of-the-idmapper.patch
@@ -326,3 +316,59 @@
 bugfix/all/macvtap-zerocopy-validate-vectors-before-building-sk.patch
 
 bugfix/all/KVM-Fix-buffer-overflow-in-kvm_set_irq.patch
+bugfix/all/ethtool-allow-ETHTOOL_GSSET_INFO-for-users.patch
+
+# CPU sysdev removal from 3.3 and x86 CPU auto-loading from 3.4
+features/all/cpu-devices/driver-core-implement-sysdev-functionality-for-regul.patch
+features/all/cpu-devices/cpu-convert-cpu-and-machinecheck-sysdev_class-to-a-r.patch
+features/all/cpu-devices/topology-Provide-CPU-topology-in-sysfs-in-SMP-configura.patch
+features/all/cpu-devices/cpu-Do-not-return-errors-from-cpu_dev_init-which-wil.patch
+features/all/cpu-devices/cpu-Register-a-generic-CPU-device-on-architectures-t.patch
+features/all/cpu-devices/x86-mce-Fix-CPU-hotplug-and-suspend-regression-relat.patch
+features/all/cpu-devices/mce-fix-warning-messages-about-static-struct-mce_dev.patch
+features/all/cpu-devices/x86-mce-Convert-static-array-of-pointers-to-per-cpu-.patch
+features/all/cpu-devices/Add-driver-auto-probing-for-x86-features-v4.patch
+features/all/cpu-devices/CPU-Introduce-ARCH_HAS_CPU_AUTOPROBE-and-X86-parts.patch
+features/all/cpu-devices/driver-core-cpu-remove-kernel-warning-when-removing-.patch
+features/all/cpu-devices/driver-core-cpu-fix-kobject-warning-when-hotplugging.patch
+features/all/cpu-devices/crypto-Add-support-for-x86-cpuid-auto-loading-for-x8.patch
+features/all/cpu-devices/intel-idle-convert-to-x86_cpu_id-auto-probing.patch
+features/all/cpu-devices/ACPI-Load-acpi-cpufreq-from-processor-driver-automat.patch
+features/all/cpu-devices/HWMON-Convert-via-cputemp-to-x86-cpuid-autoprobing.patch
+features/all/cpu-devices/HWMON-Convert-coretemp-to-x86-cpuid-autoprobing.patch
+features/all/cpu-devices/X86-Introduce-HW-Pstate-scattered-cpuid-feature.patch
+features/all/cpu-devices/cpufreq-Add-support-for-x86-cpuinfo-auto-loading-v4.patch
+features/all/cpu-devices/x86-cpu-Fix-overrun-check-in-arch_print_cpu_modalias.patch
+features/all/cpu-devices/x86-cpu-Clean-up-modalias-feature-matching.patch
+features/all/cpu-devices/intel_idle-Fix-ID-for-Nehalem-EX-Xeon-in-device-ID-t.patch
+features/all/cpu-devices/powernow-k7-Fix-CPU-family-number.patch
+features/all/cpu-devices/powernow-k6-Really-enable-auto-loading.patch
+features/all/cpu-devices/intel_idle-Revert-change-of-auto_demotion_disable_fl.patch
+features/all/cpu-devices/Partially-revert-cpufreq-Add-support-for-x86-cpuinfo.patch
+features/all/cpu-devices/cpufreq-gx-Fix-the-compile-error.patch
+features/all/cpu-devices/tracing-mm-Move-include-of-trace-events-kmem.h-out-o.patch
+features/all/cpu-devices/driver-core-remove-__must_check-from-device_create_f.patch
+
+features/arm/kirkwood-add-dreamplug-fdt-support.patch
+features/arm/kirkwood-fdt-convert-uart0-to-devicetree.patch
+features/arm/kirkwood-fdt-use-mrvl-ticker-symbol.patch
+features/arm/kirkwood-fdt-absorb-kirkwood_init.patch
+features/arm/kirkwood-fdt-facilitate-new-boards-during-fdt-migration.patch
+features/arm/kirkwood-fdt-define-uart01-as-disabled.patch
+features/arm/kirkwood-rtc-mv-devicetree-bindings.patch
+features/arm/kirkwood-use-devicetree-for-rtc-mv.patch
+features/arm/kirkwood_add_missing_kexec_h.patch
+features/arm/kirkwood-fix-orion_gpio_set_blink.patch
+features/arm/kirkwood-create-a-generic-function-for-gpio-led-blinking.patch
+features/arm/kirkwood-add-configuration-for-mpp12-as-gpio.patch
+features/arm/kirkwood-add-iconnect-support.patch
+
+features/all/Input-add-Synaptics-USB-device-driver.patch
+features/arm/ARM-7259-3-net-JIT-compiler-for-packet-filters.patch
+features/arm/ARM-fix-Kconfig-warning-for-HAVE_BPF_JIT.patch
+features/arm/net-drop-NET-dependency-from-HAVE_BPF_JIT.patch
+
+bugfix/all/xen-netfront-teardown-the-device-before-unregistering-it.patch
+
+# Until next ABI bump
+debian/driver-core-avoid-ABI-change-for-removal-of-__must_check.patch

Modified: dists/squeeze-backports/linux/debian/patches/series-rt
==============================================================================
--- dists/squeeze-backports/linux/debian/patches/series-rt	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/patches/series-rt	Fri Aug 17 02:04:57 2012	(r19325)
@@ -1,266 +1,269 @@
-features/all/rt/0001-x86-Call-idle-notifier-after-irq_enter.patch
-features/all/rt/0002-slab-lockdep-Annotate-all-slab-caches.patch
-features/all/rt/0003-x86-kprobes-Remove-remove-bogus-preempt_enable.patch
-features/all/rt/0004-x86-hpet-Disable-MSI-on-Lenovo-W510.patch
-features/all/rt/0005-block-Shorten-interrupt-disabled-regions.patch
-features/all/rt/0006-sched-Distangle-worker-accounting-from-rq-3Elock.patch
-features/all/rt/0007-mips-enable-interrupts-in-signal.patch.patch
-features/all/rt/0008-arm-enable-interrupts-in-signal-code.patch.patch
-features/all/rt/0009-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch
-features/all/rt/0010-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch
-features/all/rt/0011-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch
-features/all/rt/0012-powerpc-Allow-irq-threading.patch
-features/all/rt/0013-sched-Keep-period-timer-ticking-when-throttling-acti.patch
-features/all/rt/0014-sched-Do-not-throttle-due-to-PI-boosting.patch
-features/all/rt/0015-time-Remove-bogus-comments.patch
-features/all/rt/0016-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch
-features/all/rt/0017-x86-vdso-Use-seqcount-instead-of-seqlock.patch
-features/all/rt/0018-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch
-features/all/rt/0019-seqlock-Remove-unused-functions.patch
-features/all/rt/0020-seqlock-Use-seqcount.patch
-features/all/rt/0021-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch
-features/all/rt/0022-timekeeping-Split-xtime_lock.patch
-features/all/rt/0023-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch
-features/all/rt/0024-mm-memcg-shorten-preempt-disabled-section-around-eve.patch
-features/all/rt/0025-tracing-Account-for-preempt-off-in-preempt_schedule.patch
-features/all/rt/0026-signal-revert-ptrace-preempt-magic.patch.patch
-features/all/rt/0027-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch
-features/all/rt/0028-arm-Allow-forced-irq-threading.patch
-features/all/rt/0029-preempt-rt-Convert-arm-boot_lock-to-raw.patch
-features/all/rt/0030-sched-Create-schedule_preempt_disabled.patch
-features/all/rt/0031-sched-Use-schedule_preempt_disabled.patch
-features/all/rt/0032-signals-Do-not-wakeup-self.patch
-features/all/rt/0033-posix-timers-Prevent-broadcast-signals.patch
-features/all/rt/0034-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch
-features/all/rt/0035-signal-x86-Delay-calling-signals-in-atomic.patch
-features/all/rt/0036-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch
-features/all/rt/0037-drivers-random-Reduce-preempt-disabled-region.patch
-features/all/rt/0038-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch
-features/all/rt/0039-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch
-features/all/rt/0040-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch
-features/all/rt/0041-drivers-net-Use-disable_irq_nosync-in-8139too.patch
-features/all/rt/0042-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch
-features/all/rt/0043-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch
-features/all/rt/0044-preempt-mark-legitimated-no-resched-sites.patch.patch
-features/all/rt/0045-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch
-features/all/rt/0046-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch
-features/all/rt/0047-mm-pagefault_disabled.patch
-features/all/rt/0048-mm-raw_pagefault_disable.patch
-features/all/rt/0049-filemap-fix-up.patch.patch
-features/all/rt/0050-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch
-features/all/rt/0051-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch
-features/all/rt/0052-suspend-Prevent-might-sleep-splats.patch
-features/all/rt/0053-OF-Fixup-resursive-locking-code-paths.patch
-features/all/rt/0054-of-convert-devtree-lock.patch.patch
-features/all/rt/0055-list-add-list-last-entry.patch.patch
-features/all/rt/0056-mm-page-alloc-use-list-last-entry.patch.patch
-features/all/rt/0057-mm-slab-move-debug-out.patch.patch
-features/all/rt/0058-rwsem-inlcude-fix.patch.patch
-features/all/rt/0059-sysctl-include-fix.patch.patch
-features/all/rt/0060-net-flip-lock-dep-thingy.patch.patch
-features/all/rt/0061-softirq-thread-do-softirq.patch.patch
-features/all/rt/0062-softirq-split-out-code.patch.patch
-features/all/rt/0063-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch
-features/all/rt/0064-x86-32-fix-signal-crap.patch.patch
-features/all/rt/0065-x86-Do-not-disable-preemption-in-int3-on-32bit.patch
-features/all/rt/0066-rcu-Reduce-lock-section.patch
-features/all/rt/0067-locking-various-init-fixes.patch.patch
-features/all/rt/0068-wait-Provide-__wake_up_all_locked.patch
-features/all/rt/0069-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch
-features/all/rt/0070-latency-hist.patch.patch
-features/all/rt/0071-hwlatdetect.patch.patch
-features/all/rt/0073-early-printk-consolidate.patch.patch
-features/all/rt/0074-printk-kill.patch.patch
-features/all/rt/0075-printk-force_early_printk-boot-param-to-help-with-de.patch
-features/all/rt/0076-rt-preempt-base-config.patch.patch
-features/all/rt/0077-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch
-features/all/rt/0078-rt-local_irq_-variants-depending-on-RT-RT.patch
-features/all/rt/0079-preempt-Provide-preempt_-_-no-rt-variants.patch
-features/all/rt/0080-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch
-features/all/rt/0081-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch
-features/all/rt/0082-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch
-features/all/rt/0083-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch
-features/all/rt/0084-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch
-features/all/rt/0085-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch
-features/all/rt/0086-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch
-features/all/rt/0087-usb-Use-local_irq_-_nort-variants.patch
-features/all/rt/0088-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch
-features/all/rt/0089-mm-scatterlist-dont-disable-irqs-on-RT.patch
-features/all/rt/0090-signal-fix-up-rcu-wreckage.patch.patch
-features/all/rt/0091-net-wireless-warn-nort.patch.patch
-features/all/rt/0092-mm-Replace-cgroup_page-bit-spinlock.patch
-features/all/rt/0093-buffer_head-Replace-bh_uptodate_lock-for-rt.patch
-features/all/rt/0094-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch
-features/all/rt/0095-genirq-Disable-DEBUG_SHIRQ-for-rt.patch
-features/all/rt/0096-genirq-Disable-random-call-on-preempt-rt.patch
-features/all/rt/0097-genirq-disable-irqpoll-on-rt.patch
-features/all/rt/0098-genirq-force-threading.patch.patch
-features/all/rt/0099-drivers-net-fix-livelock-issues.patch
-features/all/rt/0100-drivers-net-vortex-fix-locking-issues.patch
-features/all/rt/0101-drivers-net-gianfar-Make-RT-aware.patch
-features/all/rt/0102-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch
-features/all/rt/0103-local-var.patch.patch
-features/all/rt/0104-rt-local-irq-lock.patch.patch
-features/all/rt/0105-cpu-rt-variants.patch.patch
-features/all/rt/0106-mm-slab-wrap-functions.patch.patch
-features/all/rt/0107-slab-Fix-__do_drain-to-use-the-right-array-cache.patch
-features/all/rt/0108-mm-More-lock-breaks-in-slab.c.patch
-features/all/rt/0109-mm-page_alloc-rt-friendly-per-cpu-pages.patch
-features/all/rt/0110-mm-page_alloc-reduce-lock-sections-further.patch
-features/all/rt/0111-mm-page-alloc-fix.patch.patch
-features/all/rt/0112-mm-convert-swap-to-percpu-locked.patch
-features/all/rt/0113-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch
-features/all/rt/0114-mm-make-vmstat-rt-aware.patch
-features/all/rt/0115-mm-shrink-the-page-frame-to-rt-size.patch
-features/all/rt/0116-ARM-Initialize-ptl-lock-for-vector-page.patch
-features/all/rt/0117-mm-Allow-only-slab-on-RT.patch
-features/all/rt/0118-radix-tree-rt-aware.patch.patch
-features/all/rt/0119-panic-disable-random-on-rt.patch
-features/all/rt/0120-ipc-Make-the-ipc-code-rt-aware.patch
-features/all/rt/0121-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch
-features/all/rt/0122-relay-fix-timer-madness.patch
-features/all/rt/0123-net-ipv4-route-use-locks-on-up-rt.patch.patch
-features/all/rt/0124-workqueue-avoid-the-lock-in-cpu-dying.patch.patch
-features/all/rt/0125-timers-prepare-for-full-preemption.patch
-features/all/rt/0126-timers-preempt-rt-support.patch
-features/all/rt/0127-timers-fix-timer-hotplug-on-rt.patch
-features/all/rt/0128-timers-mov-printk_tick-to-soft-interrupt.patch
-features/all/rt/0129-timer-delay-waking-softirqs-from-the-jiffy-tick.patch
-features/all/rt/0130-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch
-features/all/rt/0131-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch
-features/all/rt/0132-hrtimers-prepare-full-preemption.patch
-features/all/rt/0133-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
-features/all/rt/0134-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch
-features/all/rt/0135-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch
-features/all/rt/0136-hrtimer-fix-reprogram-madness.patch.patch
-features/all/rt/0137-timer-fd-Prevent-live-lock.patch
-features/all/rt/0138-posix-timers-thread-posix-cpu-timers-on-rt.patch
-features/all/rt/0139-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch
-features/all/rt/0140-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch
-features/all/rt/0141-sched-delay-put-task.patch.patch
-features/all/rt/0142-sched-limit-nr-migrate.patch.patch
-features/all/rt/0143-sched-mmdrop-delayed.patch.patch
-features/all/rt/0144-sched-rt-mutex-wakeup.patch.patch
-features/all/rt/0145-sched-prevent-idle-boost.patch.patch
-features/all/rt/0146-sched-might-sleep-do-not-account-rcu-depth.patch.patch
-features/all/rt/0147-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch
-features/all/rt/0148-sched-cond-resched.patch.patch
-features/all/rt/0149-cond-resched-softirq-fix.patch.patch
-features/all/rt/0150-sched-no-work-when-pi-blocked.patch.patch
-features/all/rt/0151-cond-resched-lock-rt-tweak.patch.patch
-features/all/rt/0152-sched-disable-ttwu-queue.patch.patch
-features/all/rt/0153-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch
-features/all/rt/0154-sched-ttwu-Return-success-when-only-changing-the-sav.patch
-features/all/rt/0155-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch
-features/all/rt/0156-stomp-machine-mark-stomper-thread.patch.patch
-features/all/rt/0157-stomp-machine-raw-lock.patch.patch
-features/all/rt/0158-hotplug-Lightweight-get-online-cpus.patch
-features/all/rt/0159-hotplug-sync_unplug-No.patch
-features/all/rt/0160-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch
-features/all/rt/0161-sched-migrate-disable.patch.patch
-features/all/rt/0162-hotplug-use-migrate-disable.patch.patch
-features/all/rt/0163-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch
-features/all/rt/0164-ftrace-migrate-disable-tracing.patch.patch
-features/all/rt/0165-tracing-Show-padding-as-unsigned-short.patch
-features/all/rt/0166-migrate-disable-rt-variant.patch.patch
-features/all/rt/0167-sched-Optimize-migrate_disable.patch
-features/all/rt/0168-sched-Generic-migrate_disable.patch
-features/all/rt/0169-sched-rt-Fix-migrate_enable-thinko.patch
-features/all/rt/0170-sched-teach-migrate_disable-about-atomic-contexts.patch
-features/all/rt/0171-sched-Postpone-actual-migration-disalbe-to-schedule.patch
-features/all/rt/0172-sched-Do-not-compare-cpu-masks-in-scheduler.patch
-features/all/rt/0173-sched-Have-migrate_disable-ignore-bounded-threads.patch
-features/all/rt/0174-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch
-features/all/rt/0175-ftrace-crap.patch.patch
-features/all/rt/0176-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch
-features/all/rt/0177-net-netif_rx_ni-migrate-disable.patch.patch
-features/all/rt/0178-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch
-features/all/rt/0179-lockdep-rt.patch.patch
-features/all/rt/0180-mutex-no-spin-on-rt.patch.patch
-features/all/rt/0181-softirq-local-lock.patch.patch
-features/all/rt/0182-softirq-Export-in_serving_softirq.patch
-features/all/rt/0183-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch
-features/all/rt/0184-softirq-Fix-unplug-deadlock.patch
-features/all/rt/0185-softirq-disable-softirq-stacks-for-rt.patch.patch
-features/all/rt/0186-softirq-make-fifo.patch.patch
-features/all/rt/0187-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch
-features/all/rt/0188-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch
-features/all/rt/0189-local-vars-migrate-disable.patch.patch
-features/all/rt/0190-md-raid5-Make-raid5_percpu-handling-RT-aware.patch
-features/all/rt/0191-rtmutex-lock-killable.patch.patch
-features/all/rt/0192-rtmutex-futex-prepare-rt.patch.patch
-features/all/rt/0193-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
-features/all/rt/0194-rt-mutex-add-sleeping-spinlocks-support.patch.patch
-features/all/rt/0195-spinlock-types-separate-raw.patch.patch
-features/all/rt/0196-rtmutex-avoid-include-hell.patch.patch
-features/all/rt/0197-rt-add-rt-spinlocks.patch.patch
-features/all/rt/0198-rt-add-rt-to-mutex-headers.patch.patch
-features/all/rt/0199-rwsem-add-rt-variant.patch.patch
-features/all/rt/0200-rt-Add-the-preempt-rt-lock-replacement-APIs.patch
-features/all/rt/0201-rwlocks-Fix-section-mismatch.patch
-features/all/rt/0202-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch
-features/all/rt/0203-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch
-features/all/rt/0204-rcu-Frob-softirq-test.patch
-features/all/rt/0205-rcu-Merge-RCU-bh-into-RCU-preempt.patch
-features/all/rt/0206-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch
-features/all/rt/0207-rcu-more-fallout.patch.patch
-features/all/rt/0208-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch
-features/all/rt/0209-rt-rcutree-Move-misplaced-prototype.patch
-features/all/rt/0210-lglocks-rt.patch.patch
-features/all/rt/0211-serial-8250-Clean-up-the-locking-for-rt.patch
-features/all/rt/0212-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch
-features/all/rt/0213-drivers-tty-fix-omap-lock-crap.patch.patch
-features/all/rt/0214-rt-Improve-the-serial-console-PASS_LIMIT.patch
-features/all/rt/0215-fs-namespace-preemption-fix.patch
-features/all/rt/0216-mm-protect-activate-switch-mm.patch.patch
-features/all/rt/0217-fs-block-rt-support.patch.patch
-features/all/rt/0218-fs-ntfs-disable-interrupt-only-on-RT.patch
-features/all/rt/0219-x86-Convert-mce-timer-to-hrtimer.patch
-features/all/rt/0220-x86-stackprotector-Avoid-random-pool-on-rt.patch
-features/all/rt/0221-x86-Use-generic-rwsem_spinlocks-on-rt.patch
-features/all/rt/0222-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch
-features/all/rt/0223-workqueue-use-get-cpu-light.patch.patch
-features/all/rt/0224-epoll.patch.patch
-features/all/rt/0225-mm-vmalloc.patch.patch
-features/all/rt/revert-workqueue-skip-nr_running-sanity-check-in-wor.patch
-features/all/rt/0226-workqueue-Fix-cpuhotplug-trainwreck.patch
-features/all/rt/0227-workqueue-Fix-PF_THREAD_BOUND-abuse.patch
-features/all/rt/0228-workqueue-Use-get_cpu_light-in-flush_gcwq.patch
-features/all/rt/0229-hotplug-stuff.patch.patch
-features/all/rt/0230-debugobjects-rt.patch.patch
-features/all/rt/0231-jump-label-rt.patch.patch
-features/all/rt/0232-skbufhead-raw-lock.patch.patch
-features/all/rt/0233-x86-no-perf-irq-work-rt.patch.patch
-features/all/rt/0234-console-make-rt-friendly.patch.patch
-features/all/rt/0235-printk-Disable-migration-instead-of-preemption.patch
-features/all/rt/0236-power-use-generic-rwsem-on-rt.patch
-features/all/rt/0237-power-disable-highmem-on-rt.patch.patch
-features/all/rt/0238-arm-disable-highmem-on-rt.patch.patch
-features/all/rt/0239-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch
-features/all/rt/0240-mips-disable-highmem-on-rt.patch.patch
-features/all/rt/0241-net-Avoid-livelock-in-net_tx_action-on-RT.patch
-features/all/rt/0242-ping-sysrq.patch.patch
-features/all/rt/0243-kgdb-serial-Short-term-workaround.patch
-features/all/rt/0244-add-sys-kernel-realtime-entry.patch
-features/all/rt/0245-mm-rt-kmap_atomic-scheduling.patch
-features/all/rt/0246-ipc-sem-Rework-semaphore-wakeups.patch
-features/all/rt/0247-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch
-features/all/rt/0248-x86-kvm-require-const-tsc-for-rt.patch.patch
-features/all/rt/0249-scsi-fcoe-rt-aware.patch.patch
-features/all/rt/0250-x86-crypto-Reduce-preempt-disabled-regions.patch
-features/all/rt/0251-dm-Make-rt-aware.patch
-features/all/rt/0252-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch
-features/all/rt/0253-seqlock-Prevent-rt-starvation.patch
-features/all/rt/0254-timer-Fix-hotplug-for-rt.patch
-features/all/rt/0255-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch
-features/all/rt/0256-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch
-features/all/rt/0257-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch
-features/all/rt/0258-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch
-features/all/rt/0259-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch
-features/all/rt/0260-softirq-Check-preemption-after-reenabling-interrupts.patch
-features/all/rt/0261-rt-Introduce-cpu_chill.patch
-features/all/rt/0262-fs-dcache-Use-cpu_chill-in-trylock-loops.patch
-features/all/rt/0263-net-Use-cpu_chill-instead-of-cpu_relax.patch
-features/all/rt/0264-kconfig-disable-a-few-options-rt.patch.patch
-features/all/rt/0265-kconfig-preempt-rt-full.patch.patch
-features/all/rt/0266-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch
+features/all/rt/0001-Revert-workqueue-skip-nr_running-sanity-check-in-wor.patch
+features/all/rt/0002-x86-Call-idle-notifier-after-irq_enter.patch
+features/all/rt/0003-slab-lockdep-Annotate-all-slab-caches.patch
+features/all/rt/0004-x86-kprobes-Remove-remove-bogus-preempt_enable.patch
+features/all/rt/0005-x86-hpet-Disable-MSI-on-Lenovo-W510.patch
+features/all/rt/0006-block-Shorten-interrupt-disabled-regions.patch
+features/all/rt/0007-sched-Distangle-worker-accounting-from-rq-3Elock.patch
+features/all/rt/0008-mips-enable-interrupts-in-signal.patch.patch
+features/all/rt/0009-arm-enable-interrupts-in-signal-code.patch.patch
+features/all/rt/0010-powerpc-85xx-Mark-cascade-irq-IRQF_NO_THREAD.patch
+features/all/rt/0011-powerpc-wsp-Mark-opb-cascade-handler-IRQF_NO_THREAD.patch
+features/all/rt/0012-powerpc-Mark-IPI-interrupts-IRQF_NO_THREAD.patch
+features/all/rt/0013-powerpc-Allow-irq-threading.patch
+features/all/rt/0014-sched-Keep-period-timer-ticking-when-throttling-acti.patch
+features/all/rt/0015-sched-Do-not-throttle-due-to-PI-boosting.patch
+features/all/rt/0016-time-Remove-bogus-comments.patch
+features/all/rt/0017-x86-vdso-Remove-bogus-locking-in-update_vsyscall_tz.patch
+features/all/rt/0018-x86-vdso-Use-seqcount-instead-of-seqlock.patch
+features/all/rt/0019-ia64-vsyscall-Use-seqcount-instead-of-seqlock.patch
+features/all/rt/0020-seqlock-Remove-unused-functions.patch
+features/all/rt/0021-seqlock-Use-seqcount.patch
+features/all/rt/0022-vfs-fs_struct-Move-code-out-of-seqcount-write-sectio.patch
+features/all/rt/0023-timekeeping-Split-xtime_lock.patch
+features/all/rt/0024-intel_idle-Convert-i7300_idle_lock-to-raw-spinlock.patch
+features/all/rt/0025-mm-memcg-shorten-preempt-disabled-section-around-eve.patch
+features/all/rt/0026-tracing-Account-for-preempt-off-in-preempt_schedule.patch
+features/all/rt/0027-signal-revert-ptrace-preempt-magic.patch.patch
+features/all/rt/0028-arm-Mark-pmu-interupt-IRQF_NO_THREAD.patch
+features/all/rt/0029-arm-Allow-forced-irq-threading.patch
+features/all/rt/0030-preempt-rt-Convert-arm-boot_lock-to-raw.patch
+features/all/rt/0031-sched-Create-schedule_preempt_disabled.patch
+features/all/rt/0032-sched-Use-schedule_preempt_disabled.patch
+features/all/rt/0033-signals-Do-not-wakeup-self.patch
+features/all/rt/0034-posix-timers-Prevent-broadcast-signals.patch
+features/all/rt/0035-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch
+features/all/rt/0036-signal-x86-Delay-calling-signals-in-atomic.patch
+features/all/rt/0037-generic-Use-raw-local-irq-variant-for-generic-cmpxch.patch
+features/all/rt/0038-drivers-random-Reduce-preempt-disabled-region.patch
+features/all/rt/0039-ARM-AT91-PIT-Remove-irq-handler-when-clock-event-is-.patch
+features/all/rt/0040-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch
+features/all/rt/0041-drivers-net-tulip_remove_one-needs-to-call-pci_disab.patch
+features/all/rt/0042-drivers-net-Use-disable_irq_nosync-in-8139too.patch
+features/all/rt/0043-drivers-net-ehea-Make-rx-irq-handler-non-threaded-IR.patch
+features/all/rt/0044-drivers-net-at91_ether-Make-mdio-protection-rt-safe.patch
+features/all/rt/0045-preempt-mark-legitimated-no-resched-sites.patch.patch
+features/all/rt/0046-mm-Prepare-decoupling-the-page-fault-disabling-logic.patch
+features/all/rt/0047-mm-Fixup-all-fault-handlers-to-check-current-pagefau.patch
+features/all/rt/0048-mm-pagefault_disabled.patch
+features/all/rt/0049-mm-raw_pagefault_disable.patch
+features/all/rt/0050-filemap-fix-up.patch.patch
+features/all/rt/0051-mm-Remove-preempt-count-from-pagefault-disable-enabl.patch
+features/all/rt/0052-x86-highmem-Replace-BUG_ON-by-WARN_ON.patch
+features/all/rt/0053-suspend-Prevent-might-sleep-splats.patch
+features/all/rt/0054-OF-Fixup-resursive-locking-code-paths.patch
+features/all/rt/0055-of-convert-devtree-lock.patch.patch
+features/all/rt/0056-list-add-list-last-entry.patch.patch
+features/all/rt/0057-mm-page-alloc-use-list-last-entry.patch.patch
+features/all/rt/0058-mm-slab-move-debug-out.patch.patch
+features/all/rt/0059-rwsem-inlcude-fix.patch.patch
+features/all/rt/0060-sysctl-include-fix.patch.patch
+features/all/rt/0061-net-flip-lock-dep-thingy.patch.patch
+features/all/rt/0062-softirq-thread-do-softirq.patch.patch
+features/all/rt/0063-softirq-split-out-code.patch.patch
+features/all/rt/0064-x86-Do-not-unmask-io_apic-when-interrupt-is-in-progr.patch
+features/all/rt/0065-x86-32-fix-signal-crap.patch.patch
+features/all/rt/0066-x86-Do-not-disable-preemption-in-int3-on-32bit.patch
+features/all/rt/0067-rcu-Reduce-lock-section.patch
+features/all/rt/0068-locking-various-init-fixes.patch.patch
+features/all/rt/0069-wait-Provide-__wake_up_all_locked.patch
+features/all/rt/0070-pci-Use-__wake_up_all_locked-pci_unblock_user_cfg_ac.patch
+features/all/rt/0071-latency-hist.patch.patch
+features/all/rt/0072-hwlatdetect.patch.patch
+features/all/rt/0074-early-printk-consolidate.patch.patch
+features/all/rt/0075-printk-kill.patch.patch
+features/all/rt/0076-printk-force_early_printk-boot-param-to-help-with-de.patch
+features/all/rt/0077-rt-preempt-base-config.patch.patch
+features/all/rt/0078-bug-BUG_ON-WARN_ON-variants-dependend-on-RT-RT.patch
+features/all/rt/0079-rt-local_irq_-variants-depending-on-RT-RT.patch
+features/all/rt/0080-preempt-Provide-preempt_-_-no-rt-variants.patch
+features/all/rt/0081-ata-Do-not-disable-interrupts-in-ide-code-for-preemp.patch
+features/all/rt/0082-ide-Do-not-disable-interrupts-for-PREEMPT-RT.patch
+features/all/rt/0083-infiniband-Mellanox-IB-driver-patch-use-_nort-primit.patch
+features/all/rt/0084-input-gameport-Do-not-disable-interrupts-on-PREEMPT_.patch
+features/all/rt/0085-acpi-Do-not-disable-interrupts-on-PREEMPT_RT.patch
+features/all/rt/0086-core-Do-not-disable-interrupts-on-RT-in-kernel-users.patch
+features/all/rt/0087-core-Do-not-disable-interrupts-on-RT-in-res_counter..patch
+features/all/rt/0088-usb-Use-local_irq_-_nort-variants.patch
+features/all/rt/0089-tty-Do-not-disable-interrupts-in-put_ldisc-on-rt.patch
+features/all/rt/0090-mm-scatterlist-dont-disable-irqs-on-RT.patch
+features/all/rt/0091-signal-fix-up-rcu-wreckage.patch.patch
+features/all/rt/0092-net-wireless-warn-nort.patch.patch
+features/all/rt/0093-mm-Replace-cgroup_page-bit-spinlock.patch
+features/all/rt/0094-buffer_head-Replace-bh_uptodate_lock-for-rt.patch
+features/all/rt/0095-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch
+features/all/rt/0096-genirq-Disable-DEBUG_SHIRQ-for-rt.patch
+features/all/rt/0097-genirq-Disable-random-call-on-preempt-rt.patch
+features/all/rt/0098-genirq-disable-irqpoll-on-rt.patch
+features/all/rt/0099-genirq-force-threading.patch.patch
+features/all/rt/0100-drivers-net-fix-livelock-issues.patch
+features/all/rt/0101-drivers-net-vortex-fix-locking-issues.patch
+features/all/rt/0102-drivers-net-gianfar-Make-RT-aware.patch
+features/all/rt/0103-USB-Fix-the-mouse-problem-when-copying-large-amounts.patch
+features/all/rt/0104-local-var.patch.patch
+features/all/rt/0105-rt-local-irq-lock.patch.patch
+features/all/rt/0106-cpu-rt-variants.patch.patch
+features/all/rt/0107-mm-slab-wrap-functions.patch.patch
+features/all/rt/0108-slab-Fix-__do_drain-to-use-the-right-array-cache.patch
+features/all/rt/0109-mm-More-lock-breaks-in-slab.c.patch
+features/all/rt/0110-mm-page_alloc-rt-friendly-per-cpu-pages.patch
+features/all/rt/0111-mm-page_alloc-reduce-lock-sections-further.patch
+features/all/rt/0112-mm-page-alloc-fix.patch.patch
+features/all/rt/0113-mm-convert-swap-to-percpu-locked.patch
+features/all/rt/0114-mm-vmstat-fix-the-irq-lock-asymetry.patch.patch
+features/all/rt/0115-mm-make-vmstat-rt-aware.patch
+features/all/rt/0116-mm-shrink-the-page-frame-to-rt-size.patch
+features/all/rt/0117-ARM-Initialize-ptl-lock-for-vector-page.patch
+features/all/rt/0118-mm-Allow-only-slab-on-RT.patch
+features/all/rt/0119-radix-tree-rt-aware.patch.patch
+features/all/rt/0120-panic-disable-random-on-rt.patch
+features/all/rt/0121-ipc-Make-the-ipc-code-rt-aware.patch
+features/all/rt/0122-ipc-mqueue-Add-a-critical-section-to-avoid-a-deadloc.patch
+features/all/rt/0123-relay-fix-timer-madness.patch
+features/all/rt/0124-net-ipv4-route-use-locks-on-up-rt.patch.patch
+features/all/rt/0125-workqueue-avoid-the-lock-in-cpu-dying.patch.patch
+features/all/rt/0126-timers-prepare-for-full-preemption.patch
+features/all/rt/0127-timers-preempt-rt-support.patch
+features/all/rt/0128-timers-fix-timer-hotplug-on-rt.patch
+features/all/rt/0129-timers-mov-printk_tick-to-soft-interrupt.patch
+features/all/rt/0130-timer-delay-waking-softirqs-from-the-jiffy-tick.patch
+features/all/rt/0131-timers-Avoid-the-switch-timers-base-set-to-NULL-tric.patch
+features/all/rt/0132-printk-Don-t-call-printk_tick-in-printk_needs_cpu-on.patch
+features/all/rt/0133-hrtimers-prepare-full-preemption.patch
+features/all/rt/0134-hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
+features/all/rt/0135-hrtimer-Don-t-call-the-timer-handler-from-hrtimer_st.patch
+features/all/rt/0136-hrtimer-Add-missing-debug_activate-aid-Was-Re-ANNOUN.patch
+features/all/rt/0137-hrtimer-fix-reprogram-madness.patch.patch
+features/all/rt/0138-timer-fd-Prevent-live-lock.patch
+features/all/rt/0139-posix-timers-thread-posix-cpu-timers-on-rt.patch
+features/all/rt/0140-posix-timers-Shorten-posix_cpu_timers-CPU-kernel-thr.patch
+features/all/rt/0141-posix-timers-Avoid-wakeups-when-no-timers-are-active.patch
+features/all/rt/0142-sched-delay-put-task.patch.patch
+features/all/rt/0143-sched-limit-nr-migrate.patch.patch
+features/all/rt/0144-sched-mmdrop-delayed.patch.patch
+features/all/rt/0145-sched-rt-mutex-wakeup.patch.patch
+features/all/rt/0146-sched-prevent-idle-boost.patch.patch
+features/all/rt/0147-sched-might-sleep-do-not-account-rcu-depth.patch.patch
+features/all/rt/0148-sched-Break-out-from-load_balancing-on-rq_lock-conte.patch
+features/all/rt/0149-sched-cond-resched.patch.patch
+features/all/rt/0150-cond-resched-softirq-fix.patch.patch
+features/all/rt/0151-sched-no-work-when-pi-blocked.patch.patch
+features/all/rt/0152-cond-resched-lock-rt-tweak.patch.patch
+features/all/rt/0153-sched-disable-ttwu-queue.patch.patch
+features/all/rt/0154-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch
+features/all/rt/0155-sched-ttwu-Return-success-when-only-changing-the-sav.patch
+features/all/rt/0156-stop_machine-convert-stop_machine_run-to-PREEMPT_RT.patch
+features/all/rt/0157-stomp-machine-mark-stomper-thread.patch.patch
+features/all/rt/0158-stomp-machine-raw-lock.patch.patch
+features/all/rt/0159-hotplug-Lightweight-get-online-cpus.patch
+features/all/rt/0160-hotplug-sync_unplug-No.patch
+features/all/rt/0161-hotplug-Reread-hotplug_pcp-on-pin_current_cpu-retry.patch
+features/all/rt/0162-sched-migrate-disable.patch.patch
+features/all/rt/0163-hotplug-use-migrate-disable.patch.patch
+features/all/rt/0164-hotplug-Call-cpu_unplug_begin-before-DOWN_PREPARE.patch
+features/all/rt/0165-ftrace-migrate-disable-tracing.patch.patch
+features/all/rt/0166-tracing-Show-padding-as-unsigned-short.patch
+features/all/rt/0167-migrate-disable-rt-variant.patch.patch
+features/all/rt/0168-sched-Optimize-migrate_disable.patch
+features/all/rt/0169-sched-Generic-migrate_disable.patch
+features/all/rt/0170-sched-rt-Fix-migrate_enable-thinko.patch
+features/all/rt/0171-sched-teach-migrate_disable-about-atomic-contexts.patch
+features/all/rt/0172-sched-Postpone-actual-migration-disalbe-to-schedule.patch
+features/all/rt/0173-sched-Do-not-compare-cpu-masks-in-scheduler.patch
+features/all/rt/0174-sched-Have-migrate_disable-ignore-bounded-threads.patch
+features/all/rt/0175-sched-clear-pf-thread-bound-on-fallback-rq.patch.patch
+features/all/rt/0176-ftrace-crap.patch.patch
+features/all/rt/0177-ring-buffer-Convert-reader_lock-from-raw_spin_lock-i.patch
+features/all/rt/0178-net-netif_rx_ni-migrate-disable.patch.patch
+features/all/rt/0179-softirq-Sanitize-softirq-pending-for-NOHZ-RT.patch
+features/all/rt/0180-lockdep-rt.patch.patch
+features/all/rt/0181-mutex-no-spin-on-rt.patch.patch
+features/all/rt/0182-softirq-local-lock.patch.patch
+features/all/rt/0183-softirq-Export-in_serving_softirq.patch
+features/all/rt/0184-hardirq.h-Define-softirq_count-as-OUL-to-kill-build-.patch
+features/all/rt/0185-softirq-Fix-unplug-deadlock.patch
+features/all/rt/0186-softirq-disable-softirq-stacks-for-rt.patch.patch
+features/all/rt/0187-softirq-make-fifo.patch.patch
+features/all/rt/0188-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch
+features/all/rt/0189-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch
+features/all/rt/0190-local-vars-migrate-disable.patch.patch
+features/all/rt/0191-md-raid5-Make-raid5_percpu-handling-RT-aware.patch
+features/all/rt/0192-rtmutex-lock-killable.patch.patch
+features/all/rt/0193-rtmutex-futex-prepare-rt.patch.patch
+features/all/rt/0194-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
+features/all/rt/0195-rt-mutex-add-sleeping-spinlocks-support.patch.patch
+features/all/rt/0196-spinlock-types-separate-raw.patch.patch
+features/all/rt/0197-rtmutex-avoid-include-hell.patch.patch
+features/all/rt/0198-rt-add-rt-spinlocks.patch.patch
+features/all/rt/0199-rt-add-rt-to-mutex-headers.patch.patch
+features/all/rt/0200-rwsem-add-rt-variant.patch.patch
+features/all/rt/0201-rt-Add-the-preempt-rt-lock-replacement-APIs.patch
+features/all/rt/0202-rwlocks-Fix-section-mismatch.patch
+features/all/rt/0203-timer-handle-idle-trylock-in-get-next-timer-irq.patc.patch
+features/all/rt/0204-RCU-Force-PREEMPT_RCU-for-PREEMPT-RT.patch
+features/all/rt/0205-rcu-Frob-softirq-test.patch
+features/all/rt/0206-rcu-Merge-RCU-bh-into-RCU-preempt.patch
+features/all/rt/0207-rcu-Fix-macro-substitution-for-synchronize_rcu_bh-on.patch
+features/all/rt/0208-rcu-more-fallout.patch.patch
+features/all/rt/0209-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch
+features/all/rt/0210-rt-rcutree-Move-misplaced-prototype.patch
+features/all/rt/0211-lglocks-rt.patch.patch
+features/all/rt/0212-serial-8250-Clean-up-the-locking-for-rt.patch
+features/all/rt/0213-serial-8250-Call-flush_to_ldisc-when-the-irq-is-thre.patch
+features/all/rt/0214-drivers-tty-fix-omap-lock-crap.patch.patch
+features/all/rt/0215-rt-Improve-the-serial-console-PASS_LIMIT.patch
+features/all/rt/0216-fs-namespace-preemption-fix.patch
+features/all/rt/0217-mm-protect-activate-switch-mm.patch.patch
+features/all/rt/0218-fs-block-rt-support.patch.patch
+features/all/rt/0219-fs-ntfs-disable-interrupt-only-on-RT.patch
+features/all/rt/0220-x86-Convert-mce-timer-to-hrtimer.patch
+features/all/rt/0221-x86-stackprotector-Avoid-random-pool-on-rt.patch
+features/all/rt/0222-x86-Use-generic-rwsem_spinlocks-on-rt.patch
+features/all/rt/0223-x86-Disable-IST-stacks-for-debug-int-3-stack-fault-f.patch
+features/all/rt/0224-workqueue-use-get-cpu-light.patch.patch
+features/all/rt/0225-epoll.patch.patch
+features/all/rt/0226-mm-vmalloc.patch.patch
+features/all/rt/0227-workqueue-Fix-cpuhotplug-trainwreck.patch
+features/all/rt/0228-workqueue-Fix-PF_THREAD_BOUND-abuse.patch
+features/all/rt/0229-workqueue-Use-get_cpu_light-in-flush_gcwq.patch
+features/all/rt/0230-hotplug-stuff.patch.patch
+features/all/rt/0231-debugobjects-rt.patch.patch
+features/all/rt/0232-jump-label-rt.patch.patch
+features/all/rt/0233-skbufhead-raw-lock.patch.patch
+features/all/rt/0234-x86-no-perf-irq-work-rt.patch.patch
+features/all/rt/0235-console-make-rt-friendly.patch.patch
+features/all/rt/0236-printk-Disable-migration-instead-of-preemption.patch
+features/all/rt/0237-power-use-generic-rwsem-on-rt.patch
+features/all/rt/0238-power-disable-highmem-on-rt.patch.patch
+features/all/rt/0239-arm-disable-highmem-on-rt.patch.patch
+features/all/rt/0240-ARM-at91-tclib-Default-to-tclib-timer-for-RT.patch
+features/all/rt/0241-mips-disable-highmem-on-rt.patch.patch
+features/all/rt/0242-net-Avoid-livelock-in-net_tx_action-on-RT.patch
+features/all/rt/0243-ping-sysrq.patch.patch
+features/all/rt/0244-kgdb-serial-Short-term-workaround.patch
+features/all/rt/0245-add-sys-kernel-realtime-entry.patch
+features/all/rt/0246-mm-rt-kmap_atomic-scheduling.patch
+features/all/rt/0247-ipc-sem-Rework-semaphore-wakeups.patch
+features/all/rt/0248-sysrq-Allow-immediate-Magic-SysRq-output-for-PREEMPT.patch
+features/all/rt/0249-x86-kvm-require-const-tsc-for-rt.patch.patch
+features/all/rt/0250-scsi-fcoe-rt-aware.patch.patch
+features/all/rt/0251-x86-crypto-Reduce-preempt-disabled-regions.patch
+features/all/rt/0252-dm-Make-rt-aware.patch
+features/all/rt/0253-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch
+features/all/rt/0254-seqlock-Prevent-rt-starvation.patch
+features/all/rt/0255-timer-Fix-hotplug-for-rt.patch
+features/all/rt/0256-futex-rt-Fix-possible-lockup-when-taking-pi_lock-in-.patch
+features/all/rt/0257-ring-buffer-rt-Check-for-irqs-disabled-before-grabbi.patch
+features/all/rt/0258-sched-rt-Fix-wait_task_interactive-to-test-rt_spin_l.patch
+features/all/rt/0259-lglock-rt-Use-non-rt-for_each_cpu-in-rt-code.patch
+features/all/rt/0260-cpu-Make-hotplug.lock-a-sleeping-spinlock-on-RT.patch
+features/all/rt/0261-softirq-Check-preemption-after-reenabling-interrupts.patch
+features/all/rt/0262-rt-Introduce-cpu_chill.patch
+features/all/rt/0263-fs-dcache-Use-cpu_chill-in-trylock-loops.patch
+features/all/rt/0264-net-Use-cpu_chill-instead-of-cpu_relax.patch
+features/all/rt/0265-kconfig-disable-a-few-options-rt.patch.patch
+features/all/rt/0266-kconfig-preempt-rt-full.patch.patch
+features/all/rt/0267-rt-Make-migrate_disable-enable-and-__rt_mutex_init-n.patch
+features/all/rt/0268-scsi-qla2xxx-Use-local_irq_save_nort-in-qla2x00_poll.patch
+features/all/rt/0269-net-RT-REmove-preemption-disabling-in-netif_rx.patch
+features/all/rt/0270-mips-remove-smp-reserve-lock.patch.patch

Modified: dists/squeeze-backports/linux/debian/rules
==============================================================================
--- dists/squeeze-backports/linux/debian/rules	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/rules	Fri Aug 17 02:04:57 2012	(r19325)
@@ -32,10 +32,18 @@
 	$(MAKE) -f debian/rules.gen setup_$(DEB_HOST_ARCH)
 	@$(stamp)
 
-build: debian/control $(STAMPS_DIR)/build-base
-$(STAMPS_DIR)/build-base: $(STAMPS_DIR)/setup-base
+build: build-arch build-indep
+
+build-arch: debian/control $(STAMPS_DIR)/build-arch-base
+$(STAMPS_DIR)/build-arch-base: $(STAMPS_DIR)/setup-base
+	dh_testdir
+	$(MAKE) -f debian/rules.gen build-arch_$(DEB_HOST_ARCH)
+	@$(stamp)
+
+build-indep: debian/control $(STAMPS_DIR)/build-indep-base
+$(STAMPS_DIR)/build-indep-base: $(STAMPS_DIR)/setup-base
 	dh_testdir
-	$(MAKE) -f debian/rules.gen build_$(DEB_HOST_ARCH)
+	$(MAKE) -f debian/rules.gen build-indep
 	@$(stamp)
 
 DIR_ORIG = ../orig/$(SOURCE)-$(VERSION_UPSTREAM)
@@ -63,11 +71,11 @@
 	rm -rf $(BUILD_DIR) $(STAMPS_DIR) debian/lib/python/debian_linux/*.pyc debian/linux-headers-* debian/linux-image-* debian/linux-support-* debian/linux-source-* debian/linux-doc-* debian/linux-manual-* debian/xen-linux-system-* debian/*-modules-*-di*
 	dh_clean
 
-binary-indep: $(STAMPS_DIR)/source-base
+binary-indep: $(STAMPS_DIR)/build-indep-base
 	dh_testdir
 	$(MAKE) -f debian/rules.gen binary-indep
 
-binary-arch: $(STAMPS_DIR)/build-base
+binary-arch: $(STAMPS_DIR)/build-arch-base
 	dh_testdir
 	$(MAKE) -f debian/rules.gen binary-arch_$(DEB_HOST_ARCH)
 

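The debian/rules hunks above replace the single `build` target with the `build-arch`/`build-indep` pair that Debian Policy defines, so that architecture-dependent compilation and architecture-independent work (here, the documentation build) can be requested separately. The pattern can be sketched as a standalone Makefile; the stamp directory and recipe bodies below are hypothetical placeholders, not the packaging's real rules:

```make
# Sketch of the build-arch/build-indep split (hypothetical stamp
# names; recipe lines must be indented with a literal tab).
STAMPS_DIR := stamps

build: build-arch build-indep

build-arch: $(STAMPS_DIR)/build-arch-base
$(STAMPS_DIR)/build-arch-base:
	mkdir -p $(STAMPS_DIR)
	# architecture-dependent build steps would run here
	touch $@

build-indep: $(STAMPS_DIR)/build-indep-base
$(STAMPS_DIR)/build-indep-base:
	mkdir -p $(STAMPS_DIR)
	# architecture-independent steps (e.g. documentation) would run here
	touch $@

.PHONY: build build-arch build-indep
```

With this shape, a binary-arch-only build (such as one done on a build daemon) depends only on the `build-arch-base` stamp, matching the changed `binary-arch` dependency in the hunk above, so the documentation build can be skipped entirely.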
Modified: dists/squeeze-backports/linux/debian/rules.real
==============================================================================
--- dists/squeeze-backports/linux/debian/rules.real	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/rules.real	Fri Aug 17 02:04:57 2012	(r19325)
@@ -21,6 +21,13 @@
   export KW_CHECK_NONFATAL = y
 endif
 
+# Set Multi-Arch fields only when built in a suite that supports it
+ifneq (,$(DEB_HOST_MULTIARCH))
+DEFINE_MULTIARCH = -Vlinux:Multi-Arch=$(1)
+else
+DEFINE_MULTIARCH = -Vlinux:Multi-Arch=
+endif
+
 include debian/rules.defs
 
 stamp = [ -d $(dir $@) ] || mkdir $(dir $@); touch $@
@@ -51,7 +58,8 @@
 binary-indep: install-source
 binary-indep: install-support
 
-build: $(STAMPS_DIR)/build_$(ARCH)_$(FEATURESET)_$(FLAVOUR)_$(TYPE)
+build-arch: $(STAMPS_DIR)/build_$(ARCH)_$(FEATURESET)_$(FLAVOUR)_$(TYPE)
+build-indep: $(STAMPS_DIR)/build-doc
 
 setup-flavour: $(STAMPS_DIR)/setup_$(ARCH)_$(FEATURESET)_$(FLAVOUR)
 
@@ -190,7 +198,7 @@
 	| \
 	cpio -pd --preserve-modification-time '$(CURDIR)/$(OUT_DIR)/html'
 	gzip -9qfr $(OUT_DIR)/Documentation
-	+$(MAKE_SELF) install-base
+	+$(MAKE_SELF) install-base GENCONTROL_ARGS='$(call DEFINE_MULTIARCH,foreign)'
 
 install-manual: PACKAGE_NAME = linux-manual-$(VERSION)
 install-manual: DIR=$(BUILD_DIR)/build-doc
@@ -198,7 +206,7 @@
 install-manual: $(STAMPS_DIR)/build-doc
 	dh_prep
 	find $(DIR)/Documentation/DocBook/man/ -name '*.9' | xargs dh_installman
-	+$(MAKE_SELF) install-base
+	+$(MAKE_SELF) install-base GENCONTROL_ARGS='$(call DEFINE_MULTIARCH,foreign)'
 
 install-headers_$(ARCH): PACKAGE_NAMES = linux-headers-$(ABINAME)-all linux-headers-$(ABINAME)-all-$(ARCH)
 install-headers_$(ARCH): DH_OPTIONS = $(foreach p, $(PACKAGE_NAMES), -p$(p))
@@ -310,12 +318,9 @@
 	# Move include/asm to arch-specific directory
 	mkdir -p $(OUT_DIR)/include/$(DEB_HOST_MULTIARCH)
 	mv $(OUT_DIR)/include/asm $(OUT_DIR)/include/$(DEB_HOST_MULTIARCH)/
-	echo linux-libc-dev:Multi-Arch=same >>debian/$(PACKAGE_NAME).substvars
-else
-	echo linux-libc-dev:Multi-Arch= >>debian/$(PACKAGE_NAME).substvars
 endif
 	
-	+$(MAKE_SELF) install-base
+	+$(MAKE_SELF) install-base GENCONTROL_ARGS='$(call DEFINE_MULTIARCH,same)'
 
 install-support: PACKAGE_NAME = linux-support-$(ABINAME)
 install-support: DH_OPTIONS = -p$(PACKAGE_NAME)
@@ -330,7 +335,7 @@
 	cp debian/lib/python/debian_linux/*.py $(PACKAGE_DIR)$(PACKAGE_ROOT)/lib/python/debian_linux
 	dh_python2
 	dh_link $(PACKAGE_ROOT) /usr/src/$(PACKAGE_NAME)
-	+$(MAKE_SELF) install-base
+	+$(MAKE_SELF) install-base GENCONTROL_ARGS='$(call DEFINE_MULTIARCH,foreign)'
 
 install-image_$(ARCH)_$(FEATURESET)_$(FLAVOUR)_$(TYPE): REAL_VERSION = $(ABINAME)$(LOCALVERSION)
 install-image_$(ARCH)_$(FEATURESET)_$(FLAVOUR)_$(TYPE): PACKAGE_NAME = linux-image-$(REAL_VERSION)
@@ -370,13 +375,21 @@
 	  PACKAGE_DIR='$(PACKAGE_DIR)' PACKAGE_NAME='$(PACKAGE_NAME)' REAL_VERSION='$(REAL_VERSION)'
 	+$(MAKE_SELF) install-base
 
-install-image_armel_$(FEATURESET)_$(FLAVOUR)_plain_image \
-install-image_armhf_$(FEATURESET)_$(FLAVOUR)_plain_image \
 install-image_sparc_$(FEATURESET)_$(FLAVOUR)_plain_image \
 install-image_sparc64_$(FEATURESET)_$(FLAVOUR)_plain_image \
 install-image_sh4_$(FEATURESET)_$(FLAVOUR)_plain_image:
 	install -m644 '$(DIR)/arch/$(KERNEL_ARCH)/boot/zImage' $(INSTALL_DIR)/vmlinuz-$(REAL_VERSION)
 
+ifneq ($(filter armel armhf,$(ARCH)),)
+install-image_$(ARCH)_$(FEATURESET)_$(FLAVOUR)_plain_image: DTB_INSTALL_DIR = /usr/lib/linux-image-$(REAL_VERSION)
+install-image_$(ARCH)_$(FEATURESET)_$(FLAVOUR)_plain_image:
+	install -m644 '$(DIR)/arch/$(KERNEL_ARCH)/boot/zImage' $(INSTALL_DIR)/vmlinuz-$(REAL_VERSION)
+	+$(MAKE_CLEAN) -C $(DIR) dtbs
+	shopt -s nullglob ; for i in $(DIR)/arch/arm/boot/*.dtb ; do \
+		install -D -m644 $$i '$(PACKAGE_DIR)'/'$(DTB_INSTALL_DIR)'/$$(basename $$i) ; \
+	done
+endif
+
 install-image_amd64_$(FEATURESET)_$(FLAVOUR)_plain_image \
 install-image_i386_$(FEATURESET)_$(FLAVOUR)_plain_image:
 	install -m644 '$(DIR)/arch/$(KERNEL_ARCH)/boot/bzImage' $(INSTALL_DIR)/vmlinuz-$(REAL_VERSION)
@@ -492,6 +505,6 @@
 	dh_testdir
 	dh_testroot
 	dh_install '$^' /usr/src
-	+$(MAKE_SELF) install-base
+	+$(MAKE_SELF) install-base GENCONTROL_ARGS='$(call DEFINE_MULTIARCH,foreign)'
 
 # vim: filetype=make

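One detail worth noting in the armel/armhf hunk above: the dtb install loop sets bash's `nullglob` option so that when no `*.dtb` files were produced, the glob expands to nothing and the loop body never runs; with `nullglob` unset, the unmatched pattern would be passed through as the literal string `*.dtb`. A small standalone illustration (assuming bash; the temporary directory is just scratch space):

```shell
#!/bin/bash
# Show why the dtb install loop uses `shopt -s nullglob`:
# with nullglob set, an unmatched glob expands to zero words
# instead of to the literal pattern string.
tmp=$(mktemp -d)
cd "$tmp" || exit 1

shopt -s nullglob
count=0
for f in *.dtb; do count=$((count + 1)); done
echo "with nullglob: $count iterations"     # loop body skipped entirely

shopt -u nullglob
count=0
for f in *.dtb; do count=$((count + 1)); done
echo "without nullglob: $count iterations"  # one pass over the literal '*.dtb'
```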
Modified: dists/squeeze-backports/linux/debian/templates/control.image.type-plain.in
==============================================================================
--- dists/squeeze-backports/linux/debian/templates/control.image.type-plain.in	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/templates/control.image.type-plain.in	Fri Aug 17 02:04:57 2012	(r19325)
@@ -3,7 +3,7 @@
 Pre-Depends: debconf | debconf-2.0
 Depends: kmod | module-init-tools, linux-base (>= 3~), ${misc:Depends}
 Recommends: firmware-linux-free (>= 3~)
-Suggests: linux-doc-@version@
+Suggests: linux-doc-@version@, debian-kernel-handbook
 Breaks: at (<< 3.1.12-1+squeeze1)
 Description: Linux @upstreamversion@ for @class@
 The Linux kernel @upstreamversion@ and modules for use on @longclass@.

Modified: dists/squeeze-backports/linux/debian/templates/control.libc-dev.in
==============================================================================
--- dists/squeeze-backports/linux/debian/templates/control.libc-dev.in	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/templates/control.libc-dev.in	Fri Aug 17 02:04:57 2012	(r19325)
@@ -4,7 +4,7 @@
 Provides: linux-kernel-headers
 Replaces: linux-kernel-headers
 Conflicts: linux-kernel-headers
-Multi-Arch: ${linux-libc-dev:Multi-Arch}
+Multi-Arch: ${linux:Multi-Arch}
 Description: Linux support headers for userspace development
  This package provides userspaces headers from the Linux kernel.  These headers
  are used by the installed headers for GNU glibc and other system libraries.

Modified: dists/squeeze-backports/linux/debian/templates/control.main.in
==============================================================================
--- dists/squeeze-backports/linux/debian/templates/control.main.in	Tue Aug 14 05:46:25 2012	(r19324)
+++ dists/squeeze-backports/linux/debian/templates/control.main.in	Fri Aug 17 02:04:57 2012	(r19325)
@@ -5,6 +5,7 @@
 Depends: binutils, bzip2, ${misc:Depends}
 Recommends: libc6-dev | libc-dev, gcc, make
 Suggests: libncurses-dev | ncurses-dev, libqt4-dev
+Multi-Arch: ${linux:Multi-Arch}
 Description: Linux kernel source for version @version@ with Debian patches
 This package provides source code for the Linux kernel version @version@.
  This source closely tracks official Linux kernel releases.  Debian's
@@ -16,6 +17,7 @@
 Architecture: all
 Depends: ${misc:Depends}
 Section: doc
+Multi-Arch: ${linux:Multi-Arch}
 Description: Linux kernel specific documentation for version @version@
  This package provides the various README files and HTML documentation for
 the Linux kernel version @version@.  Plenty of information, including the
@@ -31,6 +33,7 @@
 Provides: linux-manual
 Conflicts: linux-manual
 Replaces: linux-manual
+Multi-Arch: ${linux:Multi-Arch}
 Description: Linux kernel API manual pages for version @version@
  This package provides the Kernel Hacker's Guide in the form of
  manual pages, describing the kernel API functions.  They
@@ -45,6 +48,7 @@
 Architecture: all
 Section: devel
 Depends: ${python:Depends}, ${misc:Depends}
+Multi-Arch: ${linux:Multi-Arch}
 Description: Support files for Linux @upstreamversion@
  This package provides support files for the Linux kernel build,
  e.g. scripts to handle ABI information and for generation of
