[kernel] r18146 - in dists/sid/linux-2.6/debian: . patches/bugfix/all patches/bugfix/all/stable patches/series
Ben Hutchings
benh at alioth.debian.org
Tue Oct 4 05:41:54 UTC 2011
Author: benh
Date: Tue Oct 4 05:41:51 2011
New Revision: 18146
Log:
Add stable 3.0.5 and 3.0.6
Added:
dists/sid/linux-2.6/debian/patches/bugfix/all/stable/3.0.5.patch
dists/sid/linux-2.6/debian/patches/bugfix/all/stable/3.0.6.patch
Deleted:
dists/sid/linux-2.6/debian/patches/bugfix/all/block-Free-queue-resources-at-blk_release_queue.patch
dists/sid/linux-2.6/debian/patches/bugfix/all/fm801-Fix-double-free-in-case-of-error-in-tuner-dete.patch
dists/sid/linux-2.6/debian/patches/bugfix/all/fm801-Gracefully-handle-failure-of-tuner-auto-detect.patch
Modified:
dists/sid/linux-2.6/debian/changelog
dists/sid/linux-2.6/debian/patches/series/5
Modified: dists/sid/linux-2.6/debian/changelog
==============================================================================
--- dists/sid/linux-2.6/debian/changelog Sun Oct 2 22:54:51 2011 (r18145)
+++ dists/sid/linux-2.6/debian/changelog Tue Oct 4 05:41:51 2011 (r18146)
@@ -2,12 +2,59 @@
[ Ben Hutchings ]
* Ignore ABI change in rt2800lib (fixes FTBFS on several architectures)
- * fm801: Fix double free in case of error in tuner detection
- * fm801: Gracefully handle failure of tuner auto-detect (Closes: #641946)
- * block: Free queue resources at blk_release_queue() (Closes: #631187)
* kobj_uevent: Ignore if some listeners cannot handle message
(Closes: #641661)
* Build udebs for the installer
+ * Add stable 3.0.5 and 3.0.6, including:
+ - TTY: pty, fix pty counting
+ - pata_via: disable ATAPI DMA on AVERATEC 3200
+ - atm: br2684: Fix oops due to skb->dev being NULL
+ - alarmtimers: Avoid possible null pointer traversal
+ - alarmtimers: Memset itimerspec passed into alarm_timer_get
+ - alarmtimers: Avoid possible denial of service with high freq periodic
+ timers
+ - rtc: Fix RTC PIE frequency limit
+ - x86, perf: Check that current->mm is alive before getting user callchain
+ - xen/smp: Warn user why they keel over - nosmp or noapic and what to use
+ instead. (Closes: #637308)
+ - drm/nouveau: properly handle allocation failure in nouveau_sgdma_populate
+ - net/9p: fix client code to fail more gracefully on protocol error
+ - virtio: Fix the size of receive buffer packing onto VirtIO ring.
+ - virtio: VirtIO can transfer VIRTQUEUE_NUM of pages.
+ - fs/9p: Fid is not valid after a failed clunk.
+ - fs/9p: When doing inode lookup compare qid details and inode mode bits.
+ - fs/9p: Always ask new inode in create
+ - net/9p: Fix the msize calculation.
+ - 9p: close ACL leaks
+ - fs/9p: Add fid before dentry instantiation
+ - net/9p: Fix kernel crash with msize 512K
+ - fs/9p: Always ask new inode in lookup for cache mode disabled
+ - vfs: restore pinning the victim dentry in vfs_rmdir()/vfs_rename_dir()
+ - cifs: fix possible memory corruption in CIFSFindNext
+ - writeback: introduce .tagged_writepages for the WB_SYNC_NONE sync stage
+ - writeback: update dirtied_when for synced inode to prevent livelock
+ - fib:fix BUG_ON in fib_nl_newrule when add new fib rule
+ - scm: Capture the full credentials of the scm sender
+ - vlan: reset headers on accel emulation path
+ - xfrm: Perform a replay check after return from async codepaths
+ - bridge: Pseudo-header required for the checksum of ICMPv6
+ - bridge: fix a possible use after free
+ - TPM: Call tpm_transmit with correct size (CVE-2011-1161)
+ - TPM: Zero buffer after copying to userspace (CVE-2011-1162)
+ - ALSA: fm801: Gracefully handle failure of tuner auto-detect
+ (Closes: #641946)
+ - btrfs: fix d_off in the first dirent
+ - ARM: 7091/1: errata: D-cache line maintenance operation by MVA may not
+ succeed
+ - ARM: 7099/1: futex: preserve oldval in SMP __futex_atomic_op
+ - ALSA: usb-audio: Check for possible chip NULL pointer before clearing
+ probing flag
+ - cfg80211: Fix validation of AKM suites
+ - iwlagn: fix dangling scan request
+ - block: Free queue resources at blk_release_queue() (Closes: #631187)
+ For the complete list of changes, see:
+ http://www.kernel.org/pub/linux/kernel/v3.0/ChangeLog-3.0.5
+ http://www.kernel.org/pub/linux/kernel/v3.0/ChangeLog-3.0.6
-- Ben Hutchings <ben at decadent.org.uk> Tue, 20 Sep 2011 23:50:35 +0100
Added: dists/sid/linux-2.6/debian/patches/bugfix/all/stable/3.0.5.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/bugfix/all/stable/3.0.5.patch Tue Oct 4 05:41:51 2011 (r18146)
@@ -0,0 +1,11920 @@
+diff --git a/Makefile b/Makefile
+index 7d2192c..eeff5df 100644
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 9adc278..91c84cb 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -1298,6 +1298,20 @@ source "drivers/pci/Kconfig"
+
+ source "drivers/pcmcia/Kconfig"
+
++config ARM_ERRATA_764369
++ bool "ARM errata: Data cache line maintenance operation by MVA may not succeed"
++ depends on CPU_V7 && SMP
++ help
++ This option enables the workaround for erratum 764369
++ affecting Cortex-A9 MPCore with two or more processors (all
++ current revisions). Under certain timing circumstances, a data
++ cache line maintenance operation by MVA targeting an Inner
++ Shareable memory region may fail to proceed up to either the
++ Point of Coherency or to the Point of Unification of the
++ system. This workaround adds a DSB instruction before the
++ relevant cache maintenance functions and sets a specific bit
++ in the diagnostic control register of the SCU.
++
+ endmenu
+
+ menu "Kernel Features"
+diff --git a/arch/arm/include/asm/futex.h b/arch/arm/include/asm/futex.h
+index 8c73900..253cc86 100644
+--- a/arch/arm/include/asm/futex.h
++++ b/arch/arm/include/asm/futex.h
+@@ -25,17 +25,17 @@
+
+ #ifdef CONFIG_SMP
+
+-#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \
++#define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \
+ smp_mb(); \
+ __asm__ __volatile__( \
+- "1: ldrex %1, [%2]\n" \
++ "1: ldrex %1, [%3]\n" \
+ " " insn "\n" \
+- "2: strex %1, %0, [%2]\n" \
+- " teq %1, #0\n" \
++ "2: strex %2, %0, [%3]\n" \
++ " teq %2, #0\n" \
+ " bne 1b\n" \
+ " mov %0, #0\n" \
+- __futex_atomic_ex_table("%4") \
+- : "=&r" (ret), "=&r" (oldval) \
++ __futex_atomic_ex_table("%5") \
++ : "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \
+ : "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \
+ : "cc", "memory")
+
+@@ -73,14 +73,14 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ #include <linux/preempt.h>
+ #include <asm/domain.h>
+
+-#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \
++#define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \
+ __asm__ __volatile__( \
+- "1: " T(ldr) " %1, [%2]\n" \
++ "1: " T(ldr) " %1, [%3]\n" \
+ " " insn "\n" \
+- "2: " T(str) " %0, [%2]\n" \
++ "2: " T(str) " %0, [%3]\n" \
+ " mov %0, #0\n" \
+- __futex_atomic_ex_table("%4") \
+- : "=&r" (ret), "=&r" (oldval) \
++ __futex_atomic_ex_table("%5") \
++ : "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \
+ : "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \
+ : "cc", "memory")
+
+@@ -117,7 +117,7 @@ futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
+ int cmp = (encoded_op >> 24) & 15;
+ int oparg = (encoded_op << 8) >> 20;
+ int cmparg = (encoded_op << 20) >> 20;
+- int oldval = 0, ret;
++ int oldval = 0, ret, tmp;
+
+ if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
+ oparg = 1 << oparg;
+@@ -129,19 +129,19 @@ futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
+
+ switch (op) {
+ case FUTEX_OP_SET:
+- __futex_atomic_op("mov %0, %3", ret, oldval, uaddr, oparg);
++ __futex_atomic_op("mov %0, %4", ret, oldval, tmp, uaddr, oparg);
+ break;
+ case FUTEX_OP_ADD:
+- __futex_atomic_op("add %0, %1, %3", ret, oldval, uaddr, oparg);
++ __futex_atomic_op("add %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
+ break;
+ case FUTEX_OP_OR:
+- __futex_atomic_op("orr %0, %1, %3", ret, oldval, uaddr, oparg);
++ __futex_atomic_op("orr %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
+ break;
+ case FUTEX_OP_ANDN:
+- __futex_atomic_op("and %0, %1, %3", ret, oldval, uaddr, ~oparg);
++ __futex_atomic_op("and %0, %1, %4", ret, oldval, tmp, uaddr, ~oparg);
+ break;
+ case FUTEX_OP_XOR:
+- __futex_atomic_op("eor %0, %1, %3", ret, oldval, uaddr, oparg);
++ __futex_atomic_op("eor %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
+ break;
+ default:
+ ret = -ENOSYS;
+diff --git a/arch/arm/include/asm/hardware/cache-l2x0.h b/arch/arm/include/asm/hardware/cache-l2x0.h
+index 16bd480..bfa706f 100644
+--- a/arch/arm/include/asm/hardware/cache-l2x0.h
++++ b/arch/arm/include/asm/hardware/cache-l2x0.h
+@@ -64,7 +64,7 @@
+ #define L2X0_AUX_CTRL_MASK 0xc0000fff
+ #define L2X0_AUX_CTRL_ASSOCIATIVITY_SHIFT 16
+ #define L2X0_AUX_CTRL_WAY_SIZE_SHIFT 17
+-#define L2X0_AUX_CTRL_WAY_SIZE_MASK (0x3 << 17)
++#define L2X0_AUX_CTRL_WAY_SIZE_MASK (0x7 << 17)
+ #define L2X0_AUX_CTRL_SHARE_OVERRIDE_SHIFT 22
+ #define L2X0_AUX_CTRL_NS_LOCKDOWN_SHIFT 26
+ #define L2X0_AUX_CTRL_NS_INT_CTRL_SHIFT 27
+diff --git a/arch/arm/kernel/smp_scu.c b/arch/arm/kernel/smp_scu.c
+index a1e757c..cb7dd40 100644
+--- a/arch/arm/kernel/smp_scu.c
++++ b/arch/arm/kernel/smp_scu.c
+@@ -13,6 +13,7 @@
+
+ #include <asm/smp_scu.h>
+ #include <asm/cacheflush.h>
++#include <asm/cputype.h>
+
+ #define SCU_CTRL 0x00
+ #define SCU_CONFIG 0x04
+@@ -36,6 +37,15 @@ void __init scu_enable(void __iomem *scu_base)
+ {
+ u32 scu_ctrl;
+
++#ifdef CONFIG_ARM_ERRATA_764369
++ /* Cortex-A9 only */
++ if ((read_cpuid(CPUID_ID) & 0xff0ffff0) == 0x410fc090) {
++ scu_ctrl = __raw_readl(scu_base + 0x30);
++ if (!(scu_ctrl & 1))
++ __raw_writel(scu_ctrl | 0x1, scu_base + 0x30);
++ }
++#endif
++
+ scu_ctrl = __raw_readl(scu_base + SCU_CTRL);
+ /* already enabled? */
+ if (scu_ctrl & 1)
+diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c
+index a7b41bf..e83cc86 100644
+--- a/arch/arm/mach-davinci/board-da850-evm.c
++++ b/arch/arm/mach-davinci/board-da850-evm.c
+@@ -115,6 +115,32 @@ static struct spi_board_info da850evm_spi_info[] = {
+ },
+ };
+
++#ifdef CONFIG_MTD
++static void da850_evm_m25p80_notify_add(struct mtd_info *mtd)
++{
++ char *mac_addr = davinci_soc_info.emac_pdata->mac_addr;
++ size_t retlen;
++
++ if (!strcmp(mtd->name, "MAC-Address")) {
++ mtd->read(mtd, 0, ETH_ALEN, &retlen, mac_addr);
++ if (retlen == ETH_ALEN)
++ pr_info("Read MAC addr from SPI Flash: %pM\n",
++ mac_addr);
++ }
++}
++
++static struct mtd_notifier da850evm_spi_notifier = {
++ .add = da850_evm_m25p80_notify_add,
++};
++
++static void da850_evm_setup_mac_addr(void)
++{
++ register_mtd_user(&da850evm_spi_notifier);
++}
++#else
++static void da850_evm_setup_mac_addr(void) { }
++#endif
++
+ static struct mtd_partition da850_evm_norflash_partition[] = {
+ {
+ .name = "bootloaders + env",
+@@ -1237,6 +1263,8 @@ static __init void da850_evm_init(void)
+ if (ret)
+ pr_warning("da850_evm_init: spi 1 registration failed: %d\n",
+ ret);
++
++ da850_evm_setup_mac_addr();
+ }
+
+ #ifdef CONFIG_SERIAL_8250_CONSOLE
+diff --git a/arch/arm/mach-davinci/sleep.S b/arch/arm/mach-davinci/sleep.S
+index fb5e72b..5f1e045 100644
+--- a/arch/arm/mach-davinci/sleep.S
++++ b/arch/arm/mach-davinci/sleep.S
+@@ -217,7 +217,11 @@ ddr2clk_stop_done:
+ ENDPROC(davinci_ddr_psc_config)
+
+ CACHE_FLUSH:
+- .word arm926_flush_kern_cache_all
++#ifdef CONFIG_CPU_V6
++ .word v6_flush_kern_cache_all
++#else
++ .word arm926_flush_kern_cache_all
++#endif
+
+ ENTRY(davinci_cpu_suspend_sz)
+ .word . - davinci_cpu_suspend
+diff --git a/arch/arm/mach-dove/common.c b/arch/arm/mach-dove/common.c
+index 5ed51b8..cf7e598 100644
+--- a/arch/arm/mach-dove/common.c
++++ b/arch/arm/mach-dove/common.c
+@@ -160,7 +160,7 @@ void __init dove_spi0_init(void)
+
+ void __init dove_spi1_init(void)
+ {
+- orion_spi_init(DOVE_SPI1_PHYS_BASE, get_tclk());
++ orion_spi_1_init(DOVE_SPI1_PHYS_BASE, get_tclk());
+ }
+
+ /*****************************************************************************
+diff --git a/arch/arm/mach-integrator/integrator_ap.c b/arch/arm/mach-integrator/integrator_ap.c
+index 2fbbdd5..fcf0ae9 100644
+--- a/arch/arm/mach-integrator/integrator_ap.c
++++ b/arch/arm/mach-integrator/integrator_ap.c
+@@ -337,15 +337,15 @@ static unsigned long timer_reload;
+ static void integrator_clocksource_init(u32 khz)
+ {
+ void __iomem *base = (void __iomem *)TIMER2_VA_BASE;
+- u32 ctrl = TIMER_CTRL_ENABLE;
++ u32 ctrl = TIMER_CTRL_ENABLE | TIMER_CTRL_PERIODIC;
+
+ if (khz >= 1500) {
+ khz /= 16;
+- ctrl = TIMER_CTRL_DIV16;
++ ctrl |= TIMER_CTRL_DIV16;
+ }
+
+- writel(ctrl, base + TIMER_CTRL);
+ writel(0xffff, base + TIMER_LOAD);
++ writel(ctrl, base + TIMER_CTRL);
+
+ clocksource_mmio_init(base + TIMER_VALUE, "timer2",
+ khz * 1000, 200, 16, clocksource_mmio_readl_down);
+diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
+index d32f02b..3593119 100644
+--- a/arch/arm/mm/cache-v7.S
++++ b/arch/arm/mm/cache-v7.S
+@@ -174,6 +174,10 @@ ENTRY(v7_coherent_user_range)
+ dcache_line_size r2, r3
+ sub r3, r2, #1
+ bic r12, r0, r3
++#ifdef CONFIG_ARM_ERRATA_764369
++ ALT_SMP(W(dsb))
++ ALT_UP(W(nop))
++#endif
+ 1:
+ USER( mcr p15, 0, r12, c7, c11, 1 ) @ clean D line to the point of unification
+ add r12, r12, r2
+@@ -223,6 +227,10 @@ ENTRY(v7_flush_kern_dcache_area)
+ add r1, r0, r1
+ sub r3, r2, #1
+ bic r0, r0, r3
++#ifdef CONFIG_ARM_ERRATA_764369
++ ALT_SMP(W(dsb))
++ ALT_UP(W(nop))
++#endif
+ 1:
+ mcr p15, 0, r0, c7, c14, 1 @ clean & invalidate D line / unified line
+ add r0, r0, r2
+@@ -247,6 +255,10 @@ v7_dma_inv_range:
+ sub r3, r2, #1
+ tst r0, r3
+ bic r0, r0, r3
++#ifdef CONFIG_ARM_ERRATA_764369
++ ALT_SMP(W(dsb))
++ ALT_UP(W(nop))
++#endif
+ mcrne p15, 0, r0, c7, c14, 1 @ clean & invalidate D / U line
+
+ tst r1, r3
+@@ -270,6 +282,10 @@ v7_dma_clean_range:
+ dcache_line_size r2, r3
+ sub r3, r2, #1
+ bic r0, r0, r3
++#ifdef CONFIG_ARM_ERRATA_764369
++ ALT_SMP(W(dsb))
++ ALT_UP(W(nop))
++#endif
+ 1:
+ mcr p15, 0, r0, c7, c10, 1 @ clean D / U line
+ add r0, r0, r2
+@@ -288,6 +304,10 @@ ENTRY(v7_dma_flush_range)
+ dcache_line_size r2, r3
+ sub r3, r2, #1
+ bic r0, r0, r3
++#ifdef CONFIG_ARM_ERRATA_764369
++ ALT_SMP(W(dsb))
++ ALT_UP(W(nop))
++#endif
+ 1:
+ mcr p15, 0, r0, c7, c14, 1 @ clean & invalidate D / U line
+ add r0, r0, r2
+diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
+index 82a093c..f96d2c7 100644
+--- a/arch/arm/mm/dma-mapping.c
++++ b/arch/arm/mm/dma-mapping.c
+@@ -322,6 +322,8 @@ __dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp,
+
+ if (addr)
+ *handle = pfn_to_dma(dev, page_to_pfn(page));
++ else
++ __dma_free_buffer(page, size);
+
+ return addr;
+ }
+diff --git a/arch/arm/plat-mxc/include/mach/iomux-v3.h b/arch/arm/plat-mxc/include/mach/iomux-v3.h
+index 82620af..ebbce33 100644
+--- a/arch/arm/plat-mxc/include/mach/iomux-v3.h
++++ b/arch/arm/plat-mxc/include/mach/iomux-v3.h
+@@ -66,7 +66,6 @@ typedef u64 iomux_v3_cfg_t;
+ #define MUX_MODE_MASK ((iomux_v3_cfg_t)0x1f << MUX_MODE_SHIFT)
+ #define MUX_PAD_CTRL_SHIFT 41
+ #define MUX_PAD_CTRL_MASK ((iomux_v3_cfg_t)0x1ffff << MUX_PAD_CTRL_SHIFT)
+-#define NO_PAD_CTRL ((iomux_v3_cfg_t)1 << (MUX_PAD_CTRL_SHIFT + 16))
+ #define MUX_SEL_INPUT_SHIFT 58
+ #define MUX_SEL_INPUT_MASK ((iomux_v3_cfg_t)0xf << MUX_SEL_INPUT_SHIFT)
+
+@@ -85,6 +84,7 @@ typedef u64 iomux_v3_cfg_t;
+ * Use to set PAD control
+ */
+
++#define NO_PAD_CTRL (1 << 16)
+ #define PAD_CTL_DVS (1 << 13)
+ #define PAD_CTL_HYS (1 << 8)
+
+diff --git a/arch/powerpc/sysdev/fsl_rio.c b/arch/powerpc/sysdev/fsl_rio.c
+index b3fd081..cdd765b 100644
+--- a/arch/powerpc/sysdev/fsl_rio.c
++++ b/arch/powerpc/sysdev/fsl_rio.c
+@@ -54,6 +54,7 @@
+ #define ODSR_CLEAR 0x1c00
+ #define LTLEECSR_ENABLE_ALL 0xFFC000FC
+ #define ESCSR_CLEAR 0x07120204
++#define IECSR_CLEAR 0x80000000
+
+ #define RIO_PORT1_EDCSR 0x0640
+ #define RIO_PORT2_EDCSR 0x0680
+@@ -1089,11 +1090,11 @@ static void port_error_handler(struct rio_mport *port, int offset)
+
+ if (offset == 0) {
+ out_be32((u32 *)(rio_regs_win + RIO_PORT1_EDCSR), 0);
+- out_be32((u32 *)(rio_regs_win + RIO_PORT1_IECSR), 0);
++ out_be32((u32 *)(rio_regs_win + RIO_PORT1_IECSR), IECSR_CLEAR);
+ out_be32((u32 *)(rio_regs_win + RIO_ESCSR), ESCSR_CLEAR);
+ } else {
+ out_be32((u32 *)(rio_regs_win + RIO_PORT2_EDCSR), 0);
+- out_be32((u32 *)(rio_regs_win + RIO_PORT2_IECSR), 0);
++ out_be32((u32 *)(rio_regs_win + RIO_PORT2_IECSR), IECSR_CLEAR);
+ out_be32((u32 *)(rio_regs_win + RIO_PORT2_ESCSR), ESCSR_CLEAR);
+ }
+ }
+diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
+index 253986b..2e79419 100644
+--- a/arch/sparc/Kconfig
++++ b/arch/sparc/Kconfig
+@@ -53,6 +53,7 @@ config SPARC64
+ select HAVE_PERF_EVENTS
+ select PERF_USE_VMALLOC
+ select IRQ_PREFLOW_FASTEOI
++ select HAVE_C_RECORDMCOUNT
+
+ config ARCH_DEFCONFIG
+ string
+diff --git a/arch/sparc/include/asm/sigcontext.h b/arch/sparc/include/asm/sigcontext.h
+index a1607d1..69914d7 100644
+--- a/arch/sparc/include/asm/sigcontext.h
++++ b/arch/sparc/include/asm/sigcontext.h
+@@ -45,6 +45,19 @@ typedef struct {
+ int si_mask;
+ } __siginfo32_t;
+
++#define __SIGC_MAXWIN 7
++
++typedef struct {
++ unsigned long locals[8];
++ unsigned long ins[8];
++} __siginfo_reg_window;
++
++typedef struct {
++ int wsaved;
++ __siginfo_reg_window reg_window[__SIGC_MAXWIN];
++ unsigned long rwbuf_stkptrs[__SIGC_MAXWIN];
++} __siginfo_rwin_t;
++
+ #ifdef CONFIG_SPARC64
+ typedef struct {
+ unsigned int si_float_regs [64];
+@@ -73,6 +86,7 @@ struct sigcontext {
+ unsigned long ss_size;
+ } sigc_stack;
+ unsigned long sigc_mask;
++ __siginfo_rwin_t * sigc_rwin_save;
+ };
+
+ #else
+diff --git a/arch/sparc/include/asm/spinlock_32.h b/arch/sparc/include/asm/spinlock_32.h
+index 5f5b8bf..bcc98fc 100644
+--- a/arch/sparc/include/asm/spinlock_32.h
++++ b/arch/sparc/include/asm/spinlock_32.h
+@@ -131,6 +131,15 @@ static inline void arch_write_lock(arch_rwlock_t *rw)
+ *(volatile __u32 *)&lp->lock = ~0U;
+ }
+
++static void inline arch_write_unlock(arch_rwlock_t *lock)
++{
++ __asm__ __volatile__(
++" st %%g0, [%0]"
++ : /* no outputs */
++ : "r" (lock)
++ : "memory");
++}
++
+ static inline int arch_write_trylock(arch_rwlock_t *rw)
+ {
+ unsigned int val;
+@@ -175,8 +184,6 @@ static inline int __arch_read_trylock(arch_rwlock_t *rw)
+ res; \
+ })
+
+-#define arch_write_unlock(rw) do { (rw)->lock = 0; } while(0)
+-
+ #define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
+ #define arch_read_lock_flags(rw, flags) arch_read_lock(rw)
+ #define arch_write_lock_flags(rw, flags) arch_write_lock(rw)
+diff --git a/arch/sparc/include/asm/spinlock_64.h b/arch/sparc/include/asm/spinlock_64.h
+index 073936a..9689176 100644
+--- a/arch/sparc/include/asm/spinlock_64.h
++++ b/arch/sparc/include/asm/spinlock_64.h
+@@ -210,14 +210,8 @@ static int inline arch_write_trylock(arch_rwlock_t *lock)
+ return result;
+ }
+
+-#define arch_read_lock(p) arch_read_lock(p)
+ #define arch_read_lock_flags(p, f) arch_read_lock(p)
+-#define arch_read_trylock(p) arch_read_trylock(p)
+-#define arch_read_unlock(p) arch_read_unlock(p)
+-#define arch_write_lock(p) arch_write_lock(p)
+ #define arch_write_lock_flags(p, f) arch_write_lock(p)
+-#define arch_write_unlock(p) arch_write_unlock(p)
+-#define arch_write_trylock(p) arch_write_trylock(p)
+
+ #define arch_read_can_lock(rw) (!((rw)->lock & 0x80000000UL))
+ #define arch_write_can_lock(rw) (!(rw)->lock)
+diff --git a/arch/sparc/kernel/Makefile b/arch/sparc/kernel/Makefile
+index b90b4a1..cb85458 100644
+--- a/arch/sparc/kernel/Makefile
++++ b/arch/sparc/kernel/Makefile
+@@ -32,6 +32,7 @@ obj-$(CONFIG_SPARC32) += sun4m_irq.o sun4c_irq.o sun4d_irq.o
+
+ obj-y += process_$(BITS).o
+ obj-y += signal_$(BITS).o
++obj-y += sigutil_$(BITS).o
+ obj-$(CONFIG_SPARC32) += ioport.o
+ obj-y += setup_$(BITS).o
+ obj-y += idprom.o
+diff --git a/arch/sparc/kernel/irq.h b/arch/sparc/kernel/irq.h
+index 100b9c2..4285112 100644
+--- a/arch/sparc/kernel/irq.h
++++ b/arch/sparc/kernel/irq.h
+@@ -88,7 +88,7 @@ BTFIXUPDEF_CALL(void, set_irq_udt, int)
+ #define set_irq_udt(cpu) BTFIXUP_CALL(set_irq_udt)(cpu)
+
+ /* All SUN4D IPIs are sent on this IRQ, may be shared with hard IRQs */
+-#define SUN4D_IPI_IRQ 14
++#define SUN4D_IPI_IRQ 13
+
+ extern void sun4d_ipi_interrupt(void);
+
+diff --git a/arch/sparc/kernel/pcic.c b/arch/sparc/kernel/pcic.c
+index 948601a..6418ba6 100644
+--- a/arch/sparc/kernel/pcic.c
++++ b/arch/sparc/kernel/pcic.c
+@@ -352,8 +352,8 @@ int __init pcic_probe(void)
+ strcpy(pbm->prom_name, namebuf);
+
+ {
+- extern volatile int t_nmi[1];
+- extern int pcic_nmi_trap_patch[1];
++ extern volatile int t_nmi[4];
++ extern int pcic_nmi_trap_patch[4];
+
+ t_nmi[0] = pcic_nmi_trap_patch[0];
+ t_nmi[1] = pcic_nmi_trap_patch[1];
+diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
+index 3e9daea..3c5bb78 100644
+--- a/arch/sparc/kernel/setup_64.c
++++ b/arch/sparc/kernel/setup_64.c
+@@ -440,8 +440,14 @@ static void __init init_sparc64_elf_hwcap(void)
+ cap |= AV_SPARC_VIS;
+ if (tlb_type == cheetah || tlb_type == cheetah_plus)
+ cap |= AV_SPARC_VIS | AV_SPARC_VIS2;
+- if (tlb_type == cheetah_plus)
+- cap |= AV_SPARC_POPC;
++ if (tlb_type == cheetah_plus) {
++ unsigned long impl, ver;
++
++ __asm__ __volatile__("rdpr %%ver, %0" : "=r" (ver));
++ impl = ((ver >> 32) & 0xffff);
++ if (impl == PANTHER_IMPL)
++ cap |= AV_SPARC_POPC;
++ }
+ if (tlb_type == hypervisor) {
+ if (sun4v_chip_type == SUN4V_CHIP_NIAGARA1)
+ cap |= AV_SPARC_ASI_BLK_INIT;
+diff --git a/arch/sparc/kernel/signal32.c b/arch/sparc/kernel/signal32.c
+index 75fad42..5d92488 100644
+--- a/arch/sparc/kernel/signal32.c
++++ b/arch/sparc/kernel/signal32.c
+@@ -29,6 +29,8 @@
+ #include <asm/visasm.h>
+ #include <asm/compat_signal.h>
+
++#include "sigutil.h"
++
+ #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+
+ /* This magic should be in g_upper[0] for all upper parts
+@@ -44,14 +46,14 @@ typedef struct {
+ struct signal_frame32 {
+ struct sparc_stackf32 ss;
+ __siginfo32_t info;
+- /* __siginfo_fpu32_t * */ u32 fpu_save;
++ /* __siginfo_fpu_t * */ u32 fpu_save;
+ unsigned int insns[2];
+ unsigned int extramask[_COMPAT_NSIG_WORDS - 1];
+ unsigned int extra_size; /* Should be sizeof(siginfo_extra_v8plus_t) */
+ /* Only valid if (info.si_regs.psr & (PSR_VERS|PSR_IMPL)) == PSR_V8PLUS */
+ siginfo_extra_v8plus_t v8plus;
+- __siginfo_fpu_t fpu_state;
+-};
++ /* __siginfo_rwin_t * */u32 rwin_save;
++} __attribute__((aligned(8)));
+
+ typedef struct compat_siginfo{
+ int si_signo;
+@@ -110,18 +112,14 @@ struct rt_signal_frame32 {
+ compat_siginfo_t info;
+ struct pt_regs32 regs;
+ compat_sigset_t mask;
+- /* __siginfo_fpu32_t * */ u32 fpu_save;
++ /* __siginfo_fpu_t * */ u32 fpu_save;
+ unsigned int insns[2];
+ stack_t32 stack;
+ unsigned int extra_size; /* Should be sizeof(siginfo_extra_v8plus_t) */
+ /* Only valid if (regs.psr & (PSR_VERS|PSR_IMPL)) == PSR_V8PLUS */
+ siginfo_extra_v8plus_t v8plus;
+- __siginfo_fpu_t fpu_state;
+-};
+-
+-/* Align macros */
+-#define SF_ALIGNEDSZ (((sizeof(struct signal_frame32) + 15) & (~15)))
+-#define RT_ALIGNEDSZ (((sizeof(struct rt_signal_frame32) + 15) & (~15)))
++ /* __siginfo_rwin_t * */u32 rwin_save;
++} __attribute__((aligned(8)));
+
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ {
+@@ -192,30 +190,13 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
+ return 0;
+ }
+
+-static int restore_fpu_state32(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
+-{
+- unsigned long *fpregs = current_thread_info()->fpregs;
+- unsigned long fprs;
+- int err;
+-
+- err = __get_user(fprs, &fpu->si_fprs);
+- fprs_write(0);
+- regs->tstate &= ~TSTATE_PEF;
+- if (fprs & FPRS_DL)
+- err |= copy_from_user(fpregs, &fpu->si_float_regs[0], (sizeof(unsigned int) * 32));
+- if (fprs & FPRS_DU)
+- err |= copy_from_user(fpregs+16, &fpu->si_float_regs[32], (sizeof(unsigned int) * 32));
+- err |= __get_user(current_thread_info()->xfsr[0], &fpu->si_fsr);
+- err |= __get_user(current_thread_info()->gsr[0], &fpu->si_gsr);
+- current_thread_info()->fpsaved[0] |= fprs;
+- return err;
+-}
+-
+ void do_sigreturn32(struct pt_regs *regs)
+ {
+ struct signal_frame32 __user *sf;
++ compat_uptr_t fpu_save;
++ compat_uptr_t rwin_save;
+ unsigned int psr;
+- unsigned pc, npc, fpu_save;
++ unsigned pc, npc;
+ sigset_t set;
+ unsigned seta[_COMPAT_NSIG_WORDS];
+ int err, i;
+@@ -273,8 +254,13 @@ void do_sigreturn32(struct pt_regs *regs)
+ pt_regs_clear_syscall(regs);
+
+ err |= __get_user(fpu_save, &sf->fpu_save);
+- if (fpu_save)
+- err |= restore_fpu_state32(regs, &sf->fpu_state);
++ if (!err && fpu_save)
++ err |= restore_fpu_state(regs, compat_ptr(fpu_save));
++ err |= __get_user(rwin_save, &sf->rwin_save);
++ if (!err && rwin_save) {
++ if (restore_rwin_state(compat_ptr(rwin_save)))
++ goto segv;
++ }
+ err |= __get_user(seta[0], &sf->info.si_mask);
+ err |= copy_from_user(seta+1, &sf->extramask,
+ (_COMPAT_NSIG_WORDS - 1) * sizeof(unsigned int));
+@@ -300,7 +286,9 @@ segv:
+ asmlinkage void do_rt_sigreturn32(struct pt_regs *regs)
+ {
+ struct rt_signal_frame32 __user *sf;
+- unsigned int psr, pc, npc, fpu_save, u_ss_sp;
++ unsigned int psr, pc, npc, u_ss_sp;
++ compat_uptr_t fpu_save;
++ compat_uptr_t rwin_save;
+ mm_segment_t old_fs;
+ sigset_t set;
+ compat_sigset_t seta;
+@@ -359,8 +347,8 @@ asmlinkage void do_rt_sigreturn32(struct pt_regs *regs)
+ pt_regs_clear_syscall(regs);
+
+ err |= __get_user(fpu_save, &sf->fpu_save);
+- if (fpu_save)
+- err |= restore_fpu_state32(regs, &sf->fpu_state);
++ if (!err && fpu_save)
++ err |= restore_fpu_state(regs, compat_ptr(fpu_save));
+ err |= copy_from_user(&seta, &sf->mask, sizeof(compat_sigset_t));
+ err |= __get_user(u_ss_sp, &sf->stack.ss_sp);
+ st.ss_sp = compat_ptr(u_ss_sp);
+@@ -376,6 +364,12 @@ asmlinkage void do_rt_sigreturn32(struct pt_regs *regs)
+ do_sigaltstack((stack_t __user *) &st, NULL, (unsigned long)sf);
+ set_fs(old_fs);
+
++ err |= __get_user(rwin_save, &sf->rwin_save);
++ if (!err && rwin_save) {
++ if (restore_rwin_state(compat_ptr(rwin_save)))
++ goto segv;
++ }
++
+ switch (_NSIG_WORDS) {
+ case 4: set.sig[3] = seta.sig[6] + (((long)seta.sig[7]) << 32);
+ case 3: set.sig[2] = seta.sig[4] + (((long)seta.sig[5]) << 32);
+@@ -433,26 +427,6 @@ static void __user *get_sigframe(struct sigaction *sa, struct pt_regs *regs, uns
+ return (void __user *) sp;
+ }
+
+-static int save_fpu_state32(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
+-{
+- unsigned long *fpregs = current_thread_info()->fpregs;
+- unsigned long fprs;
+- int err = 0;
+-
+- fprs = current_thread_info()->fpsaved[0];
+- if (fprs & FPRS_DL)
+- err |= copy_to_user(&fpu->si_float_regs[0], fpregs,
+- (sizeof(unsigned int) * 32));
+- if (fprs & FPRS_DU)
+- err |= copy_to_user(&fpu->si_float_regs[32], fpregs+16,
+- (sizeof(unsigned int) * 32));
+- err |= __put_user(current_thread_info()->xfsr[0], &fpu->si_fsr);
+- err |= __put_user(current_thread_info()->gsr[0], &fpu->si_gsr);
+- err |= __put_user(fprs, &fpu->si_fprs);
+-
+- return err;
+-}
+-
+ /* The I-cache flush instruction only works in the primary ASI, which
+ * right now is the nucleus, aka. kernel space.
+ *
+@@ -515,18 +489,23 @@ static int setup_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ int signo, sigset_t *oldset)
+ {
+ struct signal_frame32 __user *sf;
++ int i, err, wsaved;
++ void __user *tail;
+ int sigframe_size;
+ u32 psr;
+- int i, err;
+ unsigned int seta[_COMPAT_NSIG_WORDS];
+
+ /* 1. Make sure everything is clean */
+ synchronize_user_stack();
+ save_and_clear_fpu();
+
+- sigframe_size = SF_ALIGNEDSZ;
+- if (!(current_thread_info()->fpsaved[0] & FPRS_FEF))
+- sigframe_size -= sizeof(__siginfo_fpu_t);
++ wsaved = get_thread_wsaved();
++
++ sigframe_size = sizeof(*sf);
++ if (current_thread_info()->fpsaved[0] & FPRS_FEF)
++ sigframe_size += sizeof(__siginfo_fpu_t);
++ if (wsaved)
++ sigframe_size += sizeof(__siginfo_rwin_t);
+
+ sf = (struct signal_frame32 __user *)
+ get_sigframe(&ka->sa, regs, sigframe_size);
+@@ -534,8 +513,7 @@ static int setup_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ if (invalid_frame_pointer(sf, sigframe_size))
+ goto sigill;
+
+- if (get_thread_wsaved() != 0)
+- goto sigill;
++ tail = (sf + 1);
+
+ /* 2. Save the current process state */
+ if (test_thread_flag(TIF_32BIT)) {
+@@ -560,11 +538,22 @@ static int setup_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ &sf->v8plus.asi);
+
+ if (psr & PSR_EF) {
+- err |= save_fpu_state32(regs, &sf->fpu_state);
+- err |= __put_user((u64)&sf->fpu_state, &sf->fpu_save);
++ __siginfo_fpu_t __user *fp = tail;
++ tail += sizeof(*fp);
++ err |= save_fpu_state(regs, fp);
++ err |= __put_user((u64)fp, &sf->fpu_save);
+ } else {
+ err |= __put_user(0, &sf->fpu_save);
+ }
++ if (wsaved) {
++ __siginfo_rwin_t __user *rwp = tail;
++ tail += sizeof(*rwp);
++ err |= save_rwin_state(wsaved, rwp);
++ err |= __put_user((u64)rwp, &sf->rwin_save);
++ set_thread_wsaved(0);
++ } else {
++ err |= __put_user(0, &sf->rwin_save);
++ }
+
+ switch (_NSIG_WORDS) {
+ case 4: seta[7] = (oldset->sig[3] >> 32);
+@@ -580,10 +569,21 @@ static int setup_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ err |= __copy_to_user(sf->extramask, seta + 1,
+ (_COMPAT_NSIG_WORDS - 1) * sizeof(unsigned int));
+
+- err |= copy_in_user((u32 __user *)sf,
+- (u32 __user *)(regs->u_regs[UREG_FP]),
+- sizeof(struct reg_window32));
+-
++ if (!wsaved) {
++ err |= copy_in_user((u32 __user *)sf,
++ (u32 __user *)(regs->u_regs[UREG_FP]),
++ sizeof(struct reg_window32));
++ } else {
++ struct reg_window *rp;
++
++ rp = ¤t_thread_info()->reg_window[wsaved - 1];
++ for (i = 0; i < 8; i++)
++ err |= __put_user(rp->locals[i], &sf->ss.locals[i]);
++ for (i = 0; i < 6; i++)
++ err |= __put_user(rp->ins[i], &sf->ss.ins[i]);
++ err |= __put_user(rp->ins[6], &sf->ss.fp);
++ err |= __put_user(rp->ins[7], &sf->ss.callers_pc);
++ }
+ if (err)
+ goto sigsegv;
+
+@@ -613,7 +613,6 @@ static int setup_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ err |= __put_user(0x91d02010, &sf->insns[1]); /*t 0x10*/
+ if (err)
+ goto sigsegv;
+-
+ flush_signal_insns(address);
+ }
+ return 0;
+@@ -632,18 +631,23 @@ static int setup_rt_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ siginfo_t *info)
+ {
+ struct rt_signal_frame32 __user *sf;
++ int i, err, wsaved;
++ void __user *tail;
+ int sigframe_size;
+ u32 psr;
+- int i, err;
+ compat_sigset_t seta;
+
+ /* 1. Make sure everything is clean */
+ synchronize_user_stack();
+ save_and_clear_fpu();
+
+- sigframe_size = RT_ALIGNEDSZ;
+- if (!(current_thread_info()->fpsaved[0] & FPRS_FEF))
+- sigframe_size -= sizeof(__siginfo_fpu_t);
++ wsaved = get_thread_wsaved();
++
++ sigframe_size = sizeof(*sf);
++ if (current_thread_info()->fpsaved[0] & FPRS_FEF)
++ sigframe_size += sizeof(__siginfo_fpu_t);
++ if (wsaved)
++ sigframe_size += sizeof(__siginfo_rwin_t);
+
+ sf = (struct rt_signal_frame32 __user *)
+ get_sigframe(&ka->sa, regs, sigframe_size);
+@@ -651,8 +655,7 @@ static int setup_rt_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ if (invalid_frame_pointer(sf, sigframe_size))
+ goto sigill;
+
+- if (get_thread_wsaved() != 0)
+- goto sigill;
++ tail = (sf + 1);
+
+ /* 2. Save the current process state */
+ if (test_thread_flag(TIF_32BIT)) {
+@@ -677,11 +680,22 @@ static int setup_rt_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ &sf->v8plus.asi);
+
+ if (psr & PSR_EF) {
+- err |= save_fpu_state32(regs, &sf->fpu_state);
+- err |= __put_user((u64)&sf->fpu_state, &sf->fpu_save);
++ __siginfo_fpu_t __user *fp = tail;
++ tail += sizeof(*fp);
++ err |= save_fpu_state(regs, fp);
++ err |= __put_user((u64)fp, &sf->fpu_save);
+ } else {
+ err |= __put_user(0, &sf->fpu_save);
+ }
++ if (wsaved) {
++ __siginfo_rwin_t __user *rwp = tail;
++ tail += sizeof(*rwp);
++ err |= save_rwin_state(wsaved, rwp);
++ err |= __put_user((u64)rwp, &sf->rwin_save);
++ set_thread_wsaved(0);
++ } else {
++ err |= __put_user(0, &sf->rwin_save);
++ }
+
+ /* Update the siginfo structure. */
+ err |= copy_siginfo_to_user32(&sf->info, info);
+@@ -703,9 +717,21 @@ static int setup_rt_frame32(struct k_sigaction *ka, struct pt_regs *regs,
+ }
+ err |= __copy_to_user(&sf->mask, &seta, sizeof(compat_sigset_t));
+
+- err |= copy_in_user((u32 __user *)sf,
+- (u32 __user *)(regs->u_regs[UREG_FP]),
+- sizeof(struct reg_window32));
++ if (!wsaved) {
++ err |= copy_in_user((u32 __user *)sf,
++ (u32 __user *)(regs->u_regs[UREG_FP]),
++ sizeof(struct reg_window32));
++ } else {
++ struct reg_window *rp;
++
++ rp = &current_thread_info()->reg_window[wsaved - 1];
++ for (i = 0; i < 8; i++)
++ err |= __put_user(rp->locals[i], &sf->ss.locals[i]);
++ for (i = 0; i < 6; i++)
++ err |= __put_user(rp->ins[i], &sf->ss.ins[i]);
++ err |= __put_user(rp->ins[6], &sf->ss.fp);
++ err |= __put_user(rp->ins[7], &sf->ss.callers_pc);
++ }
+ if (err)
+ goto sigsegv;
+
+diff --git a/arch/sparc/kernel/signal_32.c b/arch/sparc/kernel/signal_32.c
+index 5e5c5fd..04ede8f 100644
+--- a/arch/sparc/kernel/signal_32.c
++++ b/arch/sparc/kernel/signal_32.c
+@@ -26,6 +26,8 @@
+ #include <asm/pgtable.h>
+ #include <asm/cacheflush.h> /* flush_sig_insns */
+
++#include "sigutil.h"
++
+ #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+
+ extern void fpsave(unsigned long *fpregs, unsigned long *fsr,
+@@ -39,8 +41,8 @@ struct signal_frame {
+ unsigned long insns[2] __attribute__ ((aligned (8)));
+ unsigned int extramask[_NSIG_WORDS - 1];
+ unsigned int extra_size; /* Should be 0 */
+- __siginfo_fpu_t fpu_state;
+-};
++ __siginfo_rwin_t __user *rwin_save;
++} __attribute__((aligned(8)));
+
+ struct rt_signal_frame {
+ struct sparc_stackf ss;
+@@ -51,8 +53,8 @@ struct rt_signal_frame {
+ unsigned int insns[2];
+ stack_t stack;
+ unsigned int extra_size; /* Should be 0 */
+- __siginfo_fpu_t fpu_state;
+-};
++ __siginfo_rwin_t __user *rwin_save;
++} __attribute__((aligned(8)));
+
+ /* Align macros */
+ #define SF_ALIGNEDSZ (((sizeof(struct signal_frame) + 7) & (~7)))
+@@ -79,43 +81,13 @@ asmlinkage int sys_sigsuspend(old_sigset_t set)
+ return _sigpause_common(set);
+ }
+
+-static inline int
+-restore_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
+-{
+- int err;
+-#ifdef CONFIG_SMP
+- if (test_tsk_thread_flag(current, TIF_USEDFPU))
+- regs->psr &= ~PSR_EF;
+-#else
+- if (current == last_task_used_math) {
+- last_task_used_math = NULL;
+- regs->psr &= ~PSR_EF;
+- }
+-#endif
+- set_used_math();
+- clear_tsk_thread_flag(current, TIF_USEDFPU);
+-
+- if (!access_ok(VERIFY_READ, fpu, sizeof(*fpu)))
+- return -EFAULT;
+-
+- err = __copy_from_user(&current->thread.float_regs[0], &fpu->si_float_regs[0],
+- (sizeof(unsigned long) * 32));
+- err |= __get_user(current->thread.fsr, &fpu->si_fsr);
+- err |= __get_user(current->thread.fpqdepth, &fpu->si_fpqdepth);
+- if (current->thread.fpqdepth != 0)
+- err |= __copy_from_user(&current->thread.fpqueue[0],
+- &fpu->si_fpqueue[0],
+- ((sizeof(unsigned long) +
+- (sizeof(unsigned long *)))*16));
+- return err;
+-}
+-
+ asmlinkage void do_sigreturn(struct pt_regs *regs)
+ {
+ struct signal_frame __user *sf;
+ unsigned long up_psr, pc, npc;
+ sigset_t set;
+ __siginfo_fpu_t __user *fpu_save;
++ __siginfo_rwin_t __user *rwin_save;
+ int err;
+
+ /* Always make any pending restarted system calls return -EINTR */
+@@ -150,9 +122,11 @@ asmlinkage void do_sigreturn(struct pt_regs *regs)
+ pt_regs_clear_syscall(regs);
+
+ err |= __get_user(fpu_save, &sf->fpu_save);
+-
+ if (fpu_save)
+ err |= restore_fpu_state(regs, fpu_save);
++ err |= __get_user(rwin_save, &sf->rwin_save);
++ if (rwin_save)
++ err |= restore_rwin_state(rwin_save);
+
+ /* This is pretty much atomic, no amount locking would prevent
+ * the races which exist anyways.
+@@ -180,6 +154,7 @@ asmlinkage void do_rt_sigreturn(struct pt_regs *regs)
+ struct rt_signal_frame __user *sf;
+ unsigned int psr, pc, npc;
+ __siginfo_fpu_t __user *fpu_save;
++ __siginfo_rwin_t __user *rwin_save;
+ mm_segment_t old_fs;
+ sigset_t set;
+ stack_t st;
+@@ -207,8 +182,7 @@ asmlinkage void do_rt_sigreturn(struct pt_regs *regs)
+ pt_regs_clear_syscall(regs);
+
+ err |= __get_user(fpu_save, &sf->fpu_save);
+-
+- if (fpu_save)
++ if (!err && fpu_save)
+ err |= restore_fpu_state(regs, fpu_save);
+ err |= __copy_from_user(&set, &sf->mask, sizeof(sigset_t));
+
+@@ -228,6 +202,12 @@ asmlinkage void do_rt_sigreturn(struct pt_regs *regs)
+ do_sigaltstack((const stack_t __user *) &st, NULL, (unsigned long)sf);
+ set_fs(old_fs);
+
++ err |= __get_user(rwin_save, &sf->rwin_save);
++ if (!err && rwin_save) {
++ if (restore_rwin_state(rwin_save))
++ goto segv;
++ }
++
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sighand->siglock);
+ current->blocked = set;
+@@ -280,53 +260,23 @@ static inline void __user *get_sigframe(struct sigaction *sa, struct pt_regs *re
+ return (void __user *) sp;
+ }
+
+-static inline int
+-save_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
+-{
+- int err = 0;
+-#ifdef CONFIG_SMP
+- if (test_tsk_thread_flag(current, TIF_USEDFPU)) {
+- put_psr(get_psr() | PSR_EF);
+- fpsave(&current->thread.float_regs[0], &current->thread.fsr,
+- &current->thread.fpqueue[0], &current->thread.fpqdepth);
+- regs->psr &= ~(PSR_EF);
+- clear_tsk_thread_flag(current, TIF_USEDFPU);
+- }
+-#else
+- if (current == last_task_used_math) {
+- put_psr(get_psr() | PSR_EF);
+- fpsave(&current->thread.float_regs[0], &current->thread.fsr,
+- &current->thread.fpqueue[0], &current->thread.fpqdepth);
+- last_task_used_math = NULL;
+- regs->psr &= ~(PSR_EF);
+- }
+-#endif
+- err |= __copy_to_user(&fpu->si_float_regs[0],
+- &current->thread.float_regs[0],
+- (sizeof(unsigned long) * 32));
+- err |= __put_user(current->thread.fsr, &fpu->si_fsr);
+- err |= __put_user(current->thread.fpqdepth, &fpu->si_fpqdepth);
+- if (current->thread.fpqdepth != 0)
+- err |= __copy_to_user(&fpu->si_fpqueue[0],
+- &current->thread.fpqueue[0],
+- ((sizeof(unsigned long) +
+- (sizeof(unsigned long *)))*16));
+- clear_used_math();
+- return err;
+-}
+-
+ static int setup_frame(struct k_sigaction *ka, struct pt_regs *regs,
+ int signo, sigset_t *oldset)
+ {
+ struct signal_frame __user *sf;
+- int sigframe_size, err;
++ int sigframe_size, err, wsaved;
++ void __user *tail;
+
+ /* 1. Make sure everything is clean */
+ synchronize_user_stack();
+
+- sigframe_size = SF_ALIGNEDSZ;
+- if (!used_math())
+- sigframe_size -= sizeof(__siginfo_fpu_t);
++ wsaved = current_thread_info()->w_saved;
++
++ sigframe_size = sizeof(*sf);
++ if (used_math())
++ sigframe_size += sizeof(__siginfo_fpu_t);
++ if (wsaved)
++ sigframe_size += sizeof(__siginfo_rwin_t);
+
+ sf = (struct signal_frame __user *)
+ get_sigframe(&ka->sa, regs, sigframe_size);
+@@ -334,8 +284,7 @@ static int setup_frame(struct k_sigaction *ka, struct pt_regs *regs,
+ if (invalid_frame_pointer(sf, sigframe_size))
+ goto sigill_and_return;
+
+- if (current_thread_info()->w_saved != 0)
+- goto sigill_and_return;
++ tail = sf + 1;
+
+ /* 2. Save the current process state */
+ err = __copy_to_user(&sf->info.si_regs, regs, sizeof(struct pt_regs));
+@@ -343,17 +292,34 @@ static int setup_frame(struct k_sigaction *ka, struct pt_regs *regs,
+ err |= __put_user(0, &sf->extra_size);
+
+ if (used_math()) {
+- err |= save_fpu_state(regs, &sf->fpu_state);
+- err |= __put_user(&sf->fpu_state, &sf->fpu_save);
++ __siginfo_fpu_t __user *fp = tail;
++ tail += sizeof(*fp);
++ err |= save_fpu_state(regs, fp);
++ err |= __put_user(fp, &sf->fpu_save);
+ } else {
+ err |= __put_user(0, &sf->fpu_save);
+ }
++ if (wsaved) {
++ __siginfo_rwin_t __user *rwp = tail;
++ tail += sizeof(*rwp);
++ err |= save_rwin_state(wsaved, rwp);
++ err |= __put_user(rwp, &sf->rwin_save);
++ } else {
++ err |= __put_user(0, &sf->rwin_save);
++ }
+
+ err |= __put_user(oldset->sig[0], &sf->info.si_mask);
+ err |= __copy_to_user(sf->extramask, &oldset->sig[1],
+ (_NSIG_WORDS - 1) * sizeof(unsigned int));
+- err |= __copy_to_user(sf, (char *) regs->u_regs[UREG_FP],
+- sizeof(struct reg_window32));
++ if (!wsaved) {
++ err |= __copy_to_user(sf, (char *) regs->u_regs[UREG_FP],
++ sizeof(struct reg_window32));
++ } else {
++ struct reg_window32 *rp;
++
++ rp = &current_thread_info()->reg_window[wsaved - 1];
++ err |= __copy_to_user(sf, rp, sizeof(struct reg_window32));
++ }
+ if (err)
+ goto sigsegv;
+
+@@ -399,21 +365,24 @@ static int setup_rt_frame(struct k_sigaction *ka, struct pt_regs *regs,
+ int signo, sigset_t *oldset, siginfo_t *info)
+ {
+ struct rt_signal_frame __user *sf;
+- int sigframe_size;
++ int sigframe_size, wsaved;
++ void __user *tail;
+ unsigned int psr;
+ int err;
+
+ synchronize_user_stack();
+- sigframe_size = RT_ALIGNEDSZ;
+- if (!used_math())
+- sigframe_size -= sizeof(__siginfo_fpu_t);
++ wsaved = current_thread_info()->w_saved;
++ sigframe_size = sizeof(*sf);
++ if (used_math())
++ sigframe_size += sizeof(__siginfo_fpu_t);
++ if (wsaved)
++ sigframe_size += sizeof(__siginfo_rwin_t);
+ sf = (struct rt_signal_frame __user *)
+ get_sigframe(&ka->sa, regs, sigframe_size);
+ if (invalid_frame_pointer(sf, sigframe_size))
+ goto sigill;
+- if (current_thread_info()->w_saved != 0)
+- goto sigill;
+
++ tail = sf + 1;
+ err = __put_user(regs->pc, &sf->regs.pc);
+ err |= __put_user(regs->npc, &sf->regs.npc);
+ err |= __put_user(regs->y, &sf->regs.y);
+@@ -425,11 +394,21 @@ static int setup_rt_frame(struct k_sigaction *ka, struct pt_regs *regs,
+ err |= __put_user(0, &sf->extra_size);
+
+ if (psr & PSR_EF) {
+- err |= save_fpu_state(regs, &sf->fpu_state);
+- err |= __put_user(&sf->fpu_state, &sf->fpu_save);
++ __siginfo_fpu_t *fp = tail;
++ tail += sizeof(*fp);
++ err |= save_fpu_state(regs, fp);
++ err |= __put_user(fp, &sf->fpu_save);
+ } else {
+ err |= __put_user(0, &sf->fpu_save);
+ }
++ if (wsaved) {
++ __siginfo_rwin_t *rwp = tail;
++ tail += sizeof(*rwp);
++ err |= save_rwin_state(wsaved, rwp);
++ err |= __put_user(rwp, &sf->rwin_save);
++ } else {
++ err |= __put_user(0, &sf->rwin_save);
++ }
+ err |= __copy_to_user(&sf->mask, &oldset->sig[0], sizeof(sigset_t));
+
+ /* Setup sigaltstack */
+@@ -437,8 +416,15 @@ static int setup_rt_frame(struct k_sigaction *ka, struct pt_regs *regs,
+ err |= __put_user(sas_ss_flags(regs->u_regs[UREG_FP]), &sf->stack.ss_flags);
+ err |= __put_user(current->sas_ss_size, &sf->stack.ss_size);
+
+- err |= __copy_to_user(sf, (char *) regs->u_regs[UREG_FP],
+- sizeof(struct reg_window32));
++ if (!wsaved) {
++ err |= __copy_to_user(sf, (char *) regs->u_regs[UREG_FP],
++ sizeof(struct reg_window32));
++ } else {
++ struct reg_window32 *rp;
++
++ rp = &current_thread_info()->reg_window[wsaved - 1];
++ err |= __copy_to_user(sf, rp, sizeof(struct reg_window32));
++ }
+
+ err |= copy_siginfo_to_user(&sf->info, info);
+
+diff --git a/arch/sparc/kernel/signal_64.c b/arch/sparc/kernel/signal_64.c
+index 006fe45..47509df 100644
+--- a/arch/sparc/kernel/signal_64.c
++++ b/arch/sparc/kernel/signal_64.c
+@@ -34,6 +34,7 @@
+
+ #include "entry.h"
+ #include "systbls.h"
++#include "sigutil.h"
+
+ #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+
+@@ -236,7 +237,7 @@ struct rt_signal_frame {
+ __siginfo_fpu_t __user *fpu_save;
+ stack_t stack;
+ sigset_t mask;
+- __siginfo_fpu_t fpu_state;
++ __siginfo_rwin_t *rwin_save;
+ };
+
+ static long _sigpause_common(old_sigset_t set)
+@@ -266,33 +267,12 @@ asmlinkage long sys_sigsuspend(old_sigset_t set)
+ return _sigpause_common(set);
+ }
+
+-static inline int
+-restore_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
+-{
+- unsigned long *fpregs = current_thread_info()->fpregs;
+- unsigned long fprs;
+- int err;
+-
+- err = __get_user(fprs, &fpu->si_fprs);
+- fprs_write(0);
+- regs->tstate &= ~TSTATE_PEF;
+- if (fprs & FPRS_DL)
+- err |= copy_from_user(fpregs, &fpu->si_float_regs[0],
+- (sizeof(unsigned int) * 32));
+- if (fprs & FPRS_DU)
+- err |= copy_from_user(fpregs+16, &fpu->si_float_regs[32],
+- (sizeof(unsigned int) * 32));
+- err |= __get_user(current_thread_info()->xfsr[0], &fpu->si_fsr);
+- err |= __get_user(current_thread_info()->gsr[0], &fpu->si_gsr);
+- current_thread_info()->fpsaved[0] |= fprs;
+- return err;
+-}
+-
+ void do_rt_sigreturn(struct pt_regs *regs)
+ {
+ struct rt_signal_frame __user *sf;
+ unsigned long tpc, tnpc, tstate;
+ __siginfo_fpu_t __user *fpu_save;
++ __siginfo_rwin_t __user *rwin_save;
+ sigset_t set;
+ int err;
+
+@@ -325,8 +305,8 @@ void do_rt_sigreturn(struct pt_regs *regs)
+ regs->tstate |= (tstate & (TSTATE_ASI | TSTATE_ICC | TSTATE_XCC));
+
+ err |= __get_user(fpu_save, &sf->fpu_save);
+- if (fpu_save)
+- err |= restore_fpu_state(regs, &sf->fpu_state);
++ if (!err && fpu_save)
++ err |= restore_fpu_state(regs, fpu_save);
+
+ err |= __copy_from_user(&set, &sf->mask, sizeof(sigset_t));
+ err |= do_sigaltstack(&sf->stack, NULL, (unsigned long)sf);
+@@ -334,6 +314,12 @@ void do_rt_sigreturn(struct pt_regs *regs)
+ if (err)
+ goto segv;
+
++ err |= __get_user(rwin_save, &sf->rwin_save);
++ if (!err && rwin_save) {
++ if (restore_rwin_state(rwin_save))
++ goto segv;
++ }
++
+ regs->tpc = tpc;
+ regs->tnpc = tnpc;
+
+@@ -351,34 +337,13 @@ segv:
+ }
+
+ /* Checks if the fp is valid */
+-static int invalid_frame_pointer(void __user *fp, int fplen)
++static int invalid_frame_pointer(void __user *fp)
+ {
+ if (((unsigned long) fp) & 15)
+ return 1;
+ return 0;
+ }
+
+-static inline int
+-save_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
+-{
+- unsigned long *fpregs = current_thread_info()->fpregs;
+- unsigned long fprs;
+- int err = 0;
+-
+- fprs = current_thread_info()->fpsaved[0];
+- if (fprs & FPRS_DL)
+- err |= copy_to_user(&fpu->si_float_regs[0], fpregs,
+- (sizeof(unsigned int) * 32));
+- if (fprs & FPRS_DU)
+- err |= copy_to_user(&fpu->si_float_regs[32], fpregs+16,
+- (sizeof(unsigned int) * 32));
+- err |= __put_user(current_thread_info()->xfsr[0], &fpu->si_fsr);
+- err |= __put_user(current_thread_info()->gsr[0], &fpu->si_gsr);
+- err |= __put_user(fprs, &fpu->si_fprs);
+-
+- return err;
+-}
+-
+ static inline void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, unsigned long framesize)
+ {
+ unsigned long sp = regs->u_regs[UREG_FP] + STACK_BIAS;
+@@ -414,34 +379,48 @@ setup_rt_frame(struct k_sigaction *ka, struct pt_regs *regs,
+ int signo, sigset_t *oldset, siginfo_t *info)
+ {
+ struct rt_signal_frame __user *sf;
+- int sigframe_size, err;
++ int wsaved, err, sf_size;
++ void __user *tail;
+
+ /* 1. Make sure everything is clean */
+ synchronize_user_stack();
+ save_and_clear_fpu();
+
+- sigframe_size = sizeof(struct rt_signal_frame);
+- if (!(current_thread_info()->fpsaved[0] & FPRS_FEF))
+- sigframe_size -= sizeof(__siginfo_fpu_t);
++ wsaved = get_thread_wsaved();
+
++ sf_size = sizeof(struct rt_signal_frame);
++ if (current_thread_info()->fpsaved[0] & FPRS_FEF)
++ sf_size += sizeof(__siginfo_fpu_t);
++ if (wsaved)
++ sf_size += sizeof(__siginfo_rwin_t);
+ sf = (struct rt_signal_frame __user *)
+- get_sigframe(ka, regs, sigframe_size);
+-
+- if (invalid_frame_pointer (sf, sigframe_size))
+- goto sigill;
++ get_sigframe(ka, regs, sf_size);
+
+- if (get_thread_wsaved() != 0)
++ if (invalid_frame_pointer (sf))
+ goto sigill;
+
++ tail = (sf + 1);
++
+ /* 2. Save the current process state */
+ err = copy_to_user(&sf->regs, regs, sizeof (*regs));
+
+ if (current_thread_info()->fpsaved[0] & FPRS_FEF) {
+- err |= save_fpu_state(regs, &sf->fpu_state);
+- err |= __put_user((u64)&sf->fpu_state, &sf->fpu_save);
++ __siginfo_fpu_t __user *fpu_save = tail;
++ tail += sizeof(__siginfo_fpu_t);
++ err |= save_fpu_state(regs, fpu_save);
++ err |= __put_user((u64)fpu_save, &sf->fpu_save);
+ } else {
+ err |= __put_user(0, &sf->fpu_save);
+ }
++ if (wsaved) {
++ __siginfo_rwin_t __user *rwin_save = tail;
++ tail += sizeof(__siginfo_rwin_t);
++ err |= save_rwin_state(wsaved, rwin_save);
++ err |= __put_user((u64)rwin_save, &sf->rwin_save);
++ set_thread_wsaved(0);
++ } else {
++ err |= __put_user(0, &sf->rwin_save);
++ }
+
+ /* Setup sigaltstack */
+ err |= __put_user(current->sas_ss_sp, &sf->stack.ss_sp);
+@@ -450,10 +429,17 @@ setup_rt_frame(struct k_sigaction *ka, struct pt_regs *regs,
+
+ err |= copy_to_user(&sf->mask, oldset, sizeof(sigset_t));
+
+- err |= copy_in_user((u64 __user *)sf,
+- (u64 __user *)(regs->u_regs[UREG_FP]+STACK_BIAS),
+- sizeof(struct reg_window));
++ if (!wsaved) {
++ err |= copy_in_user((u64 __user *)sf,
++ (u64 __user *)(regs->u_regs[UREG_FP] +
++ STACK_BIAS),
++ sizeof(struct reg_window));
++ } else {
++ struct reg_window *rp;
+
++ rp = &current_thread_info()->reg_window[wsaved - 1];
++ err |= copy_to_user(sf, rp, sizeof(struct reg_window));
++ }
+ if (info)
+ err |= copy_siginfo_to_user(&sf->info, info);
+ else {
+diff --git a/arch/sparc/kernel/sigutil.h b/arch/sparc/kernel/sigutil.h
+new file mode 100644
+index 0000000..d223aa4
+--- /dev/null
++++ b/arch/sparc/kernel/sigutil.h
+@@ -0,0 +1,9 @@
++#ifndef _SIGUTIL_H
++#define _SIGUTIL_H
++
++int save_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu);
++int restore_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu);
++int save_rwin_state(int wsaved, __siginfo_rwin_t __user *rwin);
++int restore_rwin_state(__siginfo_rwin_t __user *rp);
++
++#endif /* _SIGUTIL_H */
+diff --git a/arch/sparc/kernel/sigutil_32.c b/arch/sparc/kernel/sigutil_32.c
+new file mode 100644
+index 0000000..35c7897
+--- /dev/null
++++ b/arch/sparc/kernel/sigutil_32.c
+@@ -0,0 +1,120 @@
++#include <linux/kernel.h>
++#include <linux/types.h>
++#include <linux/thread_info.h>
++#include <linux/uaccess.h>
++#include <linux/sched.h>
++
++#include <asm/sigcontext.h>
++#include <asm/fpumacro.h>
++#include <asm/ptrace.h>
++
++#include "sigutil.h"
++
++int save_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
++{
++ int err = 0;
++#ifdef CONFIG_SMP
++ if (test_tsk_thread_flag(current, TIF_USEDFPU)) {
++ put_psr(get_psr() | PSR_EF);
++ fpsave(&current->thread.float_regs[0], &current->thread.fsr,
++ &current->thread.fpqueue[0], &current->thread.fpqdepth);
++ regs->psr &= ~(PSR_EF);
++ clear_tsk_thread_flag(current, TIF_USEDFPU);
++ }
++#else
++ if (current == last_task_used_math) {
++ put_psr(get_psr() | PSR_EF);
++ fpsave(&current->thread.float_regs[0], &current->thread.fsr,
++ &current->thread.fpqueue[0], &current->thread.fpqdepth);
++ last_task_used_math = NULL;
++ regs->psr &= ~(PSR_EF);
++ }
++#endif
++ err |= __copy_to_user(&fpu->si_float_regs[0],
++ &current->thread.float_regs[0],
++ (sizeof(unsigned long) * 32));
++ err |= __put_user(current->thread.fsr, &fpu->si_fsr);
++ err |= __put_user(current->thread.fpqdepth, &fpu->si_fpqdepth);
++ if (current->thread.fpqdepth != 0)
++ err |= __copy_to_user(&fpu->si_fpqueue[0],
++ &current->thread.fpqueue[0],
++ ((sizeof(unsigned long) +
++ (sizeof(unsigned long *)))*16));
++ clear_used_math();
++ return err;
++}
++
++int restore_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
++{
++ int err;
++#ifdef CONFIG_SMP
++ if (test_tsk_thread_flag(current, TIF_USEDFPU))
++ regs->psr &= ~PSR_EF;
++#else
++ if (current == last_task_used_math) {
++ last_task_used_math = NULL;
++ regs->psr &= ~PSR_EF;
++ }
++#endif
++ set_used_math();
++ clear_tsk_thread_flag(current, TIF_USEDFPU);
++
++ if (!access_ok(VERIFY_READ, fpu, sizeof(*fpu)))
++ return -EFAULT;
++
++ err = __copy_from_user(&current->thread.float_regs[0], &fpu->si_float_regs[0],
++ (sizeof(unsigned long) * 32));
++ err |= __get_user(current->thread.fsr, &fpu->si_fsr);
++ err |= __get_user(current->thread.fpqdepth, &fpu->si_fpqdepth);
++ if (current->thread.fpqdepth != 0)
++ err |= __copy_from_user(&current->thread.fpqueue[0],
++ &fpu->si_fpqueue[0],
++ ((sizeof(unsigned long) +
++ (sizeof(unsigned long *)))*16));
++ return err;
++}
++
++int save_rwin_state(int wsaved, __siginfo_rwin_t __user *rwin)
++{
++ int i, err = __put_user(wsaved, &rwin->wsaved);
++
++ for (i = 0; i < wsaved; i++) {
++ struct reg_window32 *rp;
++ unsigned long fp;
++
++ rp = &current_thread_info()->reg_window[i];
++ fp = current_thread_info()->rwbuf_stkptrs[i];
++ err |= copy_to_user(&rwin->reg_window[i], rp,
++ sizeof(struct reg_window32));
++ err |= __put_user(fp, &rwin->rwbuf_stkptrs[i]);
++ }
++ return err;
++}
++
++int restore_rwin_state(__siginfo_rwin_t __user *rp)
++{
++ struct thread_info *t = current_thread_info();
++ int i, wsaved, err;
++
++ __get_user(wsaved, &rp->wsaved);
++ if (wsaved > NSWINS)
++ return -EFAULT;
++
++ err = 0;
++ for (i = 0; i < wsaved; i++) {
++ err |= copy_from_user(&t->reg_window[i],
++ &rp->reg_window[i],
++ sizeof(struct reg_window32));
++ err |= __get_user(t->rwbuf_stkptrs[i],
++ &rp->rwbuf_stkptrs[i]);
++ }
++ if (err)
++ return err;
++
++ t->w_saved = wsaved;
++ synchronize_user_stack();
++ if (t->w_saved)
++ return -EFAULT;
++ return 0;
++
++}
+diff --git a/arch/sparc/kernel/sigutil_64.c b/arch/sparc/kernel/sigutil_64.c
+new file mode 100644
+index 0000000..6edc4e5
+--- /dev/null
++++ b/arch/sparc/kernel/sigutil_64.c
+@@ -0,0 +1,93 @@
++#include <linux/kernel.h>
++#include <linux/types.h>
++#include <linux/thread_info.h>
++#include <linux/uaccess.h>
++
++#include <asm/sigcontext.h>
++#include <asm/fpumacro.h>
++#include <asm/ptrace.h>
++
++#include "sigutil.h"
++
++int save_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
++{
++ unsigned long *fpregs = current_thread_info()->fpregs;
++ unsigned long fprs;
++ int err = 0;
++
++ fprs = current_thread_info()->fpsaved[0];
++ if (fprs & FPRS_DL)
++ err |= copy_to_user(&fpu->si_float_regs[0], fpregs,
++ (sizeof(unsigned int) * 32));
++ if (fprs & FPRS_DU)
++ err |= copy_to_user(&fpu->si_float_regs[32], fpregs+16,
++ (sizeof(unsigned int) * 32));
++ err |= __put_user(current_thread_info()->xfsr[0], &fpu->si_fsr);
++ err |= __put_user(current_thread_info()->gsr[0], &fpu->si_gsr);
++ err |= __put_user(fprs, &fpu->si_fprs);
++
++ return err;
++}
++
++int restore_fpu_state(struct pt_regs *regs, __siginfo_fpu_t __user *fpu)
++{
++ unsigned long *fpregs = current_thread_info()->fpregs;
++ unsigned long fprs;
++ int err;
++
++ err = __get_user(fprs, &fpu->si_fprs);
++ fprs_write(0);
++ regs->tstate &= ~TSTATE_PEF;
++ if (fprs & FPRS_DL)
++ err |= copy_from_user(fpregs, &fpu->si_float_regs[0],
++ (sizeof(unsigned int) * 32));
++ if (fprs & FPRS_DU)
++ err |= copy_from_user(fpregs+16, &fpu->si_float_regs[32],
++ (sizeof(unsigned int) * 32));
++ err |= __get_user(current_thread_info()->xfsr[0], &fpu->si_fsr);
++ err |= __get_user(current_thread_info()->gsr[0], &fpu->si_gsr);
++ current_thread_info()->fpsaved[0] |= fprs;
++ return err;
++}
++
++int save_rwin_state(int wsaved, __siginfo_rwin_t __user *rwin)
++{
++ int i, err = __put_user(wsaved, &rwin->wsaved);
++
++ for (i = 0; i < wsaved; i++) {
++ struct reg_window *rp = &current_thread_info()->reg_window[i];
++ unsigned long fp = current_thread_info()->rwbuf_stkptrs[i];
++
++ err |= copy_to_user(&rwin->reg_window[i], rp,
++ sizeof(struct reg_window));
++ err |= __put_user(fp, &rwin->rwbuf_stkptrs[i]);
++ }
++ return err;
++}
++
++int restore_rwin_state(__siginfo_rwin_t __user *rp)
++{
++ struct thread_info *t = current_thread_info();
++ int i, wsaved, err;
++
++ __get_user(wsaved, &rp->wsaved);
++ if (wsaved > NSWINS)
++ return -EFAULT;
++
++ err = 0;
++ for (i = 0; i < wsaved; i++) {
++ err |= copy_from_user(&t->reg_window[i],
++ &rp->reg_window[i],
++ sizeof(struct reg_window));
++ err |= __get_user(t->rwbuf_stkptrs[i],
++ &rp->rwbuf_stkptrs[i]);
++ }
++ if (err)
++ return err;
++
++ set_thread_wsaved(wsaved);
++ synchronize_user_stack();
++ if (get_thread_wsaved())
++ return -EFAULT;
++ return 0;
++}
+diff --git a/arch/x86/kernel/amd_iommu.c b/arch/x86/kernel/amd_iommu.c
+index 7c3a95e..d3d9d50 100644
+--- a/arch/x86/kernel/amd_iommu.c
++++ b/arch/x86/kernel/amd_iommu.c
+@@ -531,7 +531,9 @@ static void build_inv_all(struct iommu_cmd *cmd)
+ * Writes the command to the IOMMUs command buffer and informs the
+ * hardware about the new command.
+ */
+-static int iommu_queue_command(struct amd_iommu *iommu, struct iommu_cmd *cmd)
++static int iommu_queue_command_sync(struct amd_iommu *iommu,
++ struct iommu_cmd *cmd,
++ bool sync)
+ {
+ u32 left, tail, head, next_tail;
+ unsigned long flags;
+@@ -565,13 +567,18 @@ again:
+ copy_cmd_to_buffer(iommu, cmd, tail);
+
+ /* We need to sync now to make sure all commands are processed */
+- iommu->need_sync = true;
++ iommu->need_sync = sync;
+
+ spin_unlock_irqrestore(&iommu->lock, flags);
+
+ return 0;
+ }
+
++static int iommu_queue_command(struct amd_iommu *iommu, struct iommu_cmd *cmd)
++{
++ return iommu_queue_command_sync(iommu, cmd, true);
++}
++
+ /*
+ * This function queues a completion wait command into the command
+ * buffer of an IOMMU
+@@ -587,7 +594,7 @@ static int iommu_completion_wait(struct amd_iommu *iommu)
+
+ build_completion_wait(&cmd, (u64)&sem);
+
+- ret = iommu_queue_command(iommu, &cmd);
++ ret = iommu_queue_command_sync(iommu, &cmd, false);
+ if (ret)
+ return ret;
+
+@@ -773,14 +780,9 @@ static void domain_flush_complete(struct protection_domain *domain)
+ static void domain_flush_devices(struct protection_domain *domain)
+ {
+ struct iommu_dev_data *dev_data;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&domain->lock, flags);
+
+ list_for_each_entry(dev_data, &domain->dev_list, list)
+ device_flush_dte(dev_data->dev);
+-
+- spin_unlock_irqrestore(&domain->lock, flags);
+ }
+
+ /****************************************************************************
+diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
+index 3a0338b..bf6d692 100644
+--- a/arch/x86/kernel/cpu/perf_event.c
++++ b/arch/x86/kernel/cpu/perf_event.c
+@@ -1856,6 +1856,9 @@ perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
+
+ perf_callchain_store(entry, regs->ip);
+
++ if (!current->mm)
++ return;
++
+ if (perf_callchain_user32(regs, entry))
+ return;
+
+diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
+index 41178c8..dd208a8 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel.c
++++ b/arch/x86/kernel/cpu/perf_event_intel.c
+@@ -1495,6 +1495,7 @@ static __init int intel_pmu_init(void)
+ break;
+
+ case 42: /* SandyBridge */
++ case 45: /* SandyBridge, "Romely-EP" */
+ memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
+ sizeof(hw_cache_event_ids));
+
+diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
+index 60aeeb5..acea42e 100644
+--- a/arch/x86/xen/setup.c
++++ b/arch/x86/xen/setup.c
+@@ -185,6 +185,19 @@ static unsigned long __init xen_set_identity(const struct e820entry *list,
+ PFN_UP(start_pci), PFN_DOWN(last));
+ return identity;
+ }
++
++static unsigned long __init xen_get_max_pages(void)
++{
++ unsigned long max_pages = MAX_DOMAIN_PAGES;
++ domid_t domid = DOMID_SELF;
++ int ret;
++
++ ret = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &domid);
++ if (ret > 0)
++ max_pages = ret;
++ return min(max_pages, MAX_DOMAIN_PAGES);
++}
++
+ /**
+ * machine_specific_memory_setup - Hook for machine specific memory setup.
+ **/
+@@ -293,6 +306,14 @@ char * __init xen_memory_setup(void)
+
+ sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
+
++ extra_limit = xen_get_max_pages();
++ if (max_pfn + extra_pages > extra_limit) {
++ if (extra_limit > max_pfn)
++ extra_pages = extra_limit - max_pfn;
++ else
++ extra_pages = 0;
++ }
++
+ extra_pages += xen_return_unused_memory(xen_start_info->nr_pages, &e820);
+
+ /*
+diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
+index e79dbb9..d4fc6d4 100644
+--- a/arch/x86/xen/smp.c
++++ b/arch/x86/xen/smp.c
+@@ -32,6 +32,7 @@
+ #include <xen/page.h>
+ #include <xen/events.h>
+
++#include <xen/hvc-console.h>
+ #include "xen-ops.h"
+ #include "mmu.h"
+
+@@ -207,6 +208,15 @@ static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
+ unsigned cpu;
+ unsigned int i;
+
++ if (skip_ioapic_setup) {
++ char *m = (max_cpus == 0) ?
++ "The nosmp parameter is incompatible with Xen; " \
++ "use Xen dom0_max_vcpus=1 parameter" :
++ "The noapic parameter is incompatible with Xen";
++
++ xen_raw_printk(m);
++ panic(m);
++ }
+ xen_init_lock_cpu(0);
+
+ smp_store_cpu_info(0);
+diff --git a/arch/x86/xen/xen-asm_32.S b/arch/x86/xen/xen-asm_32.S
+index 22a2093..b040b0e 100644
+--- a/arch/x86/xen/xen-asm_32.S
++++ b/arch/x86/xen/xen-asm_32.S
+@@ -113,11 +113,13 @@ xen_iret_start_crit:
+
+ /*
+ * If there's something pending, mask events again so we can
+- * jump back into xen_hypervisor_callback
++ * jump back into xen_hypervisor_callback. Otherwise do not
++ * touch XEN_vcpu_info_mask.
+ */
+- sete XEN_vcpu_info_mask(%eax)
++ jne 1f
++ movb $1, XEN_vcpu_info_mask(%eax)
+
+- popl %eax
++1: popl %eax
+
+ /*
+ * From this point on the registers are restored and the stack
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index bcaf16e..b596e54 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -785,10 +785,10 @@ static int blkio_policy_parse_and_set(char *buf,
+ {
+ char *s[4], *p, *major_s = NULL, *minor_s = NULL;
+ int ret;
+- unsigned long major, minor, temp;
++ unsigned long major, minor;
+ int i = 0;
+ dev_t dev;
+- u64 bps, iops;
++ u64 temp;
+
+ memset(s, 0, sizeof(s));
+
+@@ -826,20 +826,23 @@ static int blkio_policy_parse_and_set(char *buf,
+
+ dev = MKDEV(major, minor);
+
+- ret = blkio_check_dev_num(dev);
++ ret = strict_strtoull(s[1], 10, &temp);
+ if (ret)
+- return ret;
++ return -EINVAL;
+
+- newpn->dev = dev;
++ /* For rule removal, do not check for device presence. */
++ if (temp) {
++ ret = blkio_check_dev_num(dev);
++ if (ret)
++ return ret;
++ }
+
+- if (s[1] == NULL)
+- return -EINVAL;
++ newpn->dev = dev;
+
+ switch (plid) {
+ case BLKIO_POLICY_PROP:
+- ret = strict_strtoul(s[1], 10, &temp);
+- if (ret || (temp < BLKIO_WEIGHT_MIN && temp > 0) ||
+- temp > BLKIO_WEIGHT_MAX)
++ if ((temp < BLKIO_WEIGHT_MIN && temp > 0) ||
++ temp > BLKIO_WEIGHT_MAX)
+ return -EINVAL;
+
+ newpn->plid = plid;
+@@ -850,26 +853,18 @@ static int blkio_policy_parse_and_set(char *buf,
+ switch(fileid) {
+ case BLKIO_THROTL_read_bps_device:
+ case BLKIO_THROTL_write_bps_device:
+- ret = strict_strtoull(s[1], 10, &bps);
+- if (ret)
+- return -EINVAL;
+-
+ newpn->plid = plid;
+ newpn->fileid = fileid;
+- newpn->val.bps = bps;
++ newpn->val.bps = temp;
+ break;
+ case BLKIO_THROTL_read_iops_device:
+ case BLKIO_THROTL_write_iops_device:
+- ret = strict_strtoull(s[1], 10, &iops);
+- if (ret)
+- return -EINVAL;
+-
+- if (iops > THROTL_IOPS_MAX)
++ if (temp > THROTL_IOPS_MAX)
+ return -EINVAL;
+
+ newpn->plid = plid;
+ newpn->fileid = fileid;
+- newpn->val.iops = (unsigned int)iops;
++ newpn->val.iops = (unsigned int)temp;
+ break;
+ }
+ break;
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 1d49e1c..847d04e 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -348,9 +348,10 @@ void blk_put_queue(struct request_queue *q)
+ EXPORT_SYMBOL(blk_put_queue);
+
+ /*
+- * Note: If a driver supplied the queue lock, it should not zap that lock
+- * unexpectedly as some queue cleanup components like elevator_exit() and
+- * blk_throtl_exit() need queue lock.
++ * Note: If a driver supplied the queue lock, it is disconnected
++ * by this function. The actual state of the lock doesn't matter
++ * here as the request_queue isn't accessible after this point
++ * (QUEUE_FLAG_DEAD is set) and no other requests will be queued.
+ */
+ void blk_cleanup_queue(struct request_queue *q)
+ {
+@@ -367,10 +368,8 @@ void blk_cleanup_queue(struct request_queue *q)
+ queue_flag_set_unlocked(QUEUE_FLAG_DEAD, q);
+ mutex_unlock(&q->sysfs_lock);
+
+- if (q->elevator)
+- elevator_exit(q->elevator);
+-
+- blk_throtl_exit(q);
++ if (q->queue_lock != &q->__queue_lock)
++ q->queue_lock = &q->__queue_lock;
+
+ blk_put_queue(q);
+ }
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index d935bd8..45c56d8 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -472,6 +472,11 @@ static void blk_release_queue(struct kobject *kobj)
+
+ blk_sync_queue(q);
+
++ if (q->elevator)
++ elevator_exit(q->elevator);
++
++ blk_throtl_exit(q);
++
+ if (rl->rq_pool)
+ mempool_destroy(rl->rq_pool);
+
+diff --git a/drivers/acpi/acpica/acconfig.h b/drivers/acpi/acpica/acconfig.h
+index bc533dd..f895a24 100644
+--- a/drivers/acpi/acpica/acconfig.h
++++ b/drivers/acpi/acpica/acconfig.h
+@@ -121,7 +121,7 @@
+
+ /* Maximum sleep allowed via Sleep() operator */
+
+-#define ACPI_MAX_SLEEP 20000 /* Two seconds */
++#define ACPI_MAX_SLEEP 2000 /* Two seconds */
+
+ /******************************************************************************
+ *
+diff --git a/drivers/acpi/acpica/aclocal.h b/drivers/acpi/acpica/aclocal.h
+index c7f743c..5552125 100644
+--- a/drivers/acpi/acpica/aclocal.h
++++ b/drivers/acpi/acpica/aclocal.h
+@@ -357,6 +357,7 @@ struct acpi_predefined_data {
+ char *pathname;
+ const union acpi_predefined_info *predefined;
+ union acpi_operand_object *parent_package;
++ struct acpi_namespace_node *node;
+ u32 flags;
+ u8 node_flags;
+ };
+diff --git a/drivers/acpi/acpica/nspredef.c b/drivers/acpi/acpica/nspredef.c
+index 9fb03fa..dc00582 100644
+--- a/drivers/acpi/acpica/nspredef.c
++++ b/drivers/acpi/acpica/nspredef.c
+@@ -212,6 +212,7 @@ acpi_ns_check_predefined_names(struct acpi_namespace_node *node,
+ goto cleanup;
+ }
+ data->predefined = predefined;
++ data->node = node;
+ data->node_flags = node->flags;
+ data->pathname = pathname;
+
+diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c
+index 973883b..024c4f2 100644
+--- a/drivers/acpi/acpica/nsrepair2.c
++++ b/drivers/acpi/acpica/nsrepair2.c
+@@ -503,6 +503,21 @@ acpi_ns_repair_TSS(struct acpi_predefined_data *data,
+ {
+ union acpi_operand_object *return_object = *return_object_ptr;
+ acpi_status status;
++ struct acpi_namespace_node *node;
++
++ /*
++ * We can only sort the _TSS return package if there is no _PSS in the
++ * same scope. This is because if _PSS is present, the ACPI specification
++ * dictates that the _TSS Power Dissipation field is to be ignored, and
++ * therefore some BIOSs leave garbage values in the _TSS Power field(s).
++ * In this case, it is best to just return the _TSS package as-is.
++ * (May, 2011)
++ */
++ status =
++ acpi_ns_get_node(data->node, "^_PSS", ACPI_NS_NO_UPSEARCH, &node);
++ if (ACPI_SUCCESS(status)) {
++ return (AE_OK);
++ }
+
+ status = acpi_ns_check_sorted_list(data, return_object, 5, 1,
+ ACPI_SORT_DESCENDING,
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 71afe03..cab6960 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -267,6 +267,7 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x1e05), board_ahci }, /* Panther Point RAID */
+ { PCI_VDEVICE(INTEL, 0x1e06), board_ahci }, /* Panther Point RAID */
+ { PCI_VDEVICE(INTEL, 0x1e07), board_ahci }, /* Panther Point RAID */
++ { PCI_VDEVICE(INTEL, 0x1e0e), board_ahci }, /* Panther Point RAID */
+
+ /* JMicron 360/1/3/5/6, match class to avoid IDE function */
+ { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+diff --git a/drivers/ata/pata_via.c b/drivers/ata/pata_via.c
+index ac8d7d9..d6d4f57 100644
+--- a/drivers/ata/pata_via.c
++++ b/drivers/ata/pata_via.c
+@@ -124,6 +124,17 @@ static const struct via_isa_bridge {
+ { NULL }
+ };
+
++static const struct dmi_system_id no_atapi_dma_dmi_table[] = {
++ {
++ .ident = "AVERATEC 3200",
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "AVERATEC"),
++ DMI_MATCH(DMI_BOARD_NAME, "3200"),
++ },
++ },
++ { }
++};
++
+ struct via_port {
+ u8 cached_device;
+ };
+@@ -355,6 +366,13 @@ static unsigned long via_mode_filter(struct ata_device *dev, unsigned long mask)
+ mask &= ~ ATA_MASK_UDMA;
+ }
+ }
++
++ if (dev->class == ATA_DEV_ATAPI &&
++ dmi_check_system(no_atapi_dma_dmi_table)) {
++ ata_dev_printk(dev, KERN_WARNING, "controller locks up on ATAPI DMA, forcing PIO\n");
++ mask &= ATA_MASK_PIO;
++ }
++
+ return mask;
+ }
+
+diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c
+index bbb03e6..06ed6b4 100644
+--- a/drivers/base/firmware_class.c
++++ b/drivers/base/firmware_class.c
+@@ -521,11 +521,6 @@ static int _request_firmware(const struct firmware **firmware_p,
+ if (!firmware_p)
+ return -EINVAL;
+
+- if (WARN_ON(usermodehelper_is_disabled())) {
+- dev_err(device, "firmware: %s will not be loaded\n", name);
+- return -EBUSY;
+- }
+-
+ *firmware_p = firmware = kzalloc(sizeof(*firmware), GFP_KERNEL);
+ if (!firmware) {
+ dev_err(device, "%s: kmalloc(struct firmware) failed\n",
+@@ -539,6 +534,12 @@ static int _request_firmware(const struct firmware **firmware_p,
+ return 0;
+ }
+
++ if (WARN_ON(usermodehelper_is_disabled())) {
++ dev_err(device, "firmware: %s will not be loaded\n", name);
++ retval = -EBUSY;
++ goto out;
++ }
++
+ if (uevent)
+ dev_dbg(device, "firmware: requesting %s\n", name);
+
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 98de8f4..9955a53 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -4250,7 +4250,7 @@ static int __init floppy_init(void)
+ use_virtual_dma = can_use_virtual_dma & 1;
+ fdc_state[0].address = FDC1;
+ if (fdc_state[0].address == -1) {
+- del_timer(&fd_timeout);
++ del_timer_sync(&fd_timeout);
+ err = -ENODEV;
+ goto out_unreg_region;
+ }
+@@ -4261,7 +4261,7 @@ static int __init floppy_init(void)
+ fdc = 0; /* reset fdc in case of unexpected interrupt */
+ err = floppy_grab_irq_and_dma();
+ if (err) {
+- del_timer(&fd_timeout);
++ del_timer_sync(&fd_timeout);
+ err = -EBUSY;
+ goto out_unreg_region;
+ }
+@@ -4318,7 +4318,7 @@ static int __init floppy_init(void)
+ user_reset_fdc(-1, FD_RESET_ALWAYS, false);
+ }
+ fdc = 0;
+- del_timer(&fd_timeout);
++ del_timer_sync(&fd_timeout);
+ current_drive = 0;
+ initialized = true;
+ if (have_no_fdc) {
+@@ -4368,7 +4368,7 @@ out_unreg_blkdev:
+ unregister_blkdev(FLOPPY_MAJOR, "fd");
+ out_put_disk:
+ while (dr--) {
+- del_timer(&motor_off_timer[dr]);
++ del_timer_sync(&motor_off_timer[dr]);
+ if (disks[dr]->queue)
+ blk_cleanup_queue(disks[dr]->queue);
+ put_disk(disks[dr]);
+diff --git a/drivers/char/tpm/tpm.c b/drivers/char/tpm/tpm.c
+index 7beb0e2..b85ee76 100644
+--- a/drivers/char/tpm/tpm.c
++++ b/drivers/char/tpm/tpm.c
+@@ -383,6 +383,9 @@ static ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
+ u32 count, ordinal;
+ unsigned long stop;
+
++ if (bufsiz > TPM_BUFSIZE)
++ bufsiz = TPM_BUFSIZE;
++
+ count = be32_to_cpu(*((__be32 *) (buf + 2)));
+ ordinal = be32_to_cpu(*((__be32 *) (buf + 6)));
+ if (count == 0)
+@@ -1052,6 +1055,7 @@ ssize_t tpm_read(struct file *file, char __user *buf,
+ {
+ struct tpm_chip *chip = file->private_data;
+ ssize_t ret_size;
++ int rc;
+
+ del_singleshot_timer_sync(&chip->user_read_timer);
+ flush_work_sync(&chip->work);
+@@ -1062,8 +1066,11 @@ ssize_t tpm_read(struct file *file, char __user *buf,
+ ret_size = size;
+
+ mutex_lock(&chip->buffer_mutex);
+- if (copy_to_user(buf, chip->data_buffer, ret_size))
++ rc = copy_to_user(buf, chip->data_buffer, ret_size);
++ memset(chip->data_buffer, 0, ret_size);
++ if (rc)
+ ret_size = -EFAULT;
++
+ mutex_unlock(&chip->buffer_mutex);
+ }
+
+diff --git a/drivers/cpufreq/pcc-cpufreq.c b/drivers/cpufreq/pcc-cpufreq.c
+index 7b0603e..cdc02ac 100644
+--- a/drivers/cpufreq/pcc-cpufreq.c
++++ b/drivers/cpufreq/pcc-cpufreq.c
+@@ -261,6 +261,9 @@ static int pcc_get_offset(int cpu)
+ pr = per_cpu(processors, cpu);
+ pcc_cpu_data = per_cpu_ptr(pcc_cpu_info, cpu);
+
++ if (!pr)
++ return -ENODEV;
++
+ status = acpi_evaluate_object(pr->handle, "PCCP", NULL, &buffer);
+ if (ACPI_FAILURE(status))
+ return -ENODEV;
+diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c
+index ebb8973..ee76c8e 100644
+--- a/drivers/firewire/ohci.c
++++ b/drivers/firewire/ohci.c
+@@ -291,6 +291,9 @@ static const struct {
+ {PCI_VENDOR_ID_NEC, PCI_ANY_ID, PCI_ANY_ID,
+ QUIRK_CYCLE_TIMER},
+
++ {PCI_VENDOR_ID_O2, PCI_ANY_ID, PCI_ANY_ID,
++ QUIRK_NO_MSI},
++
+ {PCI_VENDOR_ID_RICOH, PCI_ANY_ID, PCI_ANY_ID,
+ QUIRK_CYCLE_TIMER},
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+index 82fad91..ca6028f 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
++++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+@@ -37,8 +37,11 @@ nouveau_sgdma_populate(struct ttm_backend *be, unsigned long num_pages,
+ return -ENOMEM;
+
+ nvbe->ttm_alloced = kmalloc(sizeof(bool) * num_pages, GFP_KERNEL);
+- if (!nvbe->ttm_alloced)
++ if (!nvbe->ttm_alloced) {
++ kfree(nvbe->pages);
++ nvbe->pages = NULL;
+ return -ENOMEM;
++ }
+
+ nvbe->nr_pages = 0;
+ while (num_pages--) {
+diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
+index 15bd047..c975581 100644
+--- a/drivers/gpu/drm/radeon/evergreen.c
++++ b/drivers/gpu/drm/radeon/evergreen.c
+@@ -41,6 +41,31 @@ static void evergreen_gpu_init(struct radeon_device *rdev);
+ void evergreen_fini(struct radeon_device *rdev);
+ static void evergreen_pcie_gen2_enable(struct radeon_device *rdev);
+
++void evergreen_fix_pci_max_read_req_size(struct radeon_device *rdev)
++{
++ u16 ctl, v;
++ int cap, err;
++
++ cap = pci_pcie_cap(rdev->pdev);
++ if (!cap)
++ return;
++
++ err = pci_read_config_word(rdev->pdev, cap + PCI_EXP_DEVCTL, &ctl);
++ if (err)
++ return;
++
++ v = (ctl & PCI_EXP_DEVCTL_READRQ) >> 12;
++
++ /* if bios or OS sets MAX_READ_REQUEST_SIZE to an invalid value, fix it
++ * to avoid hangs or perfomance issues
++ */
++ if ((v == 0) || (v == 6) || (v == 7)) {
++ ctl &= ~PCI_EXP_DEVCTL_READRQ;
++ ctl |= (2 << 12);
++ pci_write_config_word(rdev->pdev, cap + PCI_EXP_DEVCTL, ctl);
++ }
++}
++
+ void evergreen_pre_page_flip(struct radeon_device *rdev, int crtc)
+ {
+ /* enable the pflip int */
+@@ -1357,6 +1382,7 @@ int evergreen_cp_resume(struct radeon_device *rdev)
+ SOFT_RESET_PA |
+ SOFT_RESET_SH |
+ SOFT_RESET_VGT |
++ SOFT_RESET_SPI |
+ SOFT_RESET_SX));
+ RREG32(GRBM_SOFT_RESET);
+ mdelay(15);
+@@ -1378,7 +1404,8 @@ int evergreen_cp_resume(struct radeon_device *rdev)
+ /* Initialize the ring buffer's read and write pointers */
+ WREG32(CP_RB_CNTL, tmp | RB_RPTR_WR_ENA);
+ WREG32(CP_RB_RPTR_WR, 0);
+- WREG32(CP_RB_WPTR, 0);
++ rdev->cp.wptr = 0;
++ WREG32(CP_RB_WPTR, rdev->cp.wptr);
+
+ /* set the wb address wether it's enabled or not */
+ WREG32(CP_RB_RPTR_ADDR,
+@@ -1403,7 +1430,6 @@ int evergreen_cp_resume(struct radeon_device *rdev)
+ WREG32(CP_DEBUG, (1 << 27) | (1 << 28));
+
+ rdev->cp.rptr = RREG32(CP_RB_RPTR);
+- rdev->cp.wptr = RREG32(CP_RB_WPTR);
+
+ evergreen_cp_start(rdev);
+ rdev->cp.ready = true;
+@@ -1865,6 +1891,8 @@ static void evergreen_gpu_init(struct radeon_device *rdev)
+
+ WREG32(GRBM_CNTL, GRBM_READ_TIMEOUT(0xff));
+
++ evergreen_fix_pci_max_read_req_size(rdev);
++
+ cc_gc_shader_pipe_config = RREG32(CC_GC_SHADER_PIPE_CONFIG) & ~2;
+
+ cc_gc_shader_pipe_config |=
+@@ -3142,21 +3170,23 @@ int evergreen_suspend(struct radeon_device *rdev)
+ }
+
+ int evergreen_copy_blit(struct radeon_device *rdev,
+- uint64_t src_offset, uint64_t dst_offset,
+- unsigned num_pages, struct radeon_fence *fence)
++ uint64_t src_offset,
++ uint64_t dst_offset,
++ unsigned num_gpu_pages,
++ struct radeon_fence *fence)
+ {
+ int r;
+
+ mutex_lock(&rdev->r600_blit.mutex);
+ rdev->r600_blit.vb_ib = NULL;
+- r = evergreen_blit_prepare_copy(rdev, num_pages * RADEON_GPU_PAGE_SIZE);
++ r = evergreen_blit_prepare_copy(rdev, num_gpu_pages * RADEON_GPU_PAGE_SIZE);
+ if (r) {
+ if (rdev->r600_blit.vb_ib)
+ radeon_ib_free(rdev, &rdev->r600_blit.vb_ib);
+ mutex_unlock(&rdev->r600_blit.mutex);
+ return r;
+ }
+- evergreen_kms_blit_copy(rdev, src_offset, dst_offset, num_pages * RADEON_GPU_PAGE_SIZE);
++ evergreen_kms_blit_copy(rdev, src_offset, dst_offset, num_gpu_pages * RADEON_GPU_PAGE_SIZE);
+ evergreen_blit_done_copy(rdev, fence);
+ mutex_unlock(&rdev->r600_blit.mutex);
+ return 0;
+diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
+index 559dbd4..0b132a3 100644
+--- a/drivers/gpu/drm/radeon/ni.c
++++ b/drivers/gpu/drm/radeon/ni.c
+@@ -39,6 +39,7 @@ extern int evergreen_mc_wait_for_idle(struct radeon_device *rdev);
+ extern void evergreen_mc_program(struct radeon_device *rdev);
+ extern void evergreen_irq_suspend(struct radeon_device *rdev);
+ extern int evergreen_mc_init(struct radeon_device *rdev);
++extern void evergreen_fix_pci_max_read_req_size(struct radeon_device *rdev);
+
+ #define EVERGREEN_PFP_UCODE_SIZE 1120
+ #define EVERGREEN_PM4_UCODE_SIZE 1376
+@@ -669,6 +670,8 @@ static void cayman_gpu_init(struct radeon_device *rdev)
+
+ WREG32(GRBM_CNTL, GRBM_READ_TIMEOUT(0xff));
+
++ evergreen_fix_pci_max_read_req_size(rdev);
++
+ mc_shared_chmap = RREG32(MC_SHARED_CHMAP);
+ mc_arb_ramcfg = RREG32(MC_ARB_RAMCFG);
+
+@@ -1158,6 +1161,7 @@ int cayman_cp_resume(struct radeon_device *rdev)
+ SOFT_RESET_PA |
+ SOFT_RESET_SH |
+ SOFT_RESET_VGT |
++ SOFT_RESET_SPI |
+ SOFT_RESET_SX));
+ RREG32(GRBM_SOFT_RESET);
+ mdelay(15);
+@@ -1182,7 +1186,8 @@ int cayman_cp_resume(struct radeon_device *rdev)
+
+ /* Initialize the ring buffer's read and write pointers */
+ WREG32(CP_RB0_CNTL, tmp | RB_RPTR_WR_ENA);
+- WREG32(CP_RB0_WPTR, 0);
++ rdev->cp.wptr = 0;
++ WREG32(CP_RB0_WPTR, rdev->cp.wptr);
+
+ /* set the wb address wether it's enabled or not */
+ WREG32(CP_RB0_RPTR_ADDR, (rdev->wb.gpu_addr + RADEON_WB_CP_RPTR_OFFSET) & 0xFFFFFFFC);
+@@ -1202,7 +1207,6 @@ int cayman_cp_resume(struct radeon_device *rdev)
+ WREG32(CP_RB0_BASE, rdev->cp.gpu_addr >> 8);
+
+ rdev->cp.rptr = RREG32(CP_RB0_RPTR);
+- rdev->cp.wptr = RREG32(CP_RB0_WPTR);
+
+ /* ring1 - compute only */
+ /* Set ring buffer size */
+@@ -1215,7 +1219,8 @@ int cayman_cp_resume(struct radeon_device *rdev)
+
+ /* Initialize the ring buffer's read and write pointers */
+ WREG32(CP_RB1_CNTL, tmp | RB_RPTR_WR_ENA);
+- WREG32(CP_RB1_WPTR, 0);
++ rdev->cp1.wptr = 0;
++ WREG32(CP_RB1_WPTR, rdev->cp1.wptr);
+
+ /* set the wb address wether it's enabled or not */
+ WREG32(CP_RB1_RPTR_ADDR, (rdev->wb.gpu_addr + RADEON_WB_CP1_RPTR_OFFSET) & 0xFFFFFFFC);
+@@ -1227,7 +1232,6 @@ int cayman_cp_resume(struct radeon_device *rdev)
+ WREG32(CP_RB1_BASE, rdev->cp1.gpu_addr >> 8);
+
+ rdev->cp1.rptr = RREG32(CP_RB1_RPTR);
+- rdev->cp1.wptr = RREG32(CP_RB1_WPTR);
+
+ /* ring2 - compute only */
+ /* Set ring buffer size */
+@@ -1240,7 +1244,8 @@ int cayman_cp_resume(struct radeon_device *rdev)
+
+ /* Initialize the ring buffer's read and write pointers */
+ WREG32(CP_RB2_CNTL, tmp | RB_RPTR_WR_ENA);
+- WREG32(CP_RB2_WPTR, 0);
++ rdev->cp2.wptr = 0;
++ WREG32(CP_RB2_WPTR, rdev->cp2.wptr);
+
+ /* set the wb address wether it's enabled or not */
+ WREG32(CP_RB2_RPTR_ADDR, (rdev->wb.gpu_addr + RADEON_WB_CP2_RPTR_OFFSET) & 0xFFFFFFFC);
+@@ -1252,7 +1257,6 @@ int cayman_cp_resume(struct radeon_device *rdev)
+ WREG32(CP_RB2_BASE, rdev->cp2.gpu_addr >> 8);
+
+ rdev->cp2.rptr = RREG32(CP_RB2_RPTR);
+- rdev->cp2.wptr = RREG32(CP_RB2_WPTR);
+
+ /* start the rings */
+ cayman_cp_start(rdev);
+diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
+index f2204cb..830e1f1 100644
+--- a/drivers/gpu/drm/radeon/r100.c
++++ b/drivers/gpu/drm/radeon/r100.c
+@@ -721,11 +721,11 @@ void r100_fence_ring_emit(struct radeon_device *rdev,
+ int r100_copy_blit(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+- unsigned num_pages,
++ unsigned num_gpu_pages,
+ struct radeon_fence *fence)
+ {
+ uint32_t cur_pages;
+- uint32_t stride_bytes = PAGE_SIZE;
++ uint32_t stride_bytes = RADEON_GPU_PAGE_SIZE;
+ uint32_t pitch;
+ uint32_t stride_pixels;
+ unsigned ndw;
+@@ -737,7 +737,7 @@ int r100_copy_blit(struct radeon_device *rdev,
+ /* radeon pitch is /64 */
+ pitch = stride_bytes / 64;
+ stride_pixels = stride_bytes / 4;
+- num_loops = DIV_ROUND_UP(num_pages, 8191);
++ num_loops = DIV_ROUND_UP(num_gpu_pages, 8191);
+
+ /* Ask for enough room for blit + flush + fence */
+ ndw = 64 + (10 * num_loops);
+@@ -746,12 +746,12 @@ int r100_copy_blit(struct radeon_device *rdev,
+ DRM_ERROR("radeon: moving bo (%d) asking for %u dw.\n", r, ndw);
+ return -EINVAL;
+ }
+- while (num_pages > 0) {
+- cur_pages = num_pages;
++ while (num_gpu_pages > 0) {
++ cur_pages = num_gpu_pages;
+ if (cur_pages > 8191) {
+ cur_pages = 8191;
+ }
+- num_pages -= cur_pages;
++ num_gpu_pages -= cur_pages;
+
+ /* pages are in Y direction - height
+ page width in X direction - width */
+@@ -990,7 +990,8 @@ int r100_cp_init(struct radeon_device *rdev, unsigned ring_size)
+ /* Force read & write ptr to 0 */
+ WREG32(RADEON_CP_RB_CNTL, tmp | RADEON_RB_RPTR_WR_ENA | RADEON_RB_NO_UPDATE);
+ WREG32(RADEON_CP_RB_RPTR_WR, 0);
+- WREG32(RADEON_CP_RB_WPTR, 0);
++ rdev->cp.wptr = 0;
++ WREG32(RADEON_CP_RB_WPTR, rdev->cp.wptr);
+
+ /* set the wb address whether it's enabled or not */
+ WREG32(R_00070C_CP_RB_RPTR_ADDR,
+@@ -1007,9 +1008,6 @@ int r100_cp_init(struct radeon_device *rdev, unsigned ring_size)
+ WREG32(RADEON_CP_RB_CNTL, tmp);
+ udelay(10);
+ rdev->cp.rptr = RREG32(RADEON_CP_RB_RPTR);
+- rdev->cp.wptr = RREG32(RADEON_CP_RB_WPTR);
+- /* protect against crazy HW on resume */
+- rdev->cp.wptr &= rdev->cp.ptr_mask;
+ /* Set cp mode to bus mastering & enable cp*/
+ WREG32(RADEON_CP_CSQ_MODE,
+ REG_SET(RADEON_INDIRECT2_START, indirect2_start) |
+diff --git a/drivers/gpu/drm/radeon/r200.c b/drivers/gpu/drm/radeon/r200.c
+index f240583..a1f3ba0 100644
+--- a/drivers/gpu/drm/radeon/r200.c
++++ b/drivers/gpu/drm/radeon/r200.c
+@@ -84,7 +84,7 @@ static int r200_get_vtx_size_0(uint32_t vtx_fmt_0)
+ int r200_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+- unsigned num_pages,
++ unsigned num_gpu_pages,
+ struct radeon_fence *fence)
+ {
+ uint32_t size;
+@@ -93,7 +93,7 @@ int r200_copy_dma(struct radeon_device *rdev,
+ int r = 0;
+
+ /* radeon pitch is /64 */
+- size = num_pages << PAGE_SHIFT;
++ size = num_gpu_pages << RADEON_GPU_PAGE_SHIFT;
+ num_loops = DIV_ROUND_UP(size, 0x1FFFFF);
+ r = radeon_ring_lock(rdev, num_loops * 4 + 64);
+ if (r) {
+diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
+index bc54b26..1dea9d6 100644
+--- a/drivers/gpu/drm/radeon/r600.c
++++ b/drivers/gpu/drm/radeon/r600.c
+@@ -2208,7 +2208,8 @@ int r600_cp_resume(struct radeon_device *rdev)
+ /* Initialize the ring buffer's read and write pointers */
+ WREG32(CP_RB_CNTL, tmp | RB_RPTR_WR_ENA);
+ WREG32(CP_RB_RPTR_WR, 0);
+- WREG32(CP_RB_WPTR, 0);
++ rdev->cp.wptr = 0;
++ WREG32(CP_RB_WPTR, rdev->cp.wptr);
+
+ /* set the wb address whether it's enabled or not */
+ WREG32(CP_RB_RPTR_ADDR,
+@@ -2233,7 +2234,6 @@ int r600_cp_resume(struct radeon_device *rdev)
+ WREG32(CP_DEBUG, (1 << 27) | (1 << 28));
+
+ rdev->cp.rptr = RREG32(CP_RB_RPTR);
+- rdev->cp.wptr = RREG32(CP_RB_WPTR);
+
+ r600_cp_start(rdev);
+ rdev->cp.ready = true;
+@@ -2355,21 +2355,23 @@ void r600_fence_ring_emit(struct radeon_device *rdev,
+ }
+
+ int r600_copy_blit(struct radeon_device *rdev,
+- uint64_t src_offset, uint64_t dst_offset,
+- unsigned num_pages, struct radeon_fence *fence)
++ uint64_t src_offset,
++ uint64_t dst_offset,
++ unsigned num_gpu_pages,
++ struct radeon_fence *fence)
+ {
+ int r;
+
+ mutex_lock(&rdev->r600_blit.mutex);
+ rdev->r600_blit.vb_ib = NULL;
+- r = r600_blit_prepare_copy(rdev, num_pages * RADEON_GPU_PAGE_SIZE);
++ r = r600_blit_prepare_copy(rdev, num_gpu_pages * RADEON_GPU_PAGE_SIZE);
+ if (r) {
+ if (rdev->r600_blit.vb_ib)
+ radeon_ib_free(rdev, &rdev->r600_blit.vb_ib);
+ mutex_unlock(&rdev->r600_blit.mutex);
+ return r;
+ }
+- r600_kms_blit_copy(rdev, src_offset, dst_offset, num_pages * RADEON_GPU_PAGE_SIZE);
++ r600_kms_blit_copy(rdev, src_offset, dst_offset, num_gpu_pages * RADEON_GPU_PAGE_SIZE);
+ r600_blit_done_copy(rdev, fence);
+ mutex_unlock(&rdev->r600_blit.mutex);
+ return 0;
+diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
+index ef0e0e0..0bb4ddf 100644
+--- a/drivers/gpu/drm/radeon/radeon.h
++++ b/drivers/gpu/drm/radeon/radeon.h
+@@ -322,6 +322,7 @@ union radeon_gart_table {
+
+ #define RADEON_GPU_PAGE_SIZE 4096
+ #define RADEON_GPU_PAGE_MASK (RADEON_GPU_PAGE_SIZE - 1)
++#define RADEON_GPU_PAGE_SHIFT 12
+
+ struct radeon_gart {
+ dma_addr_t table_addr;
+@@ -914,17 +915,17 @@ struct radeon_asic {
+ int (*copy_blit)(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+- unsigned num_pages,
++ unsigned num_gpu_pages,
+ struct radeon_fence *fence);
+ int (*copy_dma)(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+- unsigned num_pages,
++ unsigned num_gpu_pages,
+ struct radeon_fence *fence);
+ int (*copy)(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+- unsigned num_pages,
++ unsigned num_gpu_pages,
+ struct radeon_fence *fence);
+ uint32_t (*get_engine_clock)(struct radeon_device *rdev);
+ void (*set_engine_clock)(struct radeon_device *rdev, uint32_t eng_clock);
+diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
+index 3d7a0d7..3dedaa0 100644
+--- a/drivers/gpu/drm/radeon/radeon_asic.h
++++ b/drivers/gpu/drm/radeon/radeon_asic.h
+@@ -75,7 +75,7 @@ uint32_t r100_pll_rreg(struct radeon_device *rdev, uint32_t reg);
+ int r100_copy_blit(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+- unsigned num_pages,
++ unsigned num_gpu_pages,
+ struct radeon_fence *fence);
+ int r100_set_surface_reg(struct radeon_device *rdev, int reg,
+ uint32_t tiling_flags, uint32_t pitch,
+@@ -143,7 +143,7 @@ extern void r100_post_page_flip(struct radeon_device *rdev, int crtc);
+ extern int r200_copy_dma(struct radeon_device *rdev,
+ uint64_t src_offset,
+ uint64_t dst_offset,
+- unsigned num_pages,
++ unsigned num_gpu_pages,
+ struct radeon_fence *fence);
+ void r200_set_safe_registers(struct radeon_device *rdev);
+
+@@ -311,7 +311,7 @@ void r600_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
+ int r600_ring_test(struct radeon_device *rdev);
+ int r600_copy_blit(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+- unsigned num_pages, struct radeon_fence *fence);
++ unsigned num_gpu_pages, struct radeon_fence *fence);
+ void r600_hpd_init(struct radeon_device *rdev);
+ void r600_hpd_fini(struct radeon_device *rdev);
+ bool r600_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd);
+@@ -403,7 +403,7 @@ void evergreen_bandwidth_update(struct radeon_device *rdev);
+ void evergreen_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib);
+ int evergreen_copy_blit(struct radeon_device *rdev,
+ uint64_t src_offset, uint64_t dst_offset,
+- unsigned num_pages, struct radeon_fence *fence);
++ unsigned num_gpu_pages, struct radeon_fence *fence);
+ void evergreen_hpd_init(struct radeon_device *rdev);
+ void evergreen_hpd_fini(struct radeon_device *rdev);
+ bool evergreen_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd);
+diff --git a/drivers/gpu/drm/radeon/radeon_clocks.c b/drivers/gpu/drm/radeon/radeon_clocks.c
+index 2d48e7a..b956cf1 100644
+--- a/drivers/gpu/drm/radeon/radeon_clocks.c
++++ b/drivers/gpu/drm/radeon/radeon_clocks.c
+@@ -219,6 +219,9 @@ void radeon_get_clock_info(struct drm_device *dev)
+ } else {
+ DRM_INFO("Using generic clock info\n");
+
++ /* may need to be per card */
++ rdev->clock.max_pixel_clock = 35000;
++
+ if (rdev->flags & RADEON_IS_IGP) {
+ p1pll->reference_freq = 1432;
+ p2pll->reference_freq = 1432;
+diff --git a/drivers/gpu/drm/radeon/radeon_combios.c b/drivers/gpu/drm/radeon/radeon_combios.c
+index a74217c..cd3c86c 100644
+--- a/drivers/gpu/drm/radeon/radeon_combios.c
++++ b/drivers/gpu/drm/radeon/radeon_combios.c
+@@ -3279,6 +3279,14 @@ void radeon_combios_asic_init(struct drm_device *dev)
+ rdev->pdev->subsystem_device == 0x30a4)
+ return;
+
++ /* quirk for rs4xx Compaq Presario V5245EU laptop to make it resume
++ * - it hangs on resume inside the dynclk 1 table.
++ */
++ if (rdev->family == CHIP_RS480 &&
++ rdev->pdev->subsystem_vendor == 0x103c &&
++ rdev->pdev->subsystem_device == 0x30ae)
++ return;
++
+ /* DYN CLK 1 */
+ table = combios_get_table_offset(dev, COMBIOS_DYN_CLK_1_TABLE);
+ if (table)
+diff --git a/drivers/gpu/drm/radeon/radeon_encoders.c b/drivers/gpu/drm/radeon/radeon_encoders.c
+index 319d85d..13690f3 100644
+--- a/drivers/gpu/drm/radeon/radeon_encoders.c
++++ b/drivers/gpu/drm/radeon/radeon_encoders.c
+@@ -1507,7 +1507,14 @@ radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode)
+ switch (mode) {
+ case DRM_MODE_DPMS_ON:
+ args.ucAction = ATOM_ENABLE;
+- atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
++ /* workaround for DVOOutputControl on some RS690 systems */
++ if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DDI) {
++ u32 reg = RREG32(RADEON_BIOS_3_SCRATCH);
++ WREG32(RADEON_BIOS_3_SCRATCH, reg & ~ATOM_S3_DFP2I_ACTIVE);
++ atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
++ WREG32(RADEON_BIOS_3_SCRATCH, reg);
++ } else
++ atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
+ if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
+ args.ucAction = ATOM_LCD_BLON;
+ atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
+diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
+index 60125dd..3e9b41b 100644
+--- a/drivers/gpu/drm/radeon/radeon_ttm.c
++++ b/drivers/gpu/drm/radeon/radeon_ttm.c
+@@ -277,7 +277,12 @@ static int radeon_move_blit(struct ttm_buffer_object *bo,
+ DRM_ERROR("Trying to move memory with CP turned off.\n");
+ return -EINVAL;
+ }
+- r = radeon_copy(rdev, old_start, new_start, new_mem->num_pages, fence);
++
++ BUILD_BUG_ON((PAGE_SIZE % RADEON_GPU_PAGE_SIZE) != 0);
++
++ r = radeon_copy(rdev, old_start, new_start,
++ new_mem->num_pages * (PAGE_SIZE / RADEON_GPU_PAGE_SIZE), /* GPU pages */
++ fence);
+ /* FIXME: handle copy error */
+ r = ttm_bo_move_accel_cleanup(bo, (void *)fence, NULL,
+ evict, no_wait_reserve, no_wait_gpu, new_mem);
+diff --git a/drivers/hwmon/ds620.c b/drivers/hwmon/ds620.c
+index 257957c..4f7c3fc 100644
+--- a/drivers/hwmon/ds620.c
++++ b/drivers/hwmon/ds620.c
+@@ -72,7 +72,7 @@ struct ds620_data {
+ char valid; /* !=0 if following fields are valid */
+ unsigned long last_updated; /* In jiffies */
+
+- u16 temp[3]; /* Register values, word */
++ s16 temp[3]; /* Register values, word */
+ };
+
+ /*
+diff --git a/drivers/hwmon/max16065.c b/drivers/hwmon/max16065.c
+index d94a24f..dd2d7b9 100644
+--- a/drivers/hwmon/max16065.c
++++ b/drivers/hwmon/max16065.c
+@@ -124,7 +124,7 @@ static inline int MV_TO_LIMIT(int mv, int range)
+
+ static inline int ADC_TO_CURR(int adc, int gain)
+ {
+- return adc * 1400000 / gain * 255;
++ return adc * 1400000 / (gain * 255);
+ }
+
+ /*
+diff --git a/drivers/infiniband/hw/cxgb3/iwch_cm.c b/drivers/infiniband/hw/cxgb3/iwch_cm.c
+index 0a5008f..2332dc2 100644
+--- a/drivers/infiniband/hw/cxgb3/iwch_cm.c
++++ b/drivers/infiniband/hw/cxgb3/iwch_cm.c
+@@ -287,7 +287,7 @@ void __free_ep(struct kref *kref)
+ if (test_bit(RELEASE_RESOURCES, &ep->com.flags)) {
+ cxgb3_remove_tid(ep->com.tdev, (void *)ep, ep->hwtid);
+ dst_release(ep->dst);
+- l2t_release(L2DATA(ep->com.tdev), ep->l2t);
++ l2t_release(ep->com.tdev, ep->l2t);
+ }
+ kfree(ep);
+ }
+@@ -1178,7 +1178,7 @@ static int act_open_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
+ release_tid(ep->com.tdev, GET_TID(rpl), NULL);
+ cxgb3_free_atid(ep->com.tdev, ep->atid);
+ dst_release(ep->dst);
+- l2t_release(L2DATA(ep->com.tdev), ep->l2t);
++ l2t_release(ep->com.tdev, ep->l2t);
+ put_ep(&ep->com);
+ return CPL_RET_BUF_DONE;
+ }
+@@ -1375,7 +1375,7 @@ static int pass_accept_req(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
+ if (!child_ep) {
+ printk(KERN_ERR MOD "%s - failed to allocate ep entry!\n",
+ __func__);
+- l2t_release(L2DATA(tdev), l2t);
++ l2t_release(tdev, l2t);
+ dst_release(dst);
+ goto reject;
+ }
+@@ -1952,7 +1952,7 @@ int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ if (!err)
+ goto out;
+
+- l2t_release(L2DATA(h->rdev.t3cdev_p), ep->l2t);
++ l2t_release(h->rdev.t3cdev_p, ep->l2t);
+ fail4:
+ dst_release(ep->dst);
+ fail3:
+@@ -2123,7 +2123,7 @@ int iwch_ep_redirect(void *ctx, struct dst_entry *old, struct dst_entry *new,
+ PDBG("%s ep %p redirect to dst %p l2t %p\n", __func__, ep, new,
+ l2t);
+ dst_hold(new);
+- l2t_release(L2DATA(ep->com.tdev), ep->l2t);
++ l2t_release(ep->com.tdev, ep->l2t);
+ ep->l2t = l2t;
+ dst_release(old);
+ ep->dst = new;
+diff --git a/drivers/leds/ledtrig-timer.c b/drivers/leds/ledtrig-timer.c
+index d87c9d0..328c64c 100644
+--- a/drivers/leds/ledtrig-timer.c
++++ b/drivers/leds/ledtrig-timer.c
+@@ -41,6 +41,7 @@ static ssize_t led_delay_on_store(struct device *dev,
+
+ if (count == size) {
+ led_blink_set(led_cdev, &state, &led_cdev->blink_delay_off);
++ led_cdev->blink_delay_on = state;
+ ret = count;
+ }
+
+@@ -69,6 +70,7 @@ static ssize_t led_delay_off_store(struct device *dev,
+
+ if (count == size) {
+ led_blink_set(led_cdev, &led_cdev->blink_delay_on, &state);
++ led_cdev->blink_delay_off = state;
+ ret = count;
+ }
+
+diff --git a/drivers/md/linear.h b/drivers/md/linear.h
+index 0ce29b6..2f2da05 100644
+--- a/drivers/md/linear.h
++++ b/drivers/md/linear.h
+@@ -10,9 +10,9 @@ typedef struct dev_info dev_info_t;
+
+ struct linear_private_data
+ {
++ struct rcu_head rcu;
+ sector_t array_sectors;
+ dev_info_t disks[0];
+- struct rcu_head rcu;
+ };
+
+
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 91e31e2..8554082 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -1084,8 +1084,11 @@ static int super_90_load(mdk_rdev_t *rdev, mdk_rdev_t *refdev, int minor_version
+ ret = 0;
+ }
+ rdev->sectors = rdev->sb_start;
++ /* Limit to 4TB as metadata cannot record more than that */
++ if (rdev->sectors >= (2ULL << 32))
++ rdev->sectors = (2ULL << 32) - 2;
+
+- if (rdev->sectors < sb->size * 2 && sb->level > 1)
++ if (rdev->sectors < ((sector_t)sb->size) * 2 && sb->level >= 1)
+ /* "this cannot possibly happen" ... */
+ ret = -EINVAL;
+
+@@ -1119,7 +1122,7 @@ static int super_90_validate(mddev_t *mddev, mdk_rdev_t *rdev)
+ mddev->clevel[0] = 0;
+ mddev->layout = sb->layout;
+ mddev->raid_disks = sb->raid_disks;
+- mddev->dev_sectors = sb->size * 2;
++ mddev->dev_sectors = ((sector_t)sb->size) * 2;
+ mddev->events = ev1;
+ mddev->bitmap_info.offset = 0;
+ mddev->bitmap_info.default_offset = MD_SB_BYTES >> 9;
+@@ -1361,6 +1364,11 @@ super_90_rdev_size_change(mdk_rdev_t *rdev, sector_t num_sectors)
+ rdev->sb_start = calc_dev_sboffset(rdev);
+ if (!num_sectors || num_sectors > rdev->sb_start)
+ num_sectors = rdev->sb_start;
++ /* Limit to 4TB as metadata cannot record more than that.
++ * 4TB == 2^32 KB, or 2*2^32 sectors.
++ */
++ if (num_sectors >= (2ULL << 32))
++ num_sectors = (2ULL << 32) - 2;
+ md_super_write(rdev->mddev, rdev, rdev->sb_start, rdev->sb_size,
+ rdev->sb_page);
+ md_super_wait(rdev->mddev);
+diff --git a/drivers/media/dvb/dvb-usb/vp7045.c b/drivers/media/dvb/dvb-usb/vp7045.c
+index 3db89e3..536c16c 100644
+--- a/drivers/media/dvb/dvb-usb/vp7045.c
++++ b/drivers/media/dvb/dvb-usb/vp7045.c
+@@ -224,26 +224,8 @@ static struct dvb_usb_device_properties vp7045_properties;
+ static int vp7045_usb_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+ {
+- struct dvb_usb_device *d;
+- int ret = dvb_usb_device_init(intf, &vp7045_properties,
+- THIS_MODULE, &d, adapter_nr);
+- if (ret)
+- return ret;
+-
+- d->priv = kmalloc(20, GFP_KERNEL);
+- if (!d->priv) {
+- dvb_usb_device_exit(intf);
+- return -ENOMEM;
+- }
+-
+- return ret;
+-}
+-
+-static void vp7045_usb_disconnect(struct usb_interface *intf)
+-{
+- struct dvb_usb_device *d = usb_get_intfdata(intf);
+- kfree(d->priv);
+- dvb_usb_device_exit(intf);
++ return dvb_usb_device_init(intf, &vp7045_properties,
++ THIS_MODULE, NULL, adapter_nr);
+ }
+
+ static struct usb_device_id vp7045_usb_table [] = {
+@@ -258,7 +240,7 @@ MODULE_DEVICE_TABLE(usb, vp7045_usb_table);
+ static struct dvb_usb_device_properties vp7045_properties = {
+ .usb_ctrl = CYPRESS_FX2,
+ .firmware = "dvb-usb-vp7045-01.fw",
+- .size_of_priv = sizeof(u8 *),
++ .size_of_priv = 20,
+
+ .num_adapters = 1,
+ .adapter = {
+@@ -305,7 +287,7 @@ static struct dvb_usb_device_properties vp7045_properties = {
+ static struct usb_driver vp7045_usb_driver = {
+ .name = "dvb_usb_vp7045",
+ .probe = vp7045_usb_probe,
+- .disconnect = vp7045_usb_disconnect,
++ .disconnect = dvb_usb_device_exit,
+ .id_table = vp7045_usb_table,
+ };
+
+diff --git a/drivers/media/rc/nuvoton-cir.c b/drivers/media/rc/nuvoton-cir.c
+index ce595f9..9fd019e 100644
+--- a/drivers/media/rc/nuvoton-cir.c
++++ b/drivers/media/rc/nuvoton-cir.c
+@@ -624,7 +624,6 @@ static void nvt_dump_rx_buf(struct nvt_dev *nvt)
+ static void nvt_process_rx_ir_data(struct nvt_dev *nvt)
+ {
+ DEFINE_IR_RAW_EVENT(rawir);
+- unsigned int count;
+ u32 carrier;
+ u8 sample;
+ int i;
+@@ -637,65 +636,38 @@ static void nvt_process_rx_ir_data(struct nvt_dev *nvt)
+ if (nvt->carrier_detect_enabled)
+ carrier = nvt_rx_carrier_detect(nvt);
+
+- count = nvt->pkts;
+- nvt_dbg_verbose("Processing buffer of len %d", count);
++ nvt_dbg_verbose("Processing buffer of len %d", nvt->pkts);
+
+ init_ir_raw_event(&rawir);
+
+- for (i = 0; i < count; i++) {
+- nvt->pkts--;
++ for (i = 0; i < nvt->pkts; i++) {
+ sample = nvt->buf[i];
+
+ rawir.pulse = ((sample & BUF_PULSE_BIT) != 0);
+ rawir.duration = US_TO_NS((sample & BUF_LEN_MASK)
+ * SAMPLE_PERIOD);
+
+- if ((sample & BUF_LEN_MASK) == BUF_LEN_MASK) {
+- if (nvt->rawir.pulse == rawir.pulse)
+- nvt->rawir.duration += rawir.duration;
+- else {
+- nvt->rawir.duration = rawir.duration;
+- nvt->rawir.pulse = rawir.pulse;
+- }
+- continue;
+- }
+-
+- rawir.duration += nvt->rawir.duration;
++ nvt_dbg("Storing %s with duration %d",
++ rawir.pulse ? "pulse" : "space", rawir.duration);
+
+- init_ir_raw_event(&nvt->rawir);
+- nvt->rawir.duration = 0;
+- nvt->rawir.pulse = rawir.pulse;
+-
+- if (sample == BUF_PULSE_BIT)
+- rawir.pulse = false;
+-
+- if (rawir.duration) {
+- nvt_dbg("Storing %s with duration %d",
+- rawir.pulse ? "pulse" : "space",
+- rawir.duration);
+-
+- ir_raw_event_store_with_filter(nvt->rdev, &rawir);
+- }
++ ir_raw_event_store_with_filter(nvt->rdev, &rawir);
+
+ /*
+ * BUF_PULSE_BIT indicates end of IR data, BUF_REPEAT_BYTE
+ * indicates end of IR signal, but new data incoming. In both
+ * cases, it means we're ready to call ir_raw_event_handle
+ */
+- if ((sample == BUF_PULSE_BIT) && nvt->pkts) {
++ if ((sample == BUF_PULSE_BIT) && (i + 1 < nvt->pkts)) {
+ nvt_dbg("Calling ir_raw_event_handle (signal end)\n");
+ ir_raw_event_handle(nvt->rdev);
+ }
+ }
+
++ nvt->pkts = 0;
++
+ nvt_dbg("Calling ir_raw_event_handle (buffer empty)\n");
+ ir_raw_event_handle(nvt->rdev);
+
+- if (nvt->pkts) {
+- nvt_dbg("Odd, pkts should be 0 now... (its %u)", nvt->pkts);
+- nvt->pkts = 0;
+- }
+-
+ nvt_dbg_verbose("%s done", __func__);
+ }
+
+@@ -1054,7 +1026,6 @@ static int nvt_probe(struct pnp_dev *pdev, const struct pnp_device_id *dev_id)
+
+ spin_lock_init(&nvt->nvt_lock);
+ spin_lock_init(&nvt->tx.lock);
+- init_ir_raw_event(&nvt->rawir);
+
+ ret = -EBUSY;
+ /* now claim resources */
+diff --git a/drivers/media/rc/nuvoton-cir.h b/drivers/media/rc/nuvoton-cir.h
+index 1241fc8..0d5e087 100644
+--- a/drivers/media/rc/nuvoton-cir.h
++++ b/drivers/media/rc/nuvoton-cir.h
+@@ -67,7 +67,6 @@ static int debug;
+ struct nvt_dev {
+ struct pnp_dev *pdev;
+ struct rc_dev *rdev;
+- struct ir_raw_event rawir;
+
+ spinlock_t nvt_lock;
+
+diff --git a/drivers/mfd/omap-usb-host.c b/drivers/mfd/omap-usb-host.c
+index 1717144..e67c3d3 100644
+--- a/drivers/mfd/omap-usb-host.c
++++ b/drivers/mfd/omap-usb-host.c
+@@ -676,7 +676,6 @@ static void usbhs_omap_tll_init(struct device *dev, u8 tll_channel_count)
+ | OMAP_TLL_CHANNEL_CONF_ULPINOBITSTUFF
+ | OMAP_TLL_CHANNEL_CONF_ULPIDDRMODE);
+
+- reg |= (1 << (i + 1));
+ } else
+ continue;
+
+diff --git a/drivers/mfd/tps65910-irq.c b/drivers/mfd/tps65910-irq.c
+index 2bfad5c..a56be93 100644
+--- a/drivers/mfd/tps65910-irq.c
++++ b/drivers/mfd/tps65910-irq.c
+@@ -178,8 +178,10 @@ int tps65910_irq_init(struct tps65910 *tps65910, int irq,
+ switch (tps65910_chip_id(tps65910)) {
+ case TPS65910:
+ tps65910->irq_num = TPS65910_NUM_IRQ;
++ break;
+ case TPS65911:
+ tps65910->irq_num = TPS65911_NUM_IRQ;
++ break;
+ }
+
+ /* Register with genirq */
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index 7843efe..38089b2 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -132,7 +132,7 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq)
+ if (mrq->done)
+ mrq->done(mrq);
+
+- mmc_host_clk_gate(host);
++ mmc_host_clk_release(host);
+ }
+ }
+
+@@ -191,7 +191,7 @@ mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
+ mrq->stop->mrq = mrq;
+ }
+ }
+- mmc_host_clk_ungate(host);
++ mmc_host_clk_hold(host);
+ led_trigger_event(host->led, LED_FULL);
+ host->ops->request(host, mrq);
+ }
+@@ -634,15 +634,17 @@ static inline void mmc_set_ios(struct mmc_host *host)
+ */
+ void mmc_set_chip_select(struct mmc_host *host, int mode)
+ {
++ mmc_host_clk_hold(host);
+ host->ios.chip_select = mode;
+ mmc_set_ios(host);
++ mmc_host_clk_release(host);
+ }
+
+ /*
+ * Sets the host clock to the highest possible frequency that
+ * is below "hz".
+ */
+-void mmc_set_clock(struct mmc_host *host, unsigned int hz)
++static void __mmc_set_clock(struct mmc_host *host, unsigned int hz)
+ {
+ WARN_ON(hz < host->f_min);
+
+@@ -653,6 +655,13 @@ void mmc_set_clock(struct mmc_host *host, unsigned int hz)
+ mmc_set_ios(host);
+ }
+
++void mmc_set_clock(struct mmc_host *host, unsigned int hz)
++{
++ mmc_host_clk_hold(host);
++ __mmc_set_clock(host, hz);
++ mmc_host_clk_release(host);
++}
++
+ #ifdef CONFIG_MMC_CLKGATE
+ /*
+ * This gates the clock by setting it to 0 Hz.
+@@ -685,7 +694,7 @@ void mmc_ungate_clock(struct mmc_host *host)
+ if (host->clk_old) {
+ BUG_ON(host->ios.clock);
+ /* This call will also set host->clk_gated to false */
+- mmc_set_clock(host, host->clk_old);
++ __mmc_set_clock(host, host->clk_old);
+ }
+ }
+
+@@ -713,8 +722,10 @@ void mmc_set_ungated(struct mmc_host *host)
+ */
+ void mmc_set_bus_mode(struct mmc_host *host, unsigned int mode)
+ {
++ mmc_host_clk_hold(host);
+ host->ios.bus_mode = mode;
+ mmc_set_ios(host);
++ mmc_host_clk_release(host);
+ }
+
+ /*
+@@ -722,8 +733,10 @@ void mmc_set_bus_mode(struct mmc_host *host, unsigned int mode)
+ */
+ void mmc_set_bus_width(struct mmc_host *host, unsigned int width)
+ {
++ mmc_host_clk_hold(host);
+ host->ios.bus_width = width;
+ mmc_set_ios(host);
++ mmc_host_clk_release(host);
+ }
+
+ /**
+@@ -921,8 +934,10 @@ u32 mmc_select_voltage(struct mmc_host *host, u32 ocr)
+
+ ocr &= 3 << bit;
+
++ mmc_host_clk_hold(host);
+ host->ios.vdd = bit;
+ mmc_set_ios(host);
++ mmc_host_clk_release(host);
+ } else {
+ pr_warning("%s: host doesn't support card's voltages\n",
+ mmc_hostname(host));
+@@ -969,8 +984,10 @@ int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage, bool cmd11
+ */
+ void mmc_set_timing(struct mmc_host *host, unsigned int timing)
+ {
++ mmc_host_clk_hold(host);
+ host->ios.timing = timing;
+ mmc_set_ios(host);
++ mmc_host_clk_release(host);
+ }
+
+ /*
+@@ -978,8 +995,10 @@ void mmc_set_timing(struct mmc_host *host, unsigned int timing)
+ */
+ void mmc_set_driver_type(struct mmc_host *host, unsigned int drv_type)
+ {
++ mmc_host_clk_hold(host);
+ host->ios.drv_type = drv_type;
+ mmc_set_ios(host);
++ mmc_host_clk_release(host);
+ }
+
+ /*
+@@ -997,6 +1016,8 @@ static void mmc_power_up(struct mmc_host *host)
+ {
+ int bit;
+
++ mmc_host_clk_hold(host);
++
+ /* If ocr is set, we use it */
+ if (host->ocr)
+ bit = ffs(host->ocr) - 1;
+@@ -1032,10 +1053,14 @@ static void mmc_power_up(struct mmc_host *host)
+ * time required to reach a stable voltage.
+ */
+ mmc_delay(10);
++
++ mmc_host_clk_release(host);
+ }
+
+ static void mmc_power_off(struct mmc_host *host)
+ {
++ mmc_host_clk_hold(host);
++
+ host->ios.clock = 0;
+ host->ios.vdd = 0;
+
+@@ -1053,6 +1078,8 @@ static void mmc_power_off(struct mmc_host *host)
+ host->ios.bus_width = MMC_BUS_WIDTH_1;
+ host->ios.timing = MMC_TIMING_LEGACY;
+ mmc_set_ios(host);
++
++ mmc_host_clk_release(host);
+ }
+
+ /*
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index b29d3e8..793d0a0 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -119,14 +119,14 @@ static void mmc_host_clk_gate_work(struct work_struct *work)
+ }
+
+ /**
+- * mmc_host_clk_ungate - ungate hardware MCI clocks
++ * mmc_host_clk_hold - ungate hardware MCI clocks
+ * @host: host to ungate.
+ *
+ * Makes sure the host ios.clock is restored to a non-zero value
+ * past this call. Increase clock reference count and ungate clock
+ * if we're the first user.
+ */
+-void mmc_host_clk_ungate(struct mmc_host *host)
++void mmc_host_clk_hold(struct mmc_host *host)
+ {
+ unsigned long flags;
+
+@@ -164,14 +164,14 @@ static bool mmc_host_may_gate_card(struct mmc_card *card)
+ }
+
+ /**
+- * mmc_host_clk_gate - gate off hardware MCI clocks
++ * mmc_host_clk_release - gate off hardware MCI clocks
+ * @host: host to gate.
+ *
+ * Calls the host driver with ios.clock set to zero as often as possible
+ * in order to gate off hardware MCI clocks. Decrease clock reference
+ * count and schedule disabling of clock.
+ */
+-void mmc_host_clk_gate(struct mmc_host *host)
++void mmc_host_clk_release(struct mmc_host *host)
+ {
+ unsigned long flags;
+
+@@ -179,7 +179,7 @@ void mmc_host_clk_gate(struct mmc_host *host)
+ host->clk_requests--;
+ if (mmc_host_may_gate_card(host->card) &&
+ !host->clk_requests)
+- schedule_work(&host->clk_gate_work);
++ queue_work(system_nrt_wq, &host->clk_gate_work);
+ spin_unlock_irqrestore(&host->clk_lock, flags);
+ }
+
+@@ -231,7 +231,7 @@ static inline void mmc_host_clk_exit(struct mmc_host *host)
+ if (cancel_work_sync(&host->clk_gate_work))
+ mmc_host_clk_gate_delayed(host);
+ if (host->clk_gated)
+- mmc_host_clk_ungate(host);
++ mmc_host_clk_hold(host);
+ /* There should be only one user now */
+ WARN_ON(host->clk_requests > 1);
+ }
+diff --git a/drivers/mmc/core/host.h b/drivers/mmc/core/host.h
+index de199f9..fb8a5cd 100644
+--- a/drivers/mmc/core/host.h
++++ b/drivers/mmc/core/host.h
+@@ -16,16 +16,16 @@ int mmc_register_host_class(void);
+ void mmc_unregister_host_class(void);
+
+ #ifdef CONFIG_MMC_CLKGATE
+-void mmc_host_clk_ungate(struct mmc_host *host);
+-void mmc_host_clk_gate(struct mmc_host *host);
++void mmc_host_clk_hold(struct mmc_host *host);
++void mmc_host_clk_release(struct mmc_host *host);
+ unsigned int mmc_host_clk_rate(struct mmc_host *host);
+
+ #else
+-static inline void mmc_host_clk_ungate(struct mmc_host *host)
++static inline void mmc_host_clk_hold(struct mmc_host *host)
+ {
+ }
+
+-static inline void mmc_host_clk_gate(struct mmc_host *host)
++static inline void mmc_host_clk_release(struct mmc_host *host)
+ {
+ }
+
+diff --git a/drivers/mmc/host/sdhci-s3c.c b/drivers/mmc/host/sdhci-s3c.c
+index 69e3ee3..8cd999f 100644
+--- a/drivers/mmc/host/sdhci-s3c.c
++++ b/drivers/mmc/host/sdhci-s3c.c
+@@ -301,6 +301,8 @@ static int sdhci_s3c_platform_8bit_width(struct sdhci_host *host, int width)
+ ctrl &= ~SDHCI_CTRL_8BITBUS;
+ break;
+ default:
++ ctrl &= ~SDHCI_CTRL_4BITBUS;
++ ctrl &= ~SDHCI_CTRL_8BITBUS;
+ break;
+ }
+
+diff --git a/drivers/net/bnx2.c b/drivers/net/bnx2.c
+index 57d3293..74580bb 100644
+--- a/drivers/net/bnx2.c
++++ b/drivers/net/bnx2.c
+@@ -416,6 +416,9 @@ struct cnic_eth_dev *bnx2_cnic_probe(struct net_device *dev)
+ struct bnx2 *bp = netdev_priv(dev);
+ struct cnic_eth_dev *cp = &bp->cnic_eth_dev;
+
++ if (!cp->max_iscsi_conn)
++ return NULL;
++
+ cp->drv_owner = THIS_MODULE;
+ cp->chip_id = bp->chip_id;
+ cp->pdev = bp->pdev;
+@@ -8177,6 +8180,10 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
+ bp->timer.data = (unsigned long) bp;
+ bp->timer.function = bnx2_timer;
+
++#ifdef BCM_CNIC
++ bp->cnic_eth_dev.max_iscsi_conn =
++ bnx2_reg_rd_ind(bp, BNX2_FW_MAX_ISCSI_CONN);
++#endif
+ pci_save_state(pdev);
+
+ return 0;
+diff --git a/drivers/net/bnx2x/bnx2x_dcb.c b/drivers/net/bnx2x/bnx2x_dcb.c
+index 410a49e..d11af7c 100644
+--- a/drivers/net/bnx2x/bnx2x_dcb.c
++++ b/drivers/net/bnx2x/bnx2x_dcb.c
+@@ -1858,6 +1858,7 @@ static u8 bnx2x_dcbnl_get_cap(struct net_device *netdev, int capid, u8 *cap)
+ break;
+ case DCB_CAP_ATTR_DCBX:
+ *cap = BNX2X_DCBX_CAPS;
++ break;
+ default:
+ rval = -EINVAL;
+ break;
+diff --git a/drivers/net/bnx2x/bnx2x_main.c b/drivers/net/bnx2x/bnx2x_main.c
+index 74be989..04976db 100644
+--- a/drivers/net/bnx2x/bnx2x_main.c
++++ b/drivers/net/bnx2x/bnx2x_main.c
+@@ -4138,7 +4138,7 @@ static void bnx2x_init_def_sb(struct bnx2x *bp)
+ int igu_seg_id;
+ int port = BP_PORT(bp);
+ int func = BP_FUNC(bp);
+- int reg_offset;
++ int reg_offset, reg_offset_en5;
+ u64 section;
+ int index;
+ struct hc_sp_status_block_data sp_sb_data;
+@@ -4161,6 +4161,8 @@ static void bnx2x_init_def_sb(struct bnx2x *bp)
+
+ reg_offset = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_0 :
+ MISC_REG_AEU_ENABLE1_FUNC_0_OUT_0);
++ reg_offset_en5 = (port ? MISC_REG_AEU_ENABLE5_FUNC_1_OUT_0 :
++ MISC_REG_AEU_ENABLE5_FUNC_0_OUT_0);
+ for (index = 0; index < MAX_DYNAMIC_ATTN_GRPS; index++) {
+ int sindex;
+ /* take care of sig[0]..sig[4] */
+@@ -4175,7 +4177,7 @@ static void bnx2x_init_def_sb(struct bnx2x *bp)
+ * and not 16 between the different groups
+ */
+ bp->attn_group[index].sig[4] = REG_RD(bp,
+- reg_offset + 0x10 + 0x4*index);
++ reg_offset_en5 + 0x4*index);
+ else
+ bp->attn_group[index].sig[4] = 0;
+ }
+diff --git a/drivers/net/bnx2x/bnx2x_reg.h b/drivers/net/bnx2x/bnx2x_reg.h
+index 86bba25..0380b3a 100644
+--- a/drivers/net/bnx2x/bnx2x_reg.h
++++ b/drivers/net/bnx2x/bnx2x_reg.h
+@@ -1325,6 +1325,18 @@
+ Latched ump_tx_parity; [31] MCP Latched scpad_parity; */
+ #define MISC_REG_AEU_ENABLE4_PXP_0 0xa108
+ #define MISC_REG_AEU_ENABLE4_PXP_1 0xa1a8
++/* [RW 32] fifth 32b for enabling the output for function 0 output0. Mapped
++ * as follows: [0] PGLUE config_space; [1] PGLUE misc_flr; [2] PGLUE B RBC
++ * attention [3] PGLUE B RBC parity; [4] ATC attention; [5] ATC parity; [6]
++ * mstat0 attention; [7] mstat0 parity; [8] mstat1 attention; [9] mstat1
++ * parity; [31-10] Reserved; */
++#define MISC_REG_AEU_ENABLE5_FUNC_0_OUT_0 0xa688
++/* [RW 32] Fifth 32b for enabling the output for function 1 output0. Mapped
++ * as follows: [0] PGLUE config_space; [1] PGLUE misc_flr; [2] PGLUE B RBC
++ * attention [3] PGLUE B RBC parity; [4] ATC attention; [5] ATC parity; [6]
++ * mstat0 attention; [7] mstat0 parity; [8] mstat1 attention; [9] mstat1
++ * parity; [31-10] Reserved; */
++#define MISC_REG_AEU_ENABLE5_FUNC_1_OUT_0 0xa6b0
+ /* [RW 1] set/clr general attention 0; this will set/clr bit 94 in the aeu
+ 128 bit vector */
+ #define MISC_REG_AEU_GENERAL_ATTN_0 0xa000
+diff --git a/drivers/net/cnic.c b/drivers/net/cnic.c
+index 11a92af..363c7f3 100644
+--- a/drivers/net/cnic.c
++++ b/drivers/net/cnic.c
+@@ -605,11 +605,12 @@ static int cnic_unregister_device(struct cnic_dev *dev, int ulp_type)
+ }
+ EXPORT_SYMBOL(cnic_unregister_driver);
+
+-static int cnic_init_id_tbl(struct cnic_id_tbl *id_tbl, u32 size, u32 start_id)
++static int cnic_init_id_tbl(struct cnic_id_tbl *id_tbl, u32 size, u32 start_id,
++ u32 next)
+ {
+ id_tbl->start = start_id;
+ id_tbl->max = size;
+- id_tbl->next = 0;
++ id_tbl->next = next;
+ spin_lock_init(&id_tbl->lock);
+ id_tbl->table = kzalloc(DIV_ROUND_UP(size, 32) * 4, GFP_KERNEL);
+ if (!id_tbl->table)
+@@ -2778,13 +2779,10 @@ static u32 cnic_service_bnx2_queues(struct cnic_dev *dev)
+
+ /* Tell compiler that status_blk fields can change. */
+ barrier();
+- if (status_idx != *cp->kcq1.status_idx_ptr) {
+- status_idx = (u16) *cp->kcq1.status_idx_ptr;
+- /* status block index must be read first */
+- rmb();
+- cp->kwq_con_idx = *cp->kwq_con_idx_ptr;
+- } else
+- break;
++ status_idx = (u16) *cp->kcq1.status_idx_ptr;
++ /* status block index must be read first */
++ rmb();
++ cp->kwq_con_idx = *cp->kwq_con_idx_ptr;
+ }
+
+ CNIC_WR16(dev, cp->kcq1.io_addr, cp->kcq1.sw_prod_idx);
+@@ -2908,8 +2906,6 @@ static u32 cnic_service_bnx2x_kcq(struct cnic_dev *dev, struct kcq_info *info)
+
+ /* Tell compiler that sblk fields can change. */
+ barrier();
+- if (last_status == *info->status_idx_ptr)
+- break;
+
+ last_status = *info->status_idx_ptr;
+ /* status block index must be read before reading the KCQ */
+@@ -3772,7 +3768,13 @@ static void cnic_cm_process_kcqe(struct cnic_dev *dev, struct kcqe *kcqe)
+ break;
+
+ case L4_KCQE_OPCODE_VALUE_CLOSE_RECEIVED:
+- cnic_cm_upcall(cp, csk, opcode);
++ /* after we already sent CLOSE_REQ */
++ if (test_bit(CNIC_F_BNX2X_CLASS, &dev->flags) &&
++ !test_bit(SK_F_OFFLD_COMPLETE, &csk->flags) &&
++ csk->state == L4_KCQE_OPCODE_VALUE_CLOSE_COMP)
++ cp->close_conn(csk, L4_KCQE_OPCODE_VALUE_RESET_COMP);
++ else
++ cnic_cm_upcall(cp, csk, opcode);
+ break;
+ }
+ csk_put(csk);
+@@ -3803,14 +3805,17 @@ static void cnic_cm_free_mem(struct cnic_dev *dev)
+ static int cnic_cm_alloc_mem(struct cnic_dev *dev)
+ {
+ struct cnic_local *cp = dev->cnic_priv;
++ u32 port_id;
+
+ cp->csk_tbl = kzalloc(sizeof(struct cnic_sock) * MAX_CM_SK_TBL_SZ,
+ GFP_KERNEL);
+ if (!cp->csk_tbl)
+ return -ENOMEM;
+
++ get_random_bytes(&port_id, sizeof(port_id));
++ port_id %= CNIC_LOCAL_PORT_RANGE;
+ if (cnic_init_id_tbl(&cp->csk_port_tbl, CNIC_LOCAL_PORT_RANGE,
+- CNIC_LOCAL_PORT_MIN)) {
++ CNIC_LOCAL_PORT_MIN, port_id)) {
+ cnic_cm_free_mem(dev);
+ return -ENOMEM;
+ }
+@@ -3826,12 +3831,14 @@ static int cnic_ready_to_close(struct cnic_sock *csk, u32 opcode)
+ }
+
+ /* 1. If event opcode matches the expected event in csk->state
+- * 2. If the expected event is CLOSE_COMP, we accept any event
++ * 2. If the expected event is CLOSE_COMP or RESET_COMP, we accept any
++ * event
+ * 3. If the expected event is 0, meaning the connection was never
+ * never established, we accept the opcode from cm_abort.
+ */
+ if (opcode == csk->state || csk->state == 0 ||
+- csk->state == L4_KCQE_OPCODE_VALUE_CLOSE_COMP) {
++ csk->state == L4_KCQE_OPCODE_VALUE_CLOSE_COMP ||
++ csk->state == L4_KCQE_OPCODE_VALUE_RESET_COMP) {
+ if (!test_and_set_bit(SK_F_CLOSING, &csk->flags)) {
+ if (csk->state == 0)
+ csk->state = opcode;
+@@ -4218,14 +4225,6 @@ static void cnic_enable_bnx2_int(struct cnic_dev *dev)
+ BNX2_PCICFG_INT_ACK_CMD_INDEX_VALID | cp->last_status_idx);
+ }
+
+-static void cnic_get_bnx2_iscsi_info(struct cnic_dev *dev)
+-{
+- u32 max_conn;
+-
+- max_conn = cnic_reg_rd_ind(dev, BNX2_FW_MAX_ISCSI_CONN);
+- dev->max_iscsi_conn = max_conn;
+-}
+-
+ static void cnic_disable_bnx2_int_sync(struct cnic_dev *dev)
+ {
+ struct cnic_local *cp = dev->cnic_priv;
+@@ -4550,8 +4549,6 @@ static int cnic_start_bnx2_hw(struct cnic_dev *dev)
+ return err;
+ }
+
+- cnic_get_bnx2_iscsi_info(dev);
+-
+ return 0;
+ }
+
+@@ -4826,7 +4823,7 @@ static int cnic_start_bnx2x_hw(struct cnic_dev *dev)
+ pfid = cp->pfid;
+
+ ret = cnic_init_id_tbl(&cp->cid_tbl, MAX_ISCSI_TBL_SZ,
+- cp->iscsi_start_cid);
++ cp->iscsi_start_cid, 0);
+
+ if (ret)
+ return -ENOMEM;
+@@ -4834,7 +4831,7 @@ static int cnic_start_bnx2x_hw(struct cnic_dev *dev)
+ if (BNX2X_CHIP_IS_E2(cp->chip_id)) {
+ ret = cnic_init_id_tbl(&cp->fcoe_cid_tbl,
+ BNX2X_FCOE_NUM_CONNECTIONS,
+- cp->fcoe_start_cid);
++ cp->fcoe_start_cid, 0);
+
+ if (ret)
+ return -ENOMEM;
+@@ -5217,6 +5214,8 @@ static struct cnic_dev *init_bnx2_cnic(struct net_device *dev)
+ cdev->pcidev = pdev;
+ cp->chip_id = ethdev->chip_id;
+
++ cdev->max_iscsi_conn = ethdev->max_iscsi_conn;
++
+ cp->cnic_ops = &cnic_bnx2_ops;
+ cp->start_hw = cnic_start_bnx2_hw;
+ cp->stop_hw = cnic_stop_bnx2_hw;
+@@ -5335,7 +5334,7 @@ static int cnic_netdev_event(struct notifier_block *this, unsigned long event,
+
+ dev = cnic_from_netdev(netdev);
+
+- if (!dev && (event == NETDEV_REGISTER || event == NETDEV_UP)) {
++ if (!dev && (event == NETDEV_REGISTER || netif_running(netdev))) {
+ /* Check for the hot-plug device */
+ dev = is_cnic_dev(netdev);
+ if (dev) {
+@@ -5351,7 +5350,7 @@ static int cnic_netdev_event(struct notifier_block *this, unsigned long event,
+ else if (event == NETDEV_UNREGISTER)
+ cnic_ulp_exit(dev);
+
+- if (event == NETDEV_UP) {
++ if (event == NETDEV_UP || (new_dev && netif_running(netdev))) {
+ if (cnic_register_netdev(dev) != 0) {
+ cnic_put(dev);
+ goto done;
+diff --git a/drivers/net/cxgb3/cxgb3_offload.c b/drivers/net/cxgb3/cxgb3_offload.c
+index 862804f..3f2e12c 100644
+--- a/drivers/net/cxgb3/cxgb3_offload.c
++++ b/drivers/net/cxgb3/cxgb3_offload.c
+@@ -1149,12 +1149,14 @@ static void cxgb_redirect(struct dst_entry *old, struct dst_entry *new)
+ if (te && te->ctx && te->client && te->client->redirect) {
+ update_tcb = te->client->redirect(te->ctx, old, new, e);
+ if (update_tcb) {
++ rcu_read_lock();
+ l2t_hold(L2DATA(tdev), e);
++ rcu_read_unlock();
+ set_l2t_ix(tdev, tid, e);
+ }
+ }
+ }
+- l2t_release(L2DATA(tdev), e);
++ l2t_release(tdev, e);
+ }
+
+ /*
+@@ -1267,7 +1269,7 @@ int cxgb3_offload_activate(struct adapter *adapter)
+ goto out_free;
+
+ err = -ENOMEM;
+- L2DATA(dev) = t3_init_l2t(l2t_capacity);
++ RCU_INIT_POINTER(dev->l2opt, t3_init_l2t(l2t_capacity));
+ if (!L2DATA(dev))
+ goto out_free;
+
+@@ -1301,16 +1303,24 @@ int cxgb3_offload_activate(struct adapter *adapter)
+
+ out_free_l2t:
+ t3_free_l2t(L2DATA(dev));
+- L2DATA(dev) = NULL;
++ rcu_assign_pointer(dev->l2opt, NULL);
+ out_free:
+ kfree(t);
+ return err;
+ }
+
++static void clean_l2_data(struct rcu_head *head)
++{
++ struct l2t_data *d = container_of(head, struct l2t_data, rcu_head);
++ t3_free_l2t(d);
++}
++
++
+ void cxgb3_offload_deactivate(struct adapter *adapter)
+ {
+ struct t3cdev *tdev = &adapter->tdev;
+ struct t3c_data *t = T3C_DATA(tdev);
++ struct l2t_data *d;
+
+ remove_adapter(adapter);
+ if (list_empty(&adapter_list))
+@@ -1318,8 +1328,11 @@ void cxgb3_offload_deactivate(struct adapter *adapter)
+
+ free_tid_maps(&t->tid_maps);
+ T3C_DATA(tdev) = NULL;
+- t3_free_l2t(L2DATA(tdev));
+- L2DATA(tdev) = NULL;
++ rcu_read_lock();
++ d = L2DATA(tdev);
++ rcu_read_unlock();
++ rcu_assign_pointer(tdev->l2opt, NULL);
++ call_rcu(&d->rcu_head, clean_l2_data);
+ if (t->nofail_skb)
+ kfree_skb(t->nofail_skb);
+ kfree(t);
+diff --git a/drivers/net/cxgb3/l2t.c b/drivers/net/cxgb3/l2t.c
+index f452c40..4154097 100644
+--- a/drivers/net/cxgb3/l2t.c
++++ b/drivers/net/cxgb3/l2t.c
+@@ -300,14 +300,21 @@ static inline void reuse_entry(struct l2t_entry *e, struct neighbour *neigh)
+ struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct neighbour *neigh,
+ struct net_device *dev)
+ {
+- struct l2t_entry *e;
+- struct l2t_data *d = L2DATA(cdev);
++ struct l2t_entry *e = NULL;
++ struct l2t_data *d;
++ int hash;
+ u32 addr = *(u32 *) neigh->primary_key;
+ int ifidx = neigh->dev->ifindex;
+- int hash = arp_hash(addr, ifidx, d);
+ struct port_info *p = netdev_priv(dev);
+ int smt_idx = p->port_id;
+
++ rcu_read_lock();
++ d = L2DATA(cdev);
++ if (!d)
++ goto done_rcu;
++
++ hash = arp_hash(addr, ifidx, d);
++
+ write_lock_bh(&d->lock);
+ for (e = d->l2tab[hash].first; e; e = e->next)
+ if (e->addr == addr && e->ifindex == ifidx &&
+@@ -338,6 +345,8 @@ struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct neighbour *neigh,
+ }
+ done:
+ write_unlock_bh(&d->lock);
++done_rcu:
++ rcu_read_unlock();
+ return e;
+ }
+
+diff --git a/drivers/net/cxgb3/l2t.h b/drivers/net/cxgb3/l2t.h
+index fd3eb07..c4dd066 100644
+--- a/drivers/net/cxgb3/l2t.h
++++ b/drivers/net/cxgb3/l2t.h
+@@ -76,6 +76,7 @@ struct l2t_data {
+ atomic_t nfree; /* number of free entries */
+ rwlock_t lock;
+ struct l2t_entry l2tab[0];
++ struct rcu_head rcu_head; /* to handle rcu cleanup */
+ };
+
+ typedef void (*arp_failure_handler_func)(struct t3cdev * dev,
+@@ -99,7 +100,7 @@ static inline void set_arp_failure_handler(struct sk_buff *skb,
+ /*
+ * Getting to the L2 data from an offload device.
+ */
+-#define L2DATA(dev) ((dev)->l2opt)
++#define L2DATA(cdev) (rcu_dereference((cdev)->l2opt))
+
+ #define W_TCB_L2T_IX 0
+ #define S_TCB_L2T_IX 7
+@@ -126,15 +127,22 @@ static inline int l2t_send(struct t3cdev *dev, struct sk_buff *skb,
+ return t3_l2t_send_slow(dev, skb, e);
+ }
+
+-static inline void l2t_release(struct l2t_data *d, struct l2t_entry *e)
++static inline void l2t_release(struct t3cdev *t, struct l2t_entry *e)
+ {
+- if (atomic_dec_and_test(&e->refcnt))
++ struct l2t_data *d;
++
++ rcu_read_lock();
++ d = L2DATA(t);
++
++ if (atomic_dec_and_test(&e->refcnt) && d)
+ t3_l2e_free(d, e);
++
++ rcu_read_unlock();
+ }
+
+ static inline void l2t_hold(struct l2t_data *d, struct l2t_entry *e)
+ {
+- if (atomic_add_return(1, &e->refcnt) == 1) /* 0 -> 1 transition */
++ if (d && atomic_add_return(1, &e->refcnt) == 1) /* 0 -> 1 transition */
+ atomic_dec(&d->nfree);
+ }
+
+diff --git a/drivers/net/e1000/e1000_hw.c b/drivers/net/e1000/e1000_hw.c
+index 7501d97..f17aaa1 100644
+--- a/drivers/net/e1000/e1000_hw.c
++++ b/drivers/net/e1000/e1000_hw.c
+@@ -4028,6 +4028,12 @@ s32 e1000_validate_eeprom_checksum(struct e1000_hw *hw)
+ checksum += eeprom_data;
+ }
+
++#ifdef CONFIG_PARISC
++ /* This is a signature and not a checksum on HP c8000 */
++ if ((hw->subsystem_vendor_id == 0x103C) && (eeprom_data == 0x16d6))
++ return E1000_SUCCESS;
++
++#endif
+ if (checksum == (u16) EEPROM_SUM)
+ return E1000_SUCCESS;
+ else {
+diff --git a/drivers/net/ibmveth.c b/drivers/net/ibmveth.c
+index b388d78..145c924 100644
+--- a/drivers/net/ibmveth.c
++++ b/drivers/net/ibmveth.c
+@@ -394,7 +394,7 @@ static inline struct sk_buff *ibmveth_rxq_get_buffer(struct ibmveth_adapter *ada
+ }
+
+ /* recycle the current buffer on the rx queue */
+-static void ibmveth_rxq_recycle_buffer(struct ibmveth_adapter *adapter)
++static int ibmveth_rxq_recycle_buffer(struct ibmveth_adapter *adapter)
+ {
+ u32 q_index = adapter->rx_queue.index;
+ u64 correlator = adapter->rx_queue.queue_addr[q_index].correlator;
+@@ -402,6 +402,7 @@ static void ibmveth_rxq_recycle_buffer(struct ibmveth_adapter *adapter)
+ unsigned int index = correlator & 0xffffffffUL;
+ union ibmveth_buf_desc desc;
+ unsigned long lpar_rc;
++ int ret = 1;
+
+ BUG_ON(pool >= IBMVETH_NUM_BUFF_POOLS);
+ BUG_ON(index >= adapter->rx_buff_pool[pool].size);
+@@ -409,7 +410,7 @@ static void ibmveth_rxq_recycle_buffer(struct ibmveth_adapter *adapter)
+ if (!adapter->rx_buff_pool[pool].active) {
+ ibmveth_rxq_harvest_buffer(adapter);
+ ibmveth_free_buffer_pool(adapter, &adapter->rx_buff_pool[pool]);
+- return;
++ goto out;
+ }
+
+ desc.fields.flags_len = IBMVETH_BUF_VALID |
+@@ -422,12 +423,16 @@ static void ibmveth_rxq_recycle_buffer(struct ibmveth_adapter *adapter)
+ netdev_dbg(adapter->netdev, "h_add_logical_lan_buffer failed "
+ "during recycle rc=%ld", lpar_rc);
+ ibmveth_remove_buffer_from_pool(adapter, adapter->rx_queue.queue_addr[adapter->rx_queue.index].correlator);
++ ret = 0;
+ }
+
+ if (++adapter->rx_queue.index == adapter->rx_queue.num_slots) {
+ adapter->rx_queue.index = 0;
+ adapter->rx_queue.toggle = !adapter->rx_queue.toggle;
+ }
++
++out:
++ return ret;
+ }
+
+ static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter)
+@@ -806,7 +811,7 @@ static int ibmveth_set_csum_offload(struct net_device *dev, u32 data)
+ } else
+ adapter->fw_ipv6_csum_support = data;
+
+- if (ret != H_SUCCESS || ret6 != H_SUCCESS)
++ if (ret == H_SUCCESS || ret6 == H_SUCCESS)
+ adapter->rx_csum = data;
+ else
+ rc1 = -EIO;
+@@ -924,6 +929,7 @@ static netdev_tx_t ibmveth_start_xmit(struct sk_buff *skb,
+ union ibmveth_buf_desc descs[6];
+ int last, i;
+ int force_bounce = 0;
++ dma_addr_t dma_addr;
+
+ /*
+ * veth handles a maximum of 6 segments including the header, so
+@@ -988,17 +994,16 @@ retry_bounce:
+ }
+
+ /* Map the header */
+- descs[0].fields.address = dma_map_single(&adapter->vdev->dev, skb->data,
+- skb_headlen(skb),
+- DMA_TO_DEVICE);
+- if (dma_mapping_error(&adapter->vdev->dev, descs[0].fields.address))
++ dma_addr = dma_map_single(&adapter->vdev->dev, skb->data,
++ skb_headlen(skb), DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->vdev->dev, dma_addr))
+ goto map_failed;
+
+ descs[0].fields.flags_len = desc_flags | skb_headlen(skb);
++ descs[0].fields.address = dma_addr;
+
+ /* Map the frags */
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+- unsigned long dma_addr;
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+ dma_addr = dma_map_page(&adapter->vdev->dev, frag->page,
+@@ -1020,7 +1025,12 @@ retry_bounce:
+ netdev->stats.tx_bytes += skb->len;
+ }
+
+- for (i = 0; i < skb_shinfo(skb)->nr_frags + 1; i++)
++ dma_unmap_single(&adapter->vdev->dev,
++ descs[0].fields.address,
++ descs[0].fields.flags_len & IBMVETH_BUF_LEN_MASK,
++ DMA_TO_DEVICE);
++
++ for (i = 1; i < skb_shinfo(skb)->nr_frags + 1; i++)
+ dma_unmap_page(&adapter->vdev->dev, descs[i].fields.address,
+ descs[i].fields.flags_len & IBMVETH_BUF_LEN_MASK,
+ DMA_TO_DEVICE);
+@@ -1083,8 +1093,9 @@ restart_poll:
+ if (rx_flush)
+ ibmveth_flush_buffer(skb->data,
+ length + offset);
++ if (!ibmveth_rxq_recycle_buffer(adapter))
++ kfree_skb(skb);
+ skb = new_skb;
+- ibmveth_rxq_recycle_buffer(adapter);
+ } else {
+ ibmveth_rxq_harvest_buffer(adapter);
+ skb_reserve(skb, offset);
+diff --git a/drivers/net/igb/igb_main.c b/drivers/net/igb/igb_main.c
+index 2c28621..97f46ac 100644
+--- a/drivers/net/igb/igb_main.c
++++ b/drivers/net/igb/igb_main.c
+@@ -1985,7 +1985,7 @@ static int __devinit igb_probe(struct pci_dev *pdev,
+
+ if (hw->bus.func == 0)
+ hw->nvm.ops.read(hw, NVM_INIT_CONTROL3_PORT_A, 1, &eeprom_data);
+- else if (hw->mac.type == e1000_82580)
++ else if (hw->mac.type >= e1000_82580)
+ hw->nvm.ops.read(hw, NVM_INIT_CONTROL3_PORT_A +
+ NVM_82580_LAN_FUNC_OFFSET(hw->bus.func), 1,
+ &eeprom_data);
+diff --git a/drivers/net/irda/smsc-ircc2.c b/drivers/net/irda/smsc-ircc2.c
+index 8800e1f..6a4826a 100644
+--- a/drivers/net/irda/smsc-ircc2.c
++++ b/drivers/net/irda/smsc-ircc2.c
+@@ -515,7 +515,7 @@ static const struct net_device_ops smsc_ircc_netdev_ops = {
+ * Try to open driver instance
+ *
+ */
+-static int __init smsc_ircc_open(unsigned int fir_base, unsigned int sir_base, u8 dma, u8 irq)
++static int __devinit smsc_ircc_open(unsigned int fir_base, unsigned int sir_base, u8 dma, u8 irq)
+ {
+ struct smsc_ircc_cb *self;
+ struct net_device *dev;
+diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
+index 08e8e25..83f197d 100644
+--- a/drivers/net/ixgbe/ixgbe_main.c
++++ b/drivers/net/ixgbe/ixgbe_main.c
+@@ -1366,8 +1366,8 @@ static void ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
+ if (ring_is_rsc_enabled(rx_ring))
+ pkt_is_rsc = ixgbe_get_rsc_state(rx_desc);
+
+- /* if this is a skb from previous receive DMA will be 0 */
+- if (rx_buffer_info->dma) {
++ /* linear means we are building an skb from multiple pages */
++ if (!skb_is_nonlinear(skb)) {
+ u16 hlen;
+ if (pkt_is_rsc &&
+ !(staterr & IXGBE_RXD_STAT_EOP) &&
+diff --git a/drivers/net/rionet.c b/drivers/net/rionet.c
+index 5d3436d..ca4694e 100644
+--- a/drivers/net/rionet.c
++++ b/drivers/net/rionet.c
+@@ -80,13 +80,13 @@ static int rionet_capable = 1;
+ */
+ static struct rio_dev **rionet_active;
+
+-#define is_rionet_capable(pef, src_ops, dst_ops) \
+- ((pef & RIO_PEF_INB_MBOX) && \
+- (pef & RIO_PEF_INB_DOORBELL) && \
++#define is_rionet_capable(src_ops, dst_ops) \
++ ((src_ops & RIO_SRC_OPS_DATA_MSG) && \
++ (dst_ops & RIO_DST_OPS_DATA_MSG) && \
+ (src_ops & RIO_SRC_OPS_DOORBELL) && \
+ (dst_ops & RIO_DST_OPS_DOORBELL))
+ #define dev_rionet_capable(dev) \
+- is_rionet_capable(dev->pef, dev->src_ops, dev->dst_ops)
++ is_rionet_capable(dev->src_ops, dev->dst_ops)
+
+ #define RIONET_MAC_MATCH(x) (*(u32 *)x == 0x00010001)
+ #define RIONET_GET_DESTID(x) (*(u16 *)(x + 4))
+@@ -282,7 +282,6 @@ static int rionet_open(struct net_device *ndev)
+ {
+ int i, rc = 0;
+ struct rionet_peer *peer, *tmp;
+- u32 pwdcsr;
+ struct rionet_private *rnet = netdev_priv(ndev);
+
+ if (netif_msg_ifup(rnet))
+@@ -332,13 +331,8 @@ static int rionet_open(struct net_device *ndev)
+ continue;
+ }
+
+- /*
+- * If device has initialized inbound doorbells,
+- * send a join message
+- */
+- rio_read_config_32(peer->rdev, RIO_WRITE_PORT_CSR, &pwdcsr);
+- if (pwdcsr & RIO_DOORBELL_AVAIL)
+- rio_send_doorbell(peer->rdev, RIONET_DOORBELL_JOIN);
++ /* Send a join message */
++ rio_send_doorbell(peer->rdev, RIONET_DOORBELL_JOIN);
+ }
+
+ out:
+@@ -492,7 +486,7 @@ static int rionet_setup_netdev(struct rio_mport *mport, struct net_device *ndev)
+ static int rionet_probe(struct rio_dev *rdev, const struct rio_device_id *id)
+ {
+ int rc = -ENODEV;
+- u32 lpef, lsrc_ops, ldst_ops;
++ u32 lsrc_ops, ldst_ops;
+ struct rionet_peer *peer;
+ struct net_device *ndev = NULL;
+
+@@ -515,12 +509,11 @@ static int rionet_probe(struct rio_dev *rdev, const struct rio_device_id *id)
+ * on later probes
+ */
+ if (!rionet_check) {
+- rio_local_read_config_32(rdev->net->hport, RIO_PEF_CAR, &lpef);
+ rio_local_read_config_32(rdev->net->hport, RIO_SRC_OPS_CAR,
+ &lsrc_ops);
+ rio_local_read_config_32(rdev->net->hport, RIO_DST_OPS_CAR,
+ &ldst_ops);
+- if (!is_rionet_capable(lpef, lsrc_ops, ldst_ops)) {
++ if (!is_rionet_capable(lsrc_ops, ldst_ops)) {
+ printk(KERN_ERR
+ "%s: local device is not network capable\n",
+ DRV_NAME);
+diff --git a/drivers/net/sfc/efx.c b/drivers/net/sfc/efx.c
+index c914729..7d1651b 100644
+--- a/drivers/net/sfc/efx.c
++++ b/drivers/net/sfc/efx.c
+@@ -1051,7 +1051,6 @@ static int efx_init_io(struct efx_nic *efx)
+ {
+ struct pci_dev *pci_dev = efx->pci_dev;
+ dma_addr_t dma_mask = efx->type->max_dma_mask;
+- bool use_wc;
+ int rc;
+
+ netif_dbg(efx, probe, efx->net_dev, "initialising I/O\n");
+@@ -1102,21 +1101,8 @@ static int efx_init_io(struct efx_nic *efx)
+ rc = -EIO;
+ goto fail3;
+ }
+-
+- /* bug22643: If SR-IOV is enabled then tx push over a write combined
+- * mapping is unsafe. We need to disable write combining in this case.
+- * MSI is unsupported when SR-IOV is enabled, and the firmware will
+- * have removed the MSI capability. So write combining is safe if
+- * there is an MSI capability.
+- */
+- use_wc = (!EFX_WORKAROUND_22643(efx) ||
+- pci_find_capability(pci_dev, PCI_CAP_ID_MSI));
+- if (use_wc)
+- efx->membase = ioremap_wc(efx->membase_phys,
+- efx->type->mem_map_size);
+- else
+- efx->membase = ioremap_nocache(efx->membase_phys,
+- efx->type->mem_map_size);
++ efx->membase = ioremap_nocache(efx->membase_phys,
++ efx->type->mem_map_size);
+ if (!efx->membase) {
+ netif_err(efx, probe, efx->net_dev,
+ "could not map memory BAR at %llx+%x\n",
+diff --git a/drivers/net/sfc/io.h b/drivers/net/sfc/io.h
+index cc97880..dc45110 100644
+--- a/drivers/net/sfc/io.h
++++ b/drivers/net/sfc/io.h
+@@ -48,9 +48,9 @@
+ * replacing the low 96 bits with zero does not affect functionality.
+ * - If the host writes to the last dword address of such a register
+ * (i.e. the high 32 bits) the underlying register will always be
+- * written. If the collector and the current write together do not
+- * provide values for all 128 bits of the register, the low 96 bits
+- * will be written as zero.
++ * written. If the collector does not hold values for the low 96
++ * bits of the register, they will be written as zero. Writing to
++ * the last qword does not have this effect and must not be done.
+ * - If the host writes to the address of any other part of such a
+ * register while the collector already holds values for some other
+ * register, the write is discarded and the collector maintains its
+@@ -103,7 +103,6 @@ static inline void efx_writeo(struct efx_nic *efx, efx_oword_t *value,
+ _efx_writed(efx, value->u32[2], reg + 8);
+ _efx_writed(efx, value->u32[3], reg + 12);
+ #endif
+- wmb();
+ mmiowb();
+ spin_unlock_irqrestore(&efx->biu_lock, flags);
+ }
+@@ -126,7 +125,6 @@ static inline void efx_sram_writeq(struct efx_nic *efx, void __iomem *membase,
+ __raw_writel((__force u32)value->u32[0], membase + addr);
+ __raw_writel((__force u32)value->u32[1], membase + addr + 4);
+ #endif
+- wmb();
+ mmiowb();
+ spin_unlock_irqrestore(&efx->biu_lock, flags);
+ }
+@@ -141,7 +139,6 @@ static inline void efx_writed(struct efx_nic *efx, efx_dword_t *value,
+
+ /* No lock required */
+ _efx_writed(efx, value->u32[0], reg);
+- wmb();
+ }
+
+ /* Read a 128-bit CSR, locking as appropriate. */
+@@ -152,7 +149,6 @@ static inline void efx_reado(struct efx_nic *efx, efx_oword_t *value,
+
+ spin_lock_irqsave(&efx->biu_lock, flags);
+ value->u32[0] = _efx_readd(efx, reg + 0);
+- rmb();
+ value->u32[1] = _efx_readd(efx, reg + 4);
+ value->u32[2] = _efx_readd(efx, reg + 8);
+ value->u32[3] = _efx_readd(efx, reg + 12);
+@@ -175,7 +171,6 @@ static inline void efx_sram_readq(struct efx_nic *efx, void __iomem *membase,
+ value->u64[0] = (__force __le64)__raw_readq(membase + addr);
+ #else
+ value->u32[0] = (__force __le32)__raw_readl(membase + addr);
+- rmb();
+ value->u32[1] = (__force __le32)__raw_readl(membase + addr + 4);
+ #endif
+ spin_unlock_irqrestore(&efx->biu_lock, flags);
+@@ -242,14 +237,12 @@ static inline void _efx_writeo_page(struct efx_nic *efx, efx_oword_t *value,
+
+ #ifdef EFX_USE_QWORD_IO
+ _efx_writeq(efx, value->u64[0], reg + 0);
+- _efx_writeq(efx, value->u64[1], reg + 8);
+ #else
+ _efx_writed(efx, value->u32[0], reg + 0);
+ _efx_writed(efx, value->u32[1], reg + 4);
++#endif
+ _efx_writed(efx, value->u32[2], reg + 8);
+ _efx_writed(efx, value->u32[3], reg + 12);
+-#endif
+- wmb();
+ }
+ #define efx_writeo_page(efx, value, reg, page) \
+ _efx_writeo_page(efx, value, \
+diff --git a/drivers/net/sfc/mcdi.c b/drivers/net/sfc/mcdi.c
+index 3dd45ed..81a4253 100644
+--- a/drivers/net/sfc/mcdi.c
++++ b/drivers/net/sfc/mcdi.c
+@@ -50,20 +50,6 @@ static inline struct efx_mcdi_iface *efx_mcdi(struct efx_nic *efx)
+ return &nic_data->mcdi;
+ }
+
+-static inline void
+-efx_mcdi_readd(struct efx_nic *efx, efx_dword_t *value, unsigned reg)
+-{
+- struct siena_nic_data *nic_data = efx->nic_data;
+- value->u32[0] = (__force __le32)__raw_readl(nic_data->mcdi_smem + reg);
+-}
+-
+-static inline void
+-efx_mcdi_writed(struct efx_nic *efx, const efx_dword_t *value, unsigned reg)
+-{
+- struct siena_nic_data *nic_data = efx->nic_data;
+- __raw_writel((__force u32)value->u32[0], nic_data->mcdi_smem + reg);
+-}
+-
+ void efx_mcdi_init(struct efx_nic *efx)
+ {
+ struct efx_mcdi_iface *mcdi;
+@@ -84,8 +70,8 @@ static void efx_mcdi_copyin(struct efx_nic *efx, unsigned cmd,
+ const u8 *inbuf, size_t inlen)
+ {
+ struct efx_mcdi_iface *mcdi = efx_mcdi(efx);
+- unsigned pdu = MCDI_PDU(efx);
+- unsigned doorbell = MCDI_DOORBELL(efx);
++ unsigned pdu = FR_CZ_MC_TREG_SMEM + MCDI_PDU(efx);
++ unsigned doorbell = FR_CZ_MC_TREG_SMEM + MCDI_DOORBELL(efx);
+ unsigned int i;
+ efx_dword_t hdr;
+ u32 xflags, seqno;
+@@ -106,28 +92,29 @@ static void efx_mcdi_copyin(struct efx_nic *efx, unsigned cmd,
+ MCDI_HEADER_SEQ, seqno,
+ MCDI_HEADER_XFLAGS, xflags);
+
+- efx_mcdi_writed(efx, &hdr, pdu);
++ efx_writed(efx, &hdr, pdu);
+
+ for (i = 0; i < inlen; i += 4)
+- efx_mcdi_writed(efx, (const efx_dword_t *)(inbuf + i),
+- pdu + 4 + i);
++ _efx_writed(efx, *((__le32 *)(inbuf + i)), pdu + 4 + i);
++
++ /* Ensure the payload is written out before the header */
++ wmb();
+
+ /* ring the doorbell with a distinctive value */
+- EFX_POPULATE_DWORD_1(hdr, EFX_DWORD_0, 0x45789abc);
+- efx_mcdi_writed(efx, &hdr, doorbell);
++ _efx_writed(efx, (__force __le32) 0x45789abc, doorbell);
+ }
+
+ static void efx_mcdi_copyout(struct efx_nic *efx, u8 *outbuf, size_t outlen)
+ {
+ struct efx_mcdi_iface *mcdi = efx_mcdi(efx);
+- unsigned int pdu = MCDI_PDU(efx);
++ unsigned int pdu = FR_CZ_MC_TREG_SMEM + MCDI_PDU(efx);
+ int i;
+
+ BUG_ON(atomic_read(&mcdi->state) == MCDI_STATE_QUIESCENT);
+ BUG_ON(outlen & 3 || outlen >= 0x100);
+
+ for (i = 0; i < outlen; i += 4)
+- efx_mcdi_readd(efx, (efx_dword_t *)(outbuf + i), pdu + 4 + i);
++ *((__le32 *)(outbuf + i)) = _efx_readd(efx, pdu + 4 + i);
+ }
+
+ static int efx_mcdi_poll(struct efx_nic *efx)
+@@ -135,7 +122,7 @@ static int efx_mcdi_poll(struct efx_nic *efx)
+ struct efx_mcdi_iface *mcdi = efx_mcdi(efx);
+ unsigned int time, finish;
+ unsigned int respseq, respcmd, error;
+- unsigned int pdu = MCDI_PDU(efx);
++ unsigned int pdu = FR_CZ_MC_TREG_SMEM + MCDI_PDU(efx);
+ unsigned int rc, spins;
+ efx_dword_t reg;
+
+@@ -161,7 +148,8 @@ static int efx_mcdi_poll(struct efx_nic *efx)
+
+ time = get_seconds();
+
+- efx_mcdi_readd(efx, ®, pdu);
++ rmb();
++ efx_readd(efx, ®, pdu);
+
+ /* All 1's indicates that shared memory is in reset (and is
+ * not a valid header). Wait for it to come out reset before
+@@ -188,7 +176,7 @@ static int efx_mcdi_poll(struct efx_nic *efx)
+ respseq, mcdi->seqno);
+ rc = EIO;
+ } else if (error) {
+- efx_mcdi_readd(efx, ®, pdu + 4);
++ efx_readd(efx, ®, pdu + 4);
+ switch (EFX_DWORD_FIELD(reg, EFX_DWORD_0)) {
+ #define TRANSLATE_ERROR(name) \
+ case MC_CMD_ERR_ ## name: \
+@@ -222,21 +210,21 @@ out:
+ /* Test and clear MC-rebooted flag for this port/function */
+ int efx_mcdi_poll_reboot(struct efx_nic *efx)
+ {
+- unsigned int addr = MCDI_REBOOT_FLAG(efx);
++ unsigned int addr = FR_CZ_MC_TREG_SMEM + MCDI_REBOOT_FLAG(efx);
+ efx_dword_t reg;
+ uint32_t value;
+
+ if (efx_nic_rev(efx) < EFX_REV_SIENA_A0)
+ return false;
+
+- efx_mcdi_readd(efx, ®, addr);
++ efx_readd(efx, ®, addr);
+ value = EFX_DWORD_FIELD(reg, EFX_DWORD_0);
+
+ if (value == 0)
+ return 0;
+
+ EFX_ZERO_DWORD(reg);
+- efx_mcdi_writed(efx, ®, addr);
++ efx_writed(efx, ®, addr);
+
+ if (value == MC_STATUS_DWORD_ASSERT)
+ return -EINTR;
+diff --git a/drivers/net/sfc/nic.c b/drivers/net/sfc/nic.c
+index f2a2b94..5ac9fa2 100644
+--- a/drivers/net/sfc/nic.c
++++ b/drivers/net/sfc/nic.c
+@@ -1935,13 +1935,6 @@ void efx_nic_get_regs(struct efx_nic *efx, void *buf)
+
+ size = min_t(size_t, table->step, 16);
+
+- if (table->offset >= efx->type->mem_map_size) {
+- /* No longer mapped; return dummy data */
+- memcpy(buf, "\xde\xc0\xad\xde", 4);
+- buf += table->rows * size;
+- continue;
+- }
+-
+ for (i = 0; i < table->rows; i++) {
+ switch (table->step) {
+ case 4: /* 32-bit register or SRAM */
+diff --git a/drivers/net/sfc/nic.h b/drivers/net/sfc/nic.h
+index 4bd1f28..7443f99 100644
+--- a/drivers/net/sfc/nic.h
++++ b/drivers/net/sfc/nic.h
+@@ -143,12 +143,10 @@ static inline struct falcon_board *falcon_board(struct efx_nic *efx)
+ /**
+ * struct siena_nic_data - Siena NIC state
+ * @mcdi: Management-Controller-to-Driver Interface
+- * @mcdi_smem: MCDI shared memory mapping. The mapping is always uncacheable.
+ * @wol_filter_id: Wake-on-LAN packet filter id
+ */
+ struct siena_nic_data {
+ struct efx_mcdi_iface mcdi;
+- void __iomem *mcdi_smem;
+ int wol_filter_id;
+ };
+
+diff --git a/drivers/net/sfc/siena.c b/drivers/net/sfc/siena.c
+index fb4721f..ceac1c9 100644
+--- a/drivers/net/sfc/siena.c
++++ b/drivers/net/sfc/siena.c
+@@ -220,26 +220,12 @@ static int siena_probe_nic(struct efx_nic *efx)
+ efx_reado(efx, ®, FR_AZ_CS_DEBUG);
+ efx->net_dev->dev_id = EFX_OWORD_FIELD(reg, FRF_CZ_CS_PORT_NUM) - 1;
+
+- /* Initialise MCDI */
+- nic_data->mcdi_smem = ioremap_nocache(efx->membase_phys +
+- FR_CZ_MC_TREG_SMEM,
+- FR_CZ_MC_TREG_SMEM_STEP *
+- FR_CZ_MC_TREG_SMEM_ROWS);
+- if (!nic_data->mcdi_smem) {
+- netif_err(efx, probe, efx->net_dev,
+- "could not map MCDI at %llx+%x\n",
+- (unsigned long long)efx->membase_phys +
+- FR_CZ_MC_TREG_SMEM,
+- FR_CZ_MC_TREG_SMEM_STEP * FR_CZ_MC_TREG_SMEM_ROWS);
+- rc = -ENOMEM;
+- goto fail1;
+- }
+ efx_mcdi_init(efx);
+
+ /* Recover from a failed assertion before probing */
+ rc = efx_mcdi_handle_assertion(efx);
+ if (rc)
+- goto fail2;
++ goto fail1;
+
+ /* Let the BMC know that the driver is now in charge of link and
+ * filter settings. We must do this before we reset the NIC */
+@@ -294,7 +280,6 @@ fail4:
+ fail3:
+ efx_mcdi_drv_attach(efx, false, NULL);
+ fail2:
+- iounmap(nic_data->mcdi_smem);
+ fail1:
+ kfree(efx->nic_data);
+ return rc;
+@@ -374,8 +359,6 @@ static int siena_init_nic(struct efx_nic *efx)
+
+ static void siena_remove_nic(struct efx_nic *efx)
+ {
+- struct siena_nic_data *nic_data = efx->nic_data;
+-
+ efx_nic_free_buffer(efx, &efx->irq_status);
+
+ siena_reset_hw(efx, RESET_TYPE_ALL);
+@@ -385,8 +368,7 @@ static void siena_remove_nic(struct efx_nic *efx)
+ efx_mcdi_drv_attach(efx, false, NULL);
+
+ /* Tear down the private nic state */
+- iounmap(nic_data->mcdi_smem);
+- kfree(nic_data);
++ kfree(efx->nic_data);
+ efx->nic_data = NULL;
+ }
+
+@@ -624,7 +606,8 @@ const struct efx_nic_type siena_a0_nic_type = {
+ .default_mac_ops = &efx_mcdi_mac_operations,
+
+ .revision = EFX_REV_SIENA_A0,
+- .mem_map_size = FR_CZ_MC_TREG_SMEM, /* MC_TREG_SMEM mapped separately */
++ .mem_map_size = (FR_CZ_MC_TREG_SMEM +
++ FR_CZ_MC_TREG_SMEM_STEP * FR_CZ_MC_TREG_SMEM_ROWS),
+ .txd_ptr_tbl_base = FR_BZ_TX_DESC_PTR_TBL,
+ .rxd_ptr_tbl_base = FR_BZ_RX_DESC_PTR_TBL,
+ .buf_tbl_base = FR_BZ_BUF_FULL_TBL,
+diff --git a/drivers/net/sfc/workarounds.h b/drivers/net/sfc/workarounds.h
+index 99ff114..e4dd3a7 100644
+--- a/drivers/net/sfc/workarounds.h
++++ b/drivers/net/sfc/workarounds.h
+@@ -38,8 +38,6 @@
+ #define EFX_WORKAROUND_15783 EFX_WORKAROUND_ALWAYS
+ /* Legacy interrupt storm when interrupt fifo fills */
+ #define EFX_WORKAROUND_17213 EFX_WORKAROUND_SIENA
+-/* Write combining and sriov=enabled are incompatible */
+-#define EFX_WORKAROUND_22643 EFX_WORKAROUND_SIENA
+
+ /* Spurious parity errors in TSORT buffers */
+ #define EFX_WORKAROUND_5129 EFX_WORKAROUND_FALCON_A
+diff --git a/drivers/net/tg3.c b/drivers/net/tg3.c
+index a1f9f9e..38f6859 100644
+--- a/drivers/net/tg3.c
++++ b/drivers/net/tg3.c
+@@ -7267,16 +7267,11 @@ static int tg3_chip_reset(struct tg3 *tp)
+ tw32(TG3PCI_CLOCK_CTRL, tp->pci_clock_ctrl);
+ }
+
+- if (tg3_flag(tp, ENABLE_APE))
+- tp->mac_mode = MAC_MODE_APE_TX_EN |
+- MAC_MODE_APE_RX_EN |
+- MAC_MODE_TDE_ENABLE;
+-
+ if (tp->phy_flags & TG3_PHYFLG_PHY_SERDES) {
+- tp->mac_mode |= MAC_MODE_PORT_MODE_TBI;
++ tp->mac_mode = MAC_MODE_PORT_MODE_TBI;
+ val = tp->mac_mode;
+ } else if (tp->phy_flags & TG3_PHYFLG_MII_SERDES) {
+- tp->mac_mode |= MAC_MODE_PORT_MODE_GMII;
++ tp->mac_mode = MAC_MODE_PORT_MODE_GMII;
+ val = tp->mac_mode;
+ } else
+ val = 0;
+@@ -8408,12 +8403,11 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
+ udelay(10);
+ }
+
+- if (tg3_flag(tp, ENABLE_APE))
+- tp->mac_mode = MAC_MODE_APE_TX_EN | MAC_MODE_APE_RX_EN;
+- else
+- tp->mac_mode = 0;
+ tp->mac_mode |= MAC_MODE_TXSTAT_ENABLE | MAC_MODE_RXSTAT_ENABLE |
+- MAC_MODE_TDE_ENABLE | MAC_MODE_RDE_ENABLE | MAC_MODE_FHDE_ENABLE;
++ MAC_MODE_TDE_ENABLE | MAC_MODE_RDE_ENABLE |
++ MAC_MODE_FHDE_ENABLE;
++ if (tg3_flag(tp, ENABLE_APE))
++ tp->mac_mode |= MAC_MODE_APE_TX_EN | MAC_MODE_APE_RX_EN;
+ if (!tg3_flag(tp, 5705_PLUS) &&
+ !(tp->phy_flags & TG3_PHYFLG_PHY_SERDES) &&
+ GET_ASIC_REV(tp->pci_chip_rev_id) != ASIC_REV_5700)
+@@ -8988,7 +8982,7 @@ static int tg3_test_interrupt(struct tg3 *tp)
+ * Turn off MSI one shot mode. Otherwise this test has no
+ * observable way to know whether the interrupt was delivered.
+ */
+- if (tg3_flag(tp, 57765_PLUS) && tg3_flag(tp, USING_MSI)) {
++ if (tg3_flag(tp, 57765_PLUS)) {
+ val = tr32(MSGINT_MODE) | MSGINT_MODE_ONE_SHOT_DISABLE;
+ tw32(MSGINT_MODE, val);
+ }
+@@ -9016,6 +9010,10 @@ static int tg3_test_interrupt(struct tg3 *tp)
+ break;
+ }
+
++ if (tg3_flag(tp, 57765_PLUS) &&
++ tnapi->hw_status->status_tag != tnapi->last_tag)
++ tw32_mailbox_f(tnapi->int_mbox, tnapi->last_tag << 24);
++
+ msleep(10);
+ }
+
+@@ -9030,7 +9028,7 @@ static int tg3_test_interrupt(struct tg3 *tp)
+
+ if (intr_ok) {
+ /* Reenable MSI one shot mode. */
+- if (tg3_flag(tp, 57765_PLUS) && tg3_flag(tp, USING_MSI)) {
++ if (tg3_flag(tp, 57765_PLUS)) {
+ val = tr32(MSGINT_MODE) & ~MSGINT_MODE_ONE_SHOT_DISABLE;
+ tw32(MSGINT_MODE, val);
+ }
+@@ -12947,7 +12945,9 @@ static int __devinit tg3_phy_probe(struct tg3 *tp)
+ }
+
+ if (!(tp->phy_flags & TG3_PHYFLG_ANY_SERDES) &&
+- ((tp->pdev->device == TG3PCI_DEVICE_TIGON3_5718 &&
++ (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719 ||
++ GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5720 ||
++ (tp->pdev->device == TG3PCI_DEVICE_TIGON3_5718 &&
+ tp->pci_chip_rev_id != CHIPREV_ID_5717_A0) ||
+ (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_57765 &&
+ tp->pci_chip_rev_id != CHIPREV_ID_57765_A0)))
+diff --git a/drivers/net/usb/asix.c b/drivers/net/usb/asix.c
+index 6998aa6..5250288 100644
+--- a/drivers/net/usb/asix.c
++++ b/drivers/net/usb/asix.c
+@@ -1502,6 +1502,10 @@ static const struct usb_device_id products [] = {
+ USB_DEVICE (0x04f1, 0x3008),
+ .driver_info = (unsigned long) &ax8817x_info,
+ }, {
++ // ASIX AX88772B 10/100
++ USB_DEVICE (0x0b95, 0x772b),
++ .driver_info = (unsigned long) &ax88772_info,
++}, {
+ // ASIX AX88772 10/100
+ USB_DEVICE (0x0b95, 0x7720),
+ .driver_info = (unsigned long) &ax88772_info,
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index f33ca6a..d3b9e95 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -54,7 +54,7 @@
+ #include <linux/usb/usbnet.h>
+ #include <linux/usb/cdc.h>
+
+-#define DRIVER_VERSION "01-June-2011"
++#define DRIVER_VERSION "04-Aug-2011"
+
+ /* CDC NCM subclass 3.2.1 */
+ #define USB_CDC_NCM_NDP16_LENGTH_MIN 0x10
+@@ -164,35 +164,8 @@ cdc_ncm_get_drvinfo(struct net_device *net, struct ethtool_drvinfo *info)
+ usb_make_path(dev->udev, info->bus_info, sizeof(info->bus_info));
+ }
+
+-static int
+-cdc_ncm_do_request(struct cdc_ncm_ctx *ctx, struct usb_cdc_notification *req,
+- void *data, u16 flags, u16 *actlen, u16 timeout)
+-{
+- int err;
+-
+- err = usb_control_msg(ctx->udev, (req->bmRequestType & USB_DIR_IN) ?
+- usb_rcvctrlpipe(ctx->udev, 0) :
+- usb_sndctrlpipe(ctx->udev, 0),
+- req->bNotificationType, req->bmRequestType,
+- req->wValue,
+- req->wIndex, data,
+- req->wLength, timeout);
+-
+- if (err < 0) {
+- if (actlen)
+- *actlen = 0;
+- return err;
+- }
+-
+- if (actlen)
+- *actlen = err;
+-
+- return 0;
+-}
+-
+ static u8 cdc_ncm_setup(struct cdc_ncm_ctx *ctx)
+ {
+- struct usb_cdc_notification req;
+ u32 val;
+ u8 flags;
+ u8 iface_no;
+@@ -201,14 +174,14 @@ static u8 cdc_ncm_setup(struct cdc_ncm_ctx *ctx)
+
+ iface_no = ctx->control->cur_altsetting->desc.bInterfaceNumber;
+
+- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_IN | USB_RECIP_INTERFACE;
+- req.bNotificationType = USB_CDC_GET_NTB_PARAMETERS;
+- req.wValue = 0;
+- req.wIndex = cpu_to_le16(iface_no);
+- req.wLength = cpu_to_le16(sizeof(ctx->ncm_parm));
+-
+- err = cdc_ncm_do_request(ctx, &req, &ctx->ncm_parm, 0, NULL, 1000);
+- if (err) {
++ err = usb_control_msg(ctx->udev,
++ usb_rcvctrlpipe(ctx->udev, 0),
++ USB_CDC_GET_NTB_PARAMETERS,
++ USB_TYPE_CLASS | USB_DIR_IN
++ | USB_RECIP_INTERFACE,
++ 0, iface_no, &ctx->ncm_parm,
++ sizeof(ctx->ncm_parm), 10000);
++ if (err < 0) {
+ pr_debug("failed GET_NTB_PARAMETERS\n");
+ return 1;
+ }
+@@ -254,31 +227,26 @@ static u8 cdc_ncm_setup(struct cdc_ncm_ctx *ctx)
+
+ /* inform device about NTB input size changes */
+ if (ctx->rx_max != le32_to_cpu(ctx->ncm_parm.dwNtbInMaxSize)) {
+- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+- USB_RECIP_INTERFACE;
+- req.bNotificationType = USB_CDC_SET_NTB_INPUT_SIZE;
+- req.wValue = 0;
+- req.wIndex = cpu_to_le16(iface_no);
+
+ if (flags & USB_CDC_NCM_NCAP_NTB_INPUT_SIZE) {
+ struct usb_cdc_ncm_ndp_input_size ndp_in_sz;
+-
+- req.wLength = 8;
+- ndp_in_sz.dwNtbInMaxSize = cpu_to_le32(ctx->rx_max);
+- ndp_in_sz.wNtbInMaxDatagrams =
+- cpu_to_le16(CDC_NCM_DPT_DATAGRAMS_MAX);
+- ndp_in_sz.wReserved = 0;
+- err = cdc_ncm_do_request(ctx, &req, &ndp_in_sz, 0, NULL,
+- 1000);
++ err = usb_control_msg(ctx->udev,
++ usb_sndctrlpipe(ctx->udev, 0),
++ USB_CDC_SET_NTB_INPUT_SIZE,
++ USB_TYPE_CLASS | USB_DIR_OUT
++ | USB_RECIP_INTERFACE,
++ 0, iface_no, &ndp_in_sz, 8, 1000);
+ } else {
+ __le32 dwNtbInMaxSize = cpu_to_le32(ctx->rx_max);
+-
+- req.wLength = 4;
+- err = cdc_ncm_do_request(ctx, &req, &dwNtbInMaxSize, 0,
+- NULL, 1000);
++ err = usb_control_msg(ctx->udev,
++ usb_sndctrlpipe(ctx->udev, 0),
++ USB_CDC_SET_NTB_INPUT_SIZE,
++ USB_TYPE_CLASS | USB_DIR_OUT
++ | USB_RECIP_INTERFACE,
++ 0, iface_no, &dwNtbInMaxSize, 4, 1000);
+ }
+
+- if (err)
++ if (err < 0)
+ pr_debug("Setting NTB Input Size failed\n");
+ }
+
+@@ -333,29 +301,24 @@ static u8 cdc_ncm_setup(struct cdc_ncm_ctx *ctx)
+
+ /* set CRC Mode */
+ if (flags & USB_CDC_NCM_NCAP_CRC_MODE) {
+- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+- USB_RECIP_INTERFACE;
+- req.bNotificationType = USB_CDC_SET_CRC_MODE;
+- req.wValue = cpu_to_le16(USB_CDC_NCM_CRC_NOT_APPENDED);
+- req.wIndex = cpu_to_le16(iface_no);
+- req.wLength = 0;
+-
+- err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
+- if (err)
++ err = usb_control_msg(ctx->udev, usb_sndctrlpipe(ctx->udev, 0),
++ USB_CDC_SET_CRC_MODE,
++ USB_TYPE_CLASS | USB_DIR_OUT
++ | USB_RECIP_INTERFACE,
++ USB_CDC_NCM_CRC_NOT_APPENDED,
++ iface_no, NULL, 0, 1000);
++ if (err < 0)
+ pr_debug("Setting CRC mode off failed\n");
+ }
+
+ /* set NTB format, if both formats are supported */
+ if (ntb_fmt_supported & USB_CDC_NCM_NTH32_SIGN) {
+- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+- USB_RECIP_INTERFACE;
+- req.bNotificationType = USB_CDC_SET_NTB_FORMAT;
+- req.wValue = cpu_to_le16(USB_CDC_NCM_NTB16_FORMAT);
+- req.wIndex = cpu_to_le16(iface_no);
+- req.wLength = 0;
+-
+- err = cdc_ncm_do_request(ctx, &req, NULL, 0, NULL, 1000);
+- if (err)
++ err = usb_control_msg(ctx->udev, usb_sndctrlpipe(ctx->udev, 0),
++ USB_CDC_SET_NTB_FORMAT, USB_TYPE_CLASS
++ | USB_DIR_OUT | USB_RECIP_INTERFACE,
++ USB_CDC_NCM_NTB16_FORMAT,
++ iface_no, NULL, 0, 1000);
++ if (err < 0)
+ pr_debug("Setting NTB format to 16-bit failed\n");
+ }
+
+@@ -365,17 +328,13 @@ static u8 cdc_ncm_setup(struct cdc_ncm_ctx *ctx)
+ if (flags & USB_CDC_NCM_NCAP_MAX_DATAGRAM_SIZE) {
+ __le16 max_datagram_size;
+ u16 eth_max_sz = le16_to_cpu(ctx->ether_desc->wMaxSegmentSize);
+-
+- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_IN |
+- USB_RECIP_INTERFACE;
+- req.bNotificationType = USB_CDC_GET_MAX_DATAGRAM_SIZE;
+- req.wValue = 0;
+- req.wIndex = cpu_to_le16(iface_no);
+- req.wLength = cpu_to_le16(2);
+-
+- err = cdc_ncm_do_request(ctx, &req, &max_datagram_size, 0, NULL,
+- 1000);
+- if (err) {
++ err = usb_control_msg(ctx->udev, usb_rcvctrlpipe(ctx->udev, 0),
++ USB_CDC_GET_MAX_DATAGRAM_SIZE,
++ USB_TYPE_CLASS | USB_DIR_IN
++ | USB_RECIP_INTERFACE,
++ 0, iface_no, &max_datagram_size,
++ 2, 1000);
++ if (err < 0) {
+ pr_debug("GET_MAX_DATAGRAM_SIZE failed, use size=%u\n",
+ CDC_NCM_MIN_DATAGRAM_SIZE);
+ } else {
+@@ -396,17 +355,15 @@ static u8 cdc_ncm_setup(struct cdc_ncm_ctx *ctx)
+ CDC_NCM_MIN_DATAGRAM_SIZE;
+
+ /* if value changed, update device */
+- req.bmRequestType = USB_TYPE_CLASS | USB_DIR_OUT |
+- USB_RECIP_INTERFACE;
+- req.bNotificationType = USB_CDC_SET_MAX_DATAGRAM_SIZE;
+- req.wValue = 0;
+- req.wIndex = cpu_to_le16(iface_no);
+- req.wLength = 2;
+- max_datagram_size = cpu_to_le16(ctx->max_datagram_size);
+-
+- err = cdc_ncm_do_request(ctx, &req, &max_datagram_size,
+- 0, NULL, 1000);
+- if (err)
++ err = usb_control_msg(ctx->udev,
++ usb_sndctrlpipe(ctx->udev, 0),
++ USB_CDC_SET_MAX_DATAGRAM_SIZE,
++ USB_TYPE_CLASS | USB_DIR_OUT
++ | USB_RECIP_INTERFACE,
++ 0,
++ iface_no, &max_datagram_size,
++ 2, 1000);
++ if (err < 0)
+ pr_debug("SET_MAX_DATAGRAM_SIZE failed\n");
+ }
+
+@@ -672,7 +629,7 @@ cdc_ncm_fill_tx_frame(struct cdc_ncm_ctx *ctx, struct sk_buff *skb)
+ u32 rem;
+ u32 offset;
+ u32 last_offset;
+- u16 n = 0;
++ u16 n = 0, index;
+ u8 ready2send = 0;
+
+ /* if there is a remaining skb, it gets priority */
+@@ -860,8 +817,8 @@ cdc_ncm_fill_tx_frame(struct cdc_ncm_ctx *ctx, struct sk_buff *skb)
+ cpu_to_le16(sizeof(ctx->tx_ncm.nth16));
+ ctx->tx_ncm.nth16.wSequence = cpu_to_le16(ctx->tx_seq);
+ ctx->tx_ncm.nth16.wBlockLength = cpu_to_le16(last_offset);
+- ctx->tx_ncm.nth16.wNdpIndex = ALIGN(sizeof(struct usb_cdc_ncm_nth16),
+- ctx->tx_ndp_modulus);
++ index = ALIGN(sizeof(struct usb_cdc_ncm_nth16), ctx->tx_ndp_modulus);
++ ctx->tx_ncm.nth16.wNdpIndex = cpu_to_le16(index);
+
+ memcpy(skb_out->data, &(ctx->tx_ncm.nth16), sizeof(ctx->tx_ncm.nth16));
+ ctx->tx_seq++;
+@@ -874,12 +831,11 @@ cdc_ncm_fill_tx_frame(struct cdc_ncm_ctx *ctx, struct sk_buff *skb)
+ ctx->tx_ncm.ndp16.wLength = cpu_to_le16(rem);
+ ctx->tx_ncm.ndp16.wNextNdpIndex = 0; /* reserved */
+
+- memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wNdpIndex,
++ memcpy(((u8 *)skb_out->data) + index,
+ &(ctx->tx_ncm.ndp16),
+ sizeof(ctx->tx_ncm.ndp16));
+
+- memcpy(((u8 *)skb_out->data) + ctx->tx_ncm.nth16.wNdpIndex +
+- sizeof(ctx->tx_ncm.ndp16),
++ memcpy(((u8 *)skb_out->data) + index + sizeof(ctx->tx_ncm.ndp16),
+ &(ctx->tx_ncm.dpe16),
+ (ctx->tx_curr_frame_num + 1) *
+ sizeof(struct usb_cdc_ncm_dpe16));
+diff --git a/drivers/net/wireless/ath/ath9k/ar9002_calib.c b/drivers/net/wireless/ath/ath9k/ar9002_calib.c
+index 2d4c091..2d394af 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9002_calib.c
++++ b/drivers/net/wireless/ath/ath9k/ar9002_calib.c
+@@ -41,7 +41,8 @@ static bool ar9002_hw_is_cal_supported(struct ath_hw *ah,
+ case ADC_DC_CAL:
+ /* Run ADC Gain Cal for non-CCK & non 2GHz-HT20 only */
+ if (!IS_CHAN_B(chan) &&
+- !(IS_CHAN_2GHZ(chan) && IS_CHAN_HT20(chan)))
++ !((IS_CHAN_2GHZ(chan) || IS_CHAN_A_FAST_CLOCK(ah, chan)) &&
++ IS_CHAN_HT20(chan)))
+ supported = true;
+ break;
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h b/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
+index e8ac70d..029773c 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
++++ b/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
+@@ -1516,7 +1516,7 @@ static const u32 ar9300_2p2_mac_core[][2] = {
+ {0x00008258, 0x00000000},
+ {0x0000825c, 0x40000000},
+ {0x00008260, 0x00080922},
+- {0x00008264, 0x9bc00010},
++ {0x00008264, 0x9d400010},
+ {0x00008268, 0xffffffff},
+ {0x0000826c, 0x0000ffff},
+ {0x00008270, 0x00000000},
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+index 7e07f0f..417106b 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+@@ -68,7 +68,7 @@ static int ar9003_hw_power_interpolate(int32_t x,
+ static const struct ar9300_eeprom ar9300_default = {
+ .eepromVersion = 2,
+ .templateVersion = 2,
+- .macAddr = {1, 2, 3, 4, 5, 6},
++ .macAddr = {0, 2, 3, 4, 5, 6},
+ .custData = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+ .baseEepHeader = {
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index 2ca351f..5362306 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -2260,7 +2260,11 @@ static void ath9k_set_coverage_class(struct ieee80211_hw *hw, u8 coverage_class)
+
+ mutex_lock(&sc->mutex);
+ ah->coverage_class = coverage_class;
++
++ ath9k_ps_wakeup(sc);
+ ath9k_hw_init_global_settings(ah);
++ ath9k_ps_restore(sc);
++
+ mutex_unlock(&sc->mutex);
+ }
+
+diff --git a/drivers/net/wireless/ath/carl9170/main.c b/drivers/net/wireless/ath/carl9170/main.c
+index 54d093c..b54966c 100644
+--- a/drivers/net/wireless/ath/carl9170/main.c
++++ b/drivers/net/wireless/ath/carl9170/main.c
+@@ -1066,8 +1066,10 @@ static int carl9170_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ * the high througput speed in 802.11n networks.
+ */
+
+- if (!is_main_vif(ar, vif))
++ if (!is_main_vif(ar, vif)) {
++ mutex_lock(&ar->mutex);
+ goto err_softw;
++ }
+
+ /*
+ * While the hardware supports *catch-all* key, for offloading
+diff --git a/drivers/net/wireless/b43/main.c b/drivers/net/wireless/b43/main.c
+index eb41596..b1fe4fe 100644
+--- a/drivers/net/wireless/b43/main.c
++++ b/drivers/net/wireless/b43/main.c
+@@ -1571,7 +1571,8 @@ static void handle_irq_beacon(struct b43_wldev *dev)
+ u32 cmd, beacon0_valid, beacon1_valid;
+
+ if (!b43_is_mode(wl, NL80211_IFTYPE_AP) &&
+- !b43_is_mode(wl, NL80211_IFTYPE_MESH_POINT))
++ !b43_is_mode(wl, NL80211_IFTYPE_MESH_POINT) &&
++ !b43_is_mode(wl, NL80211_IFTYPE_ADHOC))
+ return;
+
+ /* This is the bottom half of the asynchronous beacon update. */
+diff --git a/drivers/net/wireless/iwlegacy/iwl-3945-rs.c b/drivers/net/wireless/iwlegacy/iwl-3945-rs.c
+index 977bd24..164bcae 100644
+--- a/drivers/net/wireless/iwlegacy/iwl-3945-rs.c
++++ b/drivers/net/wireless/iwlegacy/iwl-3945-rs.c
+@@ -822,12 +822,15 @@ static void iwl3945_rs_get_rate(void *priv_r, struct ieee80211_sta *sta,
+
+ out:
+
+- rs_sta->last_txrate_idx = index;
+- if (sband->band == IEEE80211_BAND_5GHZ)
+- info->control.rates[0].idx = rs_sta->last_txrate_idx -
+- IWL_FIRST_OFDM_RATE;
+- else
++ if (sband->band == IEEE80211_BAND_5GHZ) {
++ if (WARN_ON_ONCE(index < IWL_FIRST_OFDM_RATE))
++ index = IWL_FIRST_OFDM_RATE;
++ rs_sta->last_txrate_idx = index;
++ info->control.rates[0].idx = index - IWL_FIRST_OFDM_RATE;
++ } else {
++ rs_sta->last_txrate_idx = index;
+ info->control.rates[0].idx = rs_sta->last_txrate_idx;
++ }
+
+ IWL_DEBUG_RATE(priv, "leave: %d\n", index);
+ }
+diff --git a/drivers/net/wireless/iwlegacy/iwl-core.c b/drivers/net/wireless/iwlegacy/iwl-core.c
+index 3be76bd..d273d50 100644
+--- a/drivers/net/wireless/iwlegacy/iwl-core.c
++++ b/drivers/net/wireless/iwlegacy/iwl-core.c
+@@ -938,7 +938,7 @@ void iwl_legacy_irq_handle_error(struct iwl_priv *priv)
+ &priv->contexts[IWL_RXON_CTX_BSS]);
+ #endif
+
+- wake_up_interruptible(&priv->wait_command_queue);
++ wake_up(&priv->wait_command_queue);
+
+ /* Keep the restart process from trying to send host
+ * commands by clearing the INIT status bit */
+@@ -1776,7 +1776,7 @@ int iwl_legacy_force_reset(struct iwl_priv *priv, int mode, bool external)
+ IWL_ERR(priv, "On demand firmware reload\n");
+ /* Set the FW error flag -- cleared on iwl_down */
+ set_bit(STATUS_FW_ERROR, &priv->status);
+- wake_up_interruptible(&priv->wait_command_queue);
++ wake_up(&priv->wait_command_queue);
+ /*
+ * Keep the restart process from trying to send host
+ * commands by clearing the INIT status bit
+diff --git a/drivers/net/wireless/iwlegacy/iwl-hcmd.c b/drivers/net/wireless/iwlegacy/iwl-hcmd.c
+index 62b4b09..ce1fc9f 100644
+--- a/drivers/net/wireless/iwlegacy/iwl-hcmd.c
++++ b/drivers/net/wireless/iwlegacy/iwl-hcmd.c
+@@ -167,7 +167,7 @@ int iwl_legacy_send_cmd_sync(struct iwl_priv *priv, struct iwl_host_cmd *cmd)
+ goto out;
+ }
+
+- ret = wait_event_interruptible_timeout(priv->wait_command_queue,
++ ret = wait_event_timeout(priv->wait_command_queue,
+ !test_bit(STATUS_HCMD_ACTIVE, &priv->status),
+ HOST_COMPLETE_TIMEOUT);
+ if (!ret) {
+diff --git a/drivers/net/wireless/iwlegacy/iwl-tx.c b/drivers/net/wireless/iwlegacy/iwl-tx.c
+index 4fff995..ef9e268 100644
+--- a/drivers/net/wireless/iwlegacy/iwl-tx.c
++++ b/drivers/net/wireless/iwlegacy/iwl-tx.c
+@@ -625,6 +625,8 @@ iwl_legacy_tx_cmd_complete(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
+ cmd = txq->cmd[cmd_index];
+ meta = &txq->meta[cmd_index];
+
++ txq->time_stamp = jiffies;
++
+ pci_unmap_single(priv->pci_dev,
+ dma_unmap_addr(meta, mapping),
+ dma_unmap_len(meta, len),
+@@ -645,7 +647,7 @@ iwl_legacy_tx_cmd_complete(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
+ clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
+ IWL_DEBUG_INFO(priv, "Clearing HCMD_ACTIVE for command %s\n",
+ iwl_legacy_get_cmd_string(cmd->hdr.cmd));
+- wake_up_interruptible(&priv->wait_command_queue);
++ wake_up(&priv->wait_command_queue);
+ }
+
+ /* Mark as unmapped */
+diff --git a/drivers/net/wireless/iwlegacy/iwl3945-base.c b/drivers/net/wireless/iwlegacy/iwl3945-base.c
+index 0ee6be6..421d5c8 100644
+--- a/drivers/net/wireless/iwlegacy/iwl3945-base.c
++++ b/drivers/net/wireless/iwlegacy/iwl3945-base.c
+@@ -841,7 +841,7 @@ static void iwl3945_rx_card_state_notif(struct iwl_priv *priv,
+ wiphy_rfkill_set_hw_state(priv->hw->wiphy,
+ test_bit(STATUS_RF_KILL_HW, &priv->status));
+ else
+- wake_up_interruptible(&priv->wait_command_queue);
++ wake_up(&priv->wait_command_queue);
+ }
+
+ /**
+@@ -2518,7 +2518,7 @@ static void iwl3945_alive_start(struct iwl_priv *priv)
+ iwl3945_reg_txpower_periodic(priv);
+
+ IWL_DEBUG_INFO(priv, "ALIVE processing complete.\n");
+- wake_up_interruptible(&priv->wait_command_queue);
++ wake_up(&priv->wait_command_queue);
+
+ return;
+
+@@ -2549,7 +2549,7 @@ static void __iwl3945_down(struct iwl_priv *priv)
+ iwl_legacy_clear_driver_stations(priv);
+
+ /* Unblock any waiting calls */
+- wake_up_interruptible_all(&priv->wait_command_queue);
++ wake_up_all(&priv->wait_command_queue);
+
+ /* Wipe out the EXIT_PENDING status bit if we are not actually
+ * exiting the module */
+@@ -3125,7 +3125,7 @@ static int iwl3945_mac_start(struct ieee80211_hw *hw)
+
+ /* Wait for START_ALIVE from ucode. Otherwise callbacks from
+ * mac80211 will not be run successfully. */
+- ret = wait_event_interruptible_timeout(priv->wait_command_queue,
++ ret = wait_event_timeout(priv->wait_command_queue,
+ test_bit(STATUS_READY, &priv->status),
+ UCODE_READY_TIMEOUT);
+ if (!ret) {
+diff --git a/drivers/net/wireless/iwlegacy/iwl4965-base.c b/drivers/net/wireless/iwlegacy/iwl4965-base.c
+index 7157ba5..0c37c02 100644
+--- a/drivers/net/wireless/iwlegacy/iwl4965-base.c
++++ b/drivers/net/wireless/iwlegacy/iwl4965-base.c
+@@ -704,7 +704,7 @@ static void iwl4965_rx_card_state_notif(struct iwl_priv *priv,
+ wiphy_rfkill_set_hw_state(priv->hw->wiphy,
+ test_bit(STATUS_RF_KILL_HW, &priv->status));
+ else
+- wake_up_interruptible(&priv->wait_command_queue);
++ wake_up(&priv->wait_command_queue);
+ }
+
+ /**
+@@ -1054,7 +1054,7 @@ static void iwl4965_irq_tasklet(struct iwl_priv *priv)
+ handled |= CSR_INT_BIT_FH_TX;
+ /* Wake up uCode load routine, now that load is complete */
+ priv->ucode_write_complete = 1;
+- wake_up_interruptible(&priv->wait_command_queue);
++ wake_up(&priv->wait_command_queue);
+ }
+
+ if (inta & ~handled) {
+@@ -2126,7 +2126,7 @@ static void iwl4965_alive_start(struct iwl_priv *priv)
+ iwl4965_rf_kill_ct_config(priv);
+
+ IWL_DEBUG_INFO(priv, "ALIVE processing complete.\n");
+- wake_up_interruptible(&priv->wait_command_queue);
++ wake_up(&priv->wait_command_queue);
+
+ iwl_legacy_power_update_mode(priv, true);
+ IWL_DEBUG_INFO(priv, "Updated power mode\n");
+@@ -2159,7 +2159,7 @@ static void __iwl4965_down(struct iwl_priv *priv)
+ iwl_legacy_clear_driver_stations(priv);
+
+ /* Unblock any waiting calls */
+- wake_up_interruptible_all(&priv->wait_command_queue);
++ wake_up_all(&priv->wait_command_queue);
+
+ /* Wipe out the EXIT_PENDING status bit if we are not actually
+ * exiting the module */
+@@ -2597,7 +2597,7 @@ int iwl4965_mac_start(struct ieee80211_hw *hw)
+
+ /* Wait for START_ALIVE from Run Time ucode. Otherwise callbacks from
+ * mac80211 will not be run successfully. */
+- ret = wait_event_interruptible_timeout(priv->wait_command_queue,
++ ret = wait_event_timeout(priv->wait_command_queue,
+ test_bit(STATUS_READY, &priv->status),
+ UCODE_READY_TIMEOUT);
+ if (!ret) {
+diff --git a/drivers/net/wireless/iwlwifi/iwl-agn.c b/drivers/net/wireless/iwlwifi/iwl-agn.c
+index 8e1942e..f24165d 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-agn.c
++++ b/drivers/net/wireless/iwlwifi/iwl-agn.c
+@@ -2440,7 +2440,12 @@ static int iwl_mac_setup_register(struct iwl_priv *priv,
+ IEEE80211_HW_SPECTRUM_MGMT |
+ IEEE80211_HW_REPORTS_TX_ACK_STATUS;
+
++ /*
++ * Including the following line will crash some AP's. This
++ * workaround removes the stimulus which causes the crash until
++ * the AP software can be fixed.
+ hw->max_tx_aggregation_subframes = LINK_QUAL_AGG_FRAME_LIMIT_DEF;
++ */
+
+ hw->flags |= IEEE80211_HW_SUPPORTS_PS |
+ IEEE80211_HW_SUPPORTS_DYNAMIC_PS;
+diff --git a/drivers/net/wireless/iwlwifi/iwl-scan.c b/drivers/net/wireless/iwlwifi/iwl-scan.c
+index d60d630..f524016 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-scan.c
++++ b/drivers/net/wireless/iwlwifi/iwl-scan.c
+@@ -406,31 +406,33 @@ int iwl_mac_hw_scan(struct ieee80211_hw *hw,
+
+ mutex_lock(&priv->mutex);
+
+- if (test_bit(STATUS_SCANNING, &priv->status) &&
+- priv->scan_type != IWL_SCAN_NORMAL) {
+- IWL_DEBUG_SCAN(priv, "Scan already in progress.\n");
+- ret = -EAGAIN;
+- goto out_unlock;
+- }
+-
+- /* mac80211 will only ask for one band at a time */
+- priv->scan_request = req;
+- priv->scan_vif = vif;
+-
+ /*
+ * If an internal scan is in progress, just set
+ * up the scan_request as per above.
+ */
+ if (priv->scan_type != IWL_SCAN_NORMAL) {
+- IWL_DEBUG_SCAN(priv, "SCAN request during internal scan\n");
++ IWL_DEBUG_SCAN(priv,
++ "SCAN request during internal scan - defer\n");
++ priv->scan_request = req;
++ priv->scan_vif = vif;
+ ret = 0;
+- } else
++ } else {
++ priv->scan_request = req;
++ priv->scan_vif = vif;
++ /*
++ * mac80211 will only ask for one band at a time
++ * so using channels[0] here is ok
++ */
+ ret = iwl_scan_initiate(priv, vif, IWL_SCAN_NORMAL,
+ req->channels[0]->band);
++ if (ret) {
++ priv->scan_request = NULL;
++ priv->scan_vif = NULL;
++ }
++ }
+
+ IWL_DEBUG_MAC80211(priv, "leave\n");
+
+-out_unlock:
+ mutex_unlock(&priv->mutex);
+
+ return ret;
+diff --git a/drivers/net/wireless/iwlwifi/iwl-tx.c b/drivers/net/wireless/iwlwifi/iwl-tx.c
+index 137dba9..c368c50 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-tx.c
++++ b/drivers/net/wireless/iwlwifi/iwl-tx.c
+@@ -802,6 +802,8 @@ void iwl_tx_cmd_complete(struct iwl_priv *priv, struct iwl_rx_mem_buffer *rxb)
+ cmd = txq->cmd[cmd_index];
+ meta = &txq->meta[cmd_index];
+
++ txq->time_stamp = jiffies;
++
+ iwlagn_unmap_tfd(priv, meta, &txq->tfds[index], PCI_DMA_BIDIRECTIONAL);
+
+ /* Input error checking is done when commands are added to queue. */
+diff --git a/drivers/net/wireless/rt2x00/rt2800lib.c b/drivers/net/wireless/rt2x00/rt2800lib.c
+index 5a45228..3f7ea1c 100644
+--- a/drivers/net/wireless/rt2x00/rt2800lib.c
++++ b/drivers/net/wireless/rt2x00/rt2800lib.c
+@@ -38,6 +38,7 @@
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
++#include <linux/sched.h>
+
+ #include "rt2x00.h"
+ #include "rt2800lib.h"
+@@ -607,6 +608,15 @@ static bool rt2800_txdone_entry_check(struct queue_entry *entry, u32 reg)
+ int wcid, ack, pid;
+ int tx_wcid, tx_ack, tx_pid;
+
++ if (test_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags) ||
++ !test_bit(ENTRY_DATA_STATUS_PENDING, &entry->flags)) {
++ WARNING(entry->queue->rt2x00dev,
++ "Data pending for entry %u in queue %u\n",
++ entry->entry_idx, entry->queue->qid);
++ cond_resched();
++ return false;
++ }
++
+ wcid = rt2x00_get_field32(reg, TX_STA_FIFO_WCID);
+ ack = rt2x00_get_field32(reg, TX_STA_FIFO_TX_ACK_REQUIRED);
+ pid = rt2x00_get_field32(reg, TX_STA_FIFO_PID_TYPE);
+@@ -754,12 +764,11 @@ void rt2800_txdone(struct rt2x00_dev *rt2x00dev)
+ entry = rt2x00queue_get_entry(queue, Q_INDEX_DONE);
+ if (rt2800_txdone_entry_check(entry, reg))
+ break;
++ entry = NULL;
+ }
+
+- if (!entry || rt2x00queue_empty(queue))
+- break;
+-
+- rt2800_txdone_entry(entry, reg);
++ if (entry)
++ rt2800_txdone_entry(entry, reg);
+ }
+ }
+ EXPORT_SYMBOL_GPL(rt2800_txdone);
+@@ -3503,14 +3512,15 @@ static void rt2800_efuse_read(struct rt2x00_dev *rt2x00dev, unsigned int i)
+ rt2800_regbusy_read(rt2x00dev, EFUSE_CTRL, EFUSE_CTRL_KICK, &reg);
+
+ /* Apparently the data is read from end to start */
+- rt2800_register_read_lock(rt2x00dev, EFUSE_DATA3,
+- (u32 *)&rt2x00dev->eeprom[i]);
+- rt2800_register_read_lock(rt2x00dev, EFUSE_DATA2,
+- (u32 *)&rt2x00dev->eeprom[i + 2]);
+- rt2800_register_read_lock(rt2x00dev, EFUSE_DATA1,
+- (u32 *)&rt2x00dev->eeprom[i + 4]);
+- rt2800_register_read_lock(rt2x00dev, EFUSE_DATA0,
+- (u32 *)&rt2x00dev->eeprom[i + 6]);
++ rt2800_register_read_lock(rt2x00dev, EFUSE_DATA3, &reg);
++ /* The returned value is in CPU order, but eeprom is le */
++ rt2x00dev->eeprom[i] = cpu_to_le32(reg);
++ rt2800_register_read_lock(rt2x00dev, EFUSE_DATA2, &reg);
++ *(u32 *)&rt2x00dev->eeprom[i + 2] = cpu_to_le32(reg);
++ rt2800_register_read_lock(rt2x00dev, EFUSE_DATA1, &reg);
++ *(u32 *)&rt2x00dev->eeprom[i + 4] = cpu_to_le32(reg);
++ rt2800_register_read_lock(rt2x00dev, EFUSE_DATA0, &reg);
++ *(u32 *)&rt2x00dev->eeprom[i + 6] = cpu_to_le32(reg);
+
+ mutex_unlock(&rt2x00dev->csr_mutex);
+ }
+@@ -3676,19 +3686,23 @@ int rt2800_init_eeprom(struct rt2x00_dev *rt2x00dev)
+ return -ENODEV;
+ }
+
+- if (!rt2x00_rf(rt2x00dev, RF2820) &&
+- !rt2x00_rf(rt2x00dev, RF2850) &&
+- !rt2x00_rf(rt2x00dev, RF2720) &&
+- !rt2x00_rf(rt2x00dev, RF2750) &&
+- !rt2x00_rf(rt2x00dev, RF3020) &&
+- !rt2x00_rf(rt2x00dev, RF2020) &&
+- !rt2x00_rf(rt2x00dev, RF3021) &&
+- !rt2x00_rf(rt2x00dev, RF3022) &&
+- !rt2x00_rf(rt2x00dev, RF3052) &&
+- !rt2x00_rf(rt2x00dev, RF3320) &&
+- !rt2x00_rf(rt2x00dev, RF5370) &&
+- !rt2x00_rf(rt2x00dev, RF5390)) {
+- ERROR(rt2x00dev, "Invalid RF chipset detected.\n");
++ switch (rt2x00dev->chip.rf) {
++ case RF2820:
++ case RF2850:
++ case RF2720:
++ case RF2750:
++ case RF3020:
++ case RF2020:
++ case RF3021:
++ case RF3022:
++ case RF3052:
++ case RF3320:
++ case RF5370:
++ case RF5390:
++ break;
++ default:
++ ERROR(rt2x00dev, "Invalid RF chipset 0x%x detected.\n",
++ rt2x00dev->chip.rf);
+ return -ENODEV;
+ }
+
+diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c
+index ba82c97..6e7fe94 100644
+--- a/drivers/net/wireless/rt2x00/rt2800usb.c
++++ b/drivers/net/wireless/rt2x00/rt2800usb.c
+@@ -477,8 +477,10 @@ static void rt2800usb_work_txdone(struct work_struct *work)
+ while (!rt2x00queue_empty(queue)) {
+ entry = rt2x00queue_get_entry(queue, Q_INDEX_DONE);
+
+- if (test_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags))
++ if (test_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags) ||
++ !test_bit(ENTRY_DATA_STATUS_PENDING, &entry->flags))
+ break;
++
+ if (test_bit(ENTRY_DATA_IO_FAILED, &entry->flags))
+ rt2x00lib_txdone_noinfo(entry, TXDONE_FAILURE);
+ else if (rt2x00queue_status_timeout(entry))
+diff --git a/drivers/net/wireless/rt2x00/rt2x00usb.c b/drivers/net/wireless/rt2x00/rt2x00usb.c
+index 241a099..54f0b13 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00usb.c
++++ b/drivers/net/wireless/rt2x00/rt2x00usb.c
+@@ -870,18 +870,8 @@ int rt2x00usb_suspend(struct usb_interface *usb_intf, pm_message_t state)
+ {
+ struct ieee80211_hw *hw = usb_get_intfdata(usb_intf);
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+- int retval;
+-
+- retval = rt2x00lib_suspend(rt2x00dev, state);
+- if (retval)
+- return retval;
+
+- /*
+- * Decrease usbdev refcount.
+- */
+- usb_put_dev(interface_to_usbdev(usb_intf));
+-
+- return 0;
++ return rt2x00lib_suspend(rt2x00dev, state);
+ }
+ EXPORT_SYMBOL_GPL(rt2x00usb_suspend);
+
+@@ -890,8 +880,6 @@ int rt2x00usb_resume(struct usb_interface *usb_intf)
+ struct ieee80211_hw *hw = usb_get_intfdata(usb_intf);
+ struct rt2x00_dev *rt2x00dev = hw->priv;
+
+- usb_get_dev(interface_to_usbdev(usb_intf));
+-
+ return rt2x00lib_resume(rt2x00dev);
+ }
+ EXPORT_SYMBOL_GPL(rt2x00usb_resume);
+diff --git a/drivers/net/wireless/rtlwifi/core.c b/drivers/net/wireless/rtlwifi/core.c
+index d2ec253..ce0444c 100644
+--- a/drivers/net/wireless/rtlwifi/core.c
++++ b/drivers/net/wireless/rtlwifi/core.c
+@@ -610,6 +610,11 @@ static void rtl_op_bss_info_changed(struct ieee80211_hw *hw,
+
+ mac->link_state = MAC80211_NOLINK;
+ memset(mac->bssid, 0, 6);
++
++ /* reset sec info */
++ rtl_cam_reset_sec_info(hw);
++
++ rtl_cam_reset_all_entry(hw);
+ mac->vendor = PEER_UNKNOWN;
+
+ RT_TRACE(rtlpriv, COMP_MAC80211, DBG_DMESG,
+@@ -1063,6 +1068,9 @@ static int rtl_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ *or clear all entry here.
+ */
+ rtl_cam_delete_one_entry(hw, mac_addr, key_idx);
++
++ rtl_cam_reset_sec_info(hw);
++
+ break;
+ default:
+ RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
+diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/trx.c b/drivers/net/wireless/rtlwifi/rtl8192cu/trx.c
+index 3a92ba3..10b2ef0 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8192cu/trx.c
++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/trx.c
+@@ -549,15 +549,16 @@ void rtl92cu_tx_fill_desc(struct ieee80211_hw *hw,
+ (tcb_desc->rts_use_shortpreamble ? 1 : 0)
+ : (tcb_desc->rts_use_shortgi ? 1 : 0)));
+ if (mac->bw_40) {
+- if (tcb_desc->packet_bw) {
++ if (rate_flag & IEEE80211_TX_RC_DUP_DATA) {
+ SET_TX_DESC_DATA_BW(txdesc, 1);
+ SET_TX_DESC_DATA_SC(txdesc, 3);
++ } else if(rate_flag & IEEE80211_TX_RC_40_MHZ_WIDTH){
++ SET_TX_DESC_DATA_BW(txdesc, 1);
++ SET_TX_DESC_DATA_SC(txdesc, mac->cur_40_prime_sc);
+ } else {
+ SET_TX_DESC_DATA_BW(txdesc, 0);
+- if (rate_flag & IEEE80211_TX_RC_DUP_DATA)
+- SET_TX_DESC_DATA_SC(txdesc,
+- mac->cur_40_prime_sc);
+- }
++ SET_TX_DESC_DATA_SC(txdesc, 0);
++ }
+ } else {
+ SET_TX_DESC_DATA_BW(txdesc, 0);
+ SET_TX_DESC_DATA_SC(txdesc, 0);
+diff --git a/drivers/net/wireless/rtlwifi/usb.c b/drivers/net/wireless/rtlwifi/usb.c
+index a9367eb..e4272b9 100644
+--- a/drivers/net/wireless/rtlwifi/usb.c
++++ b/drivers/net/wireless/rtlwifi/usb.c
+@@ -861,6 +861,7 @@ static void _rtl_usb_tx_preprocess(struct ieee80211_hw *hw, struct sk_buff *skb,
+ u8 tid = 0;
+ u16 seq_number = 0;
+
++ memset(&tcb_desc, 0, sizeof(struct rtl_tcb_desc));
+ if (ieee80211_is_auth(fc)) {
+ RT_TRACE(rtlpriv, COMP_SEND, DBG_DMESG, ("MAC80211_LINKING\n"));
+ rtl_ips_nic_on(hw);
+diff --git a/drivers/pci/dmar.c b/drivers/pci/dmar.c
+index 3dc9bef..6dcc7e2 100644
+--- a/drivers/pci/dmar.c
++++ b/drivers/pci/dmar.c
+@@ -1388,7 +1388,7 @@ int dmar_set_interrupt(struct intel_iommu *iommu)
+ return ret;
+ }
+
+- ret = request_irq(irq, dmar_fault, 0, iommu->name, iommu);
++ ret = request_irq(irq, dmar_fault, IRQF_NO_THREAD, iommu->name, iommu);
+ if (ret)
+ printk(KERN_ERR "IOMMU: can't request irq\n");
+ return ret;
+diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
+index ee89358..ebe77dd 100644
+--- a/drivers/rapidio/rio-scan.c
++++ b/drivers/rapidio/rio-scan.c
+@@ -505,8 +505,7 @@ static struct rio_dev __devinit *rio_setup_device(struct rio_net *net,
+ rdev->dev.dma_mask = &rdev->dma_mask;
+ rdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+
+- if ((rdev->pef & RIO_PEF_INB_DOORBELL) &&
+- (rdev->dst_ops & RIO_DST_OPS_DOORBELL))
++ if (rdev->dst_ops & RIO_DST_OPS_DOORBELL)
+ rio_init_dbell_res(&rdev->riores[RIO_DOORBELL_RESOURCE],
+ 0, 0xffff);
+
+diff --git a/drivers/regulator/tps65910-regulator.c b/drivers/regulator/tps65910-regulator.c
+index 55dd4e6..425aab3 100644
+--- a/drivers/regulator/tps65910-regulator.c
++++ b/drivers/regulator/tps65910-regulator.c
+@@ -759,8 +759,13 @@ static int tps65910_list_voltage_dcdc(struct regulator_dev *dev,
+ mult = (selector / VDD1_2_NUM_VOLTS) + 1;
+ volt = VDD1_2_MIN_VOLT +
+ (selector % VDD1_2_NUM_VOLTS) * VDD1_2_OFFSET;
++ break;
+ case TPS65911_REG_VDDCTRL:
+ volt = VDDCTRL_MIN_VOLT + (selector * VDDCTRL_OFFSET);
++ break;
++ default:
++ BUG();
++ return -EINVAL;
+ }
+
+ return volt * 100 * mult;
+@@ -898,9 +903,11 @@ static __devinit int tps65910_probe(struct platform_device *pdev)
+ case TPS65910:
+ pmic->get_ctrl_reg = &tps65910_get_ctrl_register;
+ info = tps65910_regs;
++ break;
+ case TPS65911:
+ pmic->get_ctrl_reg = &tps65911_get_ctrl_register;
+ info = tps65911_regs;
++ break;
+ default:
+ pr_err("Invalid tps chip version\n");
+ return -ENODEV;
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index 3195dbd..eb4c883 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -708,7 +708,7 @@ int rtc_irq_set_freq(struct rtc_device *rtc, struct rtc_task *task, int freq)
+ int err = 0;
+ unsigned long flags;
+
+- if (freq <= 0 || freq > 5000)
++ if (freq <= 0 || freq > RTC_MAX_FREQ)
+ return -EINVAL;
+ retry:
+ spin_lock_irqsave(&rtc->irq_task_lock, flags);
+diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c
+index 5c4e741..68be6e1 100644
+--- a/drivers/s390/cio/qdio_thinint.c
++++ b/drivers/s390/cio/qdio_thinint.c
+@@ -95,9 +95,11 @@ void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr)
+ }
+ }
+
+-static inline u32 shared_ind_set(void)
++static inline u32 clear_shared_ind(void)
+ {
+- return q_indicators[TIQDIO_SHARED_IND].ind;
++ if (!atomic_read(&q_indicators[TIQDIO_SHARED_IND].count))
++ return 0;
++ return xchg(&q_indicators[TIQDIO_SHARED_IND].ind, 0);
+ }
+
+ /**
+@@ -107,7 +109,7 @@ static inline u32 shared_ind_set(void)
+ */
+ static void tiqdio_thinint_handler(void *alsi, void *data)
+ {
+- u32 si_used = shared_ind_set();
++ u32 si_used = clear_shared_ind();
+ struct qdio_q *q;
+
+ last_ai_time = S390_lowcore.int_clock;
+@@ -150,13 +152,6 @@ static void tiqdio_thinint_handler(void *alsi, void *data)
+ qperf_inc(q, adapter_int);
+ }
+ rcu_read_unlock();
+-
+- /*
+- * If the shared indicator was used clear it now after all queues
+- * were processed.
+- */
+- if (si_used && shared_ind_set())
+- xchg(&q_indicators[TIQDIO_SHARED_IND].ind, 0);
+ }
+
+ static int set_subchannel_ind(struct qdio_irq *irq_ptr, int reset)
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index b7bd5b0..3868ab2 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -1800,10 +1800,12 @@ static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_
+ switch (retval) {
+ case SCSI_MLQUEUE_HOST_BUSY:
+ twa_free_request_id(tw_dev, request_id);
++ twa_unmap_scsi_data(tw_dev, request_id);
+ break;
+ case 1:
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
++ twa_unmap_scsi_data(tw_dev, request_id);
+ SCpnt->result = (DID_ERROR << 16);
+ done(SCpnt);
+ retval = 0;
+diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
+index 3c08f53..6153a66 100644
+--- a/drivers/scsi/Makefile
++++ b/drivers/scsi/Makefile
+@@ -88,7 +88,7 @@ obj-$(CONFIG_SCSI_QLOGIC_FAS) += qlogicfas408.o qlogicfas.o
+ obj-$(CONFIG_PCMCIA_QLOGIC) += qlogicfas408.o
+ obj-$(CONFIG_SCSI_QLOGIC_1280) += qla1280.o
+ obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx/
+-obj-$(CONFIG_SCSI_QLA_ISCSI) += qla4xxx/
++obj-$(CONFIG_SCSI_QLA_ISCSI) += libiscsi.o qla4xxx/
+ obj-$(CONFIG_SCSI_LPFC) += lpfc/
+ obj-$(CONFIG_SCSI_BFA_FC) += bfa/
+ obj-$(CONFIG_SCSI_PAS16) += pas16.o
+diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
+index e7d0d47..e5f2d7d 100644
+--- a/drivers/scsi/aacraid/commsup.c
++++ b/drivers/scsi/aacraid/commsup.c
+@@ -1283,6 +1283,8 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced)
+ kfree(aac->queues);
+ aac->queues = NULL;
+ free_irq(aac->pdev->irq, aac);
++ if (aac->msi)
++ pci_disable_msi(aac->pdev);
+ kfree(aac->fsa_dev);
+ aac->fsa_dev = NULL;
+ quirks = aac_get_driver_ident(index)->quirks;
+diff --git a/drivers/scsi/bnx2fc/bnx2fc.h b/drivers/scsi/bnx2fc/bnx2fc.h
+index 0a404bf..856fcbf 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc.h
++++ b/drivers/scsi/bnx2fc/bnx2fc.h
+@@ -152,7 +152,6 @@ struct bnx2fc_percpu_s {
+ spinlock_t fp_work_lock;
+ };
+
+-
+ struct bnx2fc_hba {
+ struct list_head link;
+ struct cnic_dev *cnic;
+@@ -179,6 +178,7 @@ struct bnx2fc_hba {
+ #define BNX2FC_CTLR_INIT_DONE 1
+ #define BNX2FC_CREATE_DONE 2
+ struct fcoe_ctlr ctlr;
++ struct list_head vports;
+ u8 vlan_enabled;
+ int vlan_id;
+ u32 next_conn_id;
+@@ -232,6 +232,11 @@ struct bnx2fc_hba {
+
+ #define bnx2fc_from_ctlr(fip) container_of(fip, struct bnx2fc_hba, ctlr)
+
++struct bnx2fc_lport {
++ struct list_head list;
++ struct fc_lport *lport;
++};
++
+ struct bnx2fc_cmd_mgr {
+ struct bnx2fc_hba *hba;
+ u16 next_idx;
+@@ -423,6 +428,7 @@ struct bnx2fc_work {
+ struct bnx2fc_unsol_els {
+ struct fc_lport *lport;
+ struct fc_frame *fp;
++ struct bnx2fc_hba *hba;
+ struct work_struct unsol_els_work;
+ };
+
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+index ab255fb..bdf62a5 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+@@ -1225,6 +1225,7 @@ static int bnx2fc_interface_setup(struct bnx2fc_hba *hba,
+ hba->ctlr.get_src_addr = bnx2fc_get_src_mac;
+ set_bit(BNX2FC_CTLR_INIT_DONE, &hba->init_done);
+
++ INIT_LIST_HEAD(&hba->vports);
+ rc = bnx2fc_netdev_setup(hba);
+ if (rc)
+ goto setup_err;
+@@ -1261,8 +1262,15 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_hba *hba,
+ struct fcoe_port *port;
+ struct Scsi_Host *shost;
+ struct fc_vport *vport = dev_to_vport(parent);
++ struct bnx2fc_lport *blport;
+ int rc = 0;
+
++ blport = kzalloc(sizeof(struct bnx2fc_lport), GFP_KERNEL);
++ if (!blport) {
++ BNX2FC_HBA_DBG(hba->ctlr.lp, "Unable to alloc bnx2fc_lport\n");
++ return NULL;
++ }
++
+ /* Allocate Scsi_Host structure */
+ if (!npiv)
+ lport = libfc_host_alloc(&bnx2fc_shost_template, sizeof(*port));
+@@ -1271,7 +1279,7 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_hba *hba,
+
+ if (!lport) {
+ printk(KERN_ERR PFX "could not allocate scsi host structure\n");
+- return NULL;
++ goto free_blport;
+ }
+ shost = lport->host;
+ port = lport_priv(lport);
+@@ -1327,12 +1335,20 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_hba *hba,
+ }
+
+ bnx2fc_interface_get(hba);
++
++ spin_lock_bh(&hba->hba_lock);
++ blport->lport = lport;
++ list_add_tail(&blport->list, &hba->vports);
++ spin_unlock_bh(&hba->hba_lock);
++
+ return lport;
+
+ shost_err:
+ scsi_remove_host(shost);
+ lp_config_err:
+ scsi_host_put(lport->host);
++free_blport:
++ kfree(blport);
+ return NULL;
+ }
+
+@@ -1348,6 +1364,7 @@ static void bnx2fc_if_destroy(struct fc_lport *lport)
+ {
+ struct fcoe_port *port = lport_priv(lport);
+ struct bnx2fc_hba *hba = port->priv;
++ struct bnx2fc_lport *blport, *tmp;
+
+ BNX2FC_HBA_DBG(hba->ctlr.lp, "ENTERED bnx2fc_if_destroy\n");
+ /* Stop the transmit retry timer */
+@@ -1372,6 +1389,15 @@ static void bnx2fc_if_destroy(struct fc_lport *lport)
+ /* Free memory used by statistical counters */
+ fc_lport_free_stats(lport);
+
++ spin_lock_bh(&hba->hba_lock);
++ list_for_each_entry_safe(blport, tmp, &hba->vports, list) {
++ if (blport->lport == lport) {
++ list_del(&blport->list);
++ kfree(blport);
++ }
++ }
++ spin_unlock_bh(&hba->hba_lock);
++
+ /* Release Scsi_Host */
+ scsi_host_put(lport->host);
+
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_hwi.c b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
+index f756d5f..78baa46 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_hwi.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
+@@ -480,16 +480,36 @@ int bnx2fc_send_session_destroy_req(struct bnx2fc_hba *hba,
+ return rc;
+ }
+
++static bool is_valid_lport(struct bnx2fc_hba *hba, struct fc_lport *lport)
++{
++ struct bnx2fc_lport *blport;
++
++ spin_lock_bh(&hba->hba_lock);
++ list_for_each_entry(blport, &hba->vports, list) {
++ if (blport->lport == lport) {
++ spin_unlock_bh(&hba->hba_lock);
++ return true;
++ }
++ }
++ spin_unlock_bh(&hba->hba_lock);
++ return false;
++
++}
++
++
+ static void bnx2fc_unsol_els_work(struct work_struct *work)
+ {
+ struct bnx2fc_unsol_els *unsol_els;
+ struct fc_lport *lport;
++ struct bnx2fc_hba *hba;
+ struct fc_frame *fp;
+
+ unsol_els = container_of(work, struct bnx2fc_unsol_els, unsol_els_work);
+ lport = unsol_els->lport;
+ fp = unsol_els->fp;
+- fc_exch_recv(lport, fp);
++ hba = unsol_els->hba;
++ if (is_valid_lport(hba, lport))
++ fc_exch_recv(lport, fp);
+ kfree(unsol_els);
+ }
+
+@@ -499,6 +519,7 @@ void bnx2fc_process_l2_frame_compl(struct bnx2fc_rport *tgt,
+ {
+ struct fcoe_port *port = tgt->port;
+ struct fc_lport *lport = port->lport;
++ struct bnx2fc_hba *hba = port->priv;
+ struct bnx2fc_unsol_els *unsol_els;
+ struct fc_frame_header *fh;
+ struct fc_frame *fp;
+@@ -559,6 +580,7 @@ void bnx2fc_process_l2_frame_compl(struct bnx2fc_rport *tgt,
+ fr_eof(fp) = FC_EOF_T;
+ fr_crc(fp) = cpu_to_le32(~crc);
+ unsol_els->lport = lport;
++ unsol_els->hba = hba;
+ unsol_els->fp = fp;
+ INIT_WORK(&unsol_els->unsol_els_work, bnx2fc_unsol_els_work);
+ queue_work(bnx2fc_wq, &unsol_els->unsol_els_work);
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_io.c b/drivers/scsi/bnx2fc/bnx2fc_io.c
+index b5b5c34..454c72c 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_io.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_io.c
+@@ -1734,7 +1734,6 @@ void bnx2fc_process_scsi_cmd_compl(struct bnx2fc_cmd *io_req,
+ printk(KERN_ERR PFX "SCp.ptr is NULL\n");
+ return;
+ }
+- io_req->sc_cmd = NULL;
+
+ if (io_req->on_active_queue) {
+ list_del_init(&io_req->link);
+@@ -1754,6 +1753,7 @@ void bnx2fc_process_scsi_cmd_compl(struct bnx2fc_cmd *io_req,
+ }
+
+ bnx2fc_unmap_sg_list(io_req);
++ io_req->sc_cmd = NULL;
+
+ switch (io_req->fcp_status) {
+ case FC_GOOD:
+diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+index fc2cdb6..b2d6611 100644
+--- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
++++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+@@ -913,7 +913,7 @@ static void l2t_put(struct cxgbi_sock *csk)
+ struct t3cdev *t3dev = (struct t3cdev *)csk->cdev->lldev;
+
+ if (csk->l2t) {
+- l2t_release(L2DATA(t3dev), csk->l2t);
++ l2t_release(t3dev, csk->l2t);
+ csk->l2t = NULL;
+ cxgbi_sock_put(csk);
+ }
+diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
+index 155d7b9..8885b3e 100644
+--- a/drivers/scsi/fcoe/fcoe.c
++++ b/drivers/scsi/fcoe/fcoe.c
+@@ -749,12 +749,27 @@ static int fcoe_shost_config(struct fc_lport *lport, struct device *dev)
+ * The offload EM that this routine is associated with will handle any
+ * packets that are for SCSI read requests.
+ *
++ * This has been enhanced to work when FCoE stack is operating in target
++ * mode.
++ *
+ * Returns: True for read types I/O, otherwise returns false.
+ */
+ bool fcoe_oem_match(struct fc_frame *fp)
+ {
+- return fc_fcp_is_read(fr_fsp(fp)) &&
+- (fr_fsp(fp)->data_len > fcoe_ddp_min);
++ struct fc_frame_header *fh = fc_frame_header_get(fp);
++ struct fcp_cmnd *fcp;
++
++ if (fc_fcp_is_read(fr_fsp(fp)) &&
++ (fr_fsp(fp)->data_len > fcoe_ddp_min))
++ return true;
++ else if (!(ntoh24(fh->fh_f_ctl) & FC_FC_EX_CTX)) {
++ fcp = fc_frame_payload_get(fp, sizeof(*fcp));
++ if (ntohs(fh->fh_rx_id) == FC_XID_UNKNOWN &&
++ fcp && (ntohl(fcp->fc_dl) > fcoe_ddp_min) &&
++ (fcp->fc_flags & FCP_CFL_WRDATA))
++ return true;
++ }
++ return false;
+ }
+
+ /**
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index 6bba23a..78c2e20 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -676,6 +676,16 @@ static void hpsa_scsi_replace_entry(struct ctlr_info *h, int hostno,
+ BUG_ON(entry < 0 || entry >= HPSA_MAX_SCSI_DEVS_PER_HBA);
+ removed[*nremoved] = h->dev[entry];
+ (*nremoved)++;
++
++ /*
++ * New physical devices won't have target/lun assigned yet
++ * so we need to preserve the values in the slot we are replacing.
++ */
++ if (new_entry->target == -1) {
++ new_entry->target = h->dev[entry]->target;
++ new_entry->lun = h->dev[entry]->lun;
++ }
++
+ h->dev[entry] = new_entry;
+ added[*nadded] = new_entry;
+ (*nadded)++;
+@@ -1548,10 +1558,17 @@ static inline void hpsa_set_bus_target_lun(struct hpsa_scsi_dev_t *device,
+ }
+
+ static int hpsa_update_device_info(struct ctlr_info *h,
+- unsigned char scsi3addr[], struct hpsa_scsi_dev_t *this_device)
++ unsigned char scsi3addr[], struct hpsa_scsi_dev_t *this_device,
++ unsigned char *is_OBDR_device)
+ {
+-#define OBDR_TAPE_INQ_SIZE 49
++
++#define OBDR_SIG_OFFSET 43
++#define OBDR_TAPE_SIG "$DR-10"
++#define OBDR_SIG_LEN (sizeof(OBDR_TAPE_SIG) - 1)
++#define OBDR_TAPE_INQ_SIZE (OBDR_SIG_OFFSET + OBDR_SIG_LEN)
++
+ unsigned char *inq_buff;
++ unsigned char *obdr_sig;
+
+ inq_buff = kzalloc(OBDR_TAPE_INQ_SIZE, GFP_KERNEL);
+ if (!inq_buff)
+@@ -1583,6 +1600,16 @@ static int hpsa_update_device_info(struct ctlr_info *h,
+ else
+ this_device->raid_level = RAID_UNKNOWN;
+
++ if (is_OBDR_device) {
++ /* See if this is a One-Button-Disaster-Recovery device
++ * by looking for "$DR-10" at offset 43 in inquiry data.
++ */
++ obdr_sig = &inq_buff[OBDR_SIG_OFFSET];
++ *is_OBDR_device = (this_device->devtype == TYPE_ROM &&
++ strncmp(obdr_sig, OBDR_TAPE_SIG,
++ OBDR_SIG_LEN) == 0);
++ }
++
+ kfree(inq_buff);
+ return 0;
+
+@@ -1716,7 +1743,7 @@ static int add_msa2xxx_enclosure_device(struct ctlr_info *h,
+ return 0;
+ }
+
+- if (hpsa_update_device_info(h, scsi3addr, this_device))
++ if (hpsa_update_device_info(h, scsi3addr, this_device, NULL))
+ return 0;
+ (*nmsa2xxx_enclosures)++;
+ hpsa_set_bus_target_lun(this_device, bus, target, 0);
+@@ -1808,7 +1835,6 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
+ */
+ struct ReportLUNdata *physdev_list = NULL;
+ struct ReportLUNdata *logdev_list = NULL;
+- unsigned char *inq_buff = NULL;
+ u32 nphysicals = 0;
+ u32 nlogicals = 0;
+ u32 ndev_allocated = 0;
+@@ -1824,11 +1850,9 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
+ GFP_KERNEL);
+ physdev_list = kzalloc(reportlunsize, GFP_KERNEL);
+ logdev_list = kzalloc(reportlunsize, GFP_KERNEL);
+- inq_buff = kmalloc(OBDR_TAPE_INQ_SIZE, GFP_KERNEL);
+ tmpdevice = kzalloc(sizeof(*tmpdevice), GFP_KERNEL);
+
+- if (!currentsd || !physdev_list || !logdev_list ||
+- !inq_buff || !tmpdevice) {
++ if (!currentsd || !physdev_list || !logdev_list || !tmpdevice) {
+ dev_err(&h->pdev->dev, "out of memory\n");
+ goto out;
+ }
+@@ -1863,7 +1887,7 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
+ /* adjust our table of devices */
+ nmsa2xxx_enclosures = 0;
+ for (i = 0; i < nphysicals + nlogicals + 1; i++) {
+- u8 *lunaddrbytes;
++ u8 *lunaddrbytes, is_OBDR = 0;
+
+ /* Figure out where the LUN ID info is coming from */
+ lunaddrbytes = figure_lunaddrbytes(h, raid_ctlr_position,
+@@ -1874,7 +1898,8 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
+ continue;
+
+ /* Get device type, vendor, model, device id */
+- if (hpsa_update_device_info(h, lunaddrbytes, tmpdevice))
++ if (hpsa_update_device_info(h, lunaddrbytes, tmpdevice,
++ &is_OBDR))
+ continue; /* skip it if we can't talk to it. */
+ figure_bus_target_lun(h, lunaddrbytes, &bus, &target, &lun,
+ tmpdevice);
+@@ -1898,7 +1923,7 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
+ hpsa_set_bus_target_lun(this_device, bus, target, lun);
+
+ switch (this_device->devtype) {
+- case TYPE_ROM: {
++ case TYPE_ROM:
+ /* We don't *really* support actual CD-ROM devices,
+ * just "One Button Disaster Recovery" tape drive
+ * which temporarily pretends to be a CD-ROM drive.
+@@ -1906,15 +1931,8 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
+ * device by checking for "$DR-10" in bytes 43-48 of
+ * the inquiry data.
+ */
+- char obdr_sig[7];
+-#define OBDR_TAPE_SIG "$DR-10"
+- strncpy(obdr_sig, &inq_buff[43], 6);
+- obdr_sig[6] = '\0';
+- if (strncmp(obdr_sig, OBDR_TAPE_SIG, 6) != 0)
+- /* Not OBDR device, ignore it. */
+- break;
+- }
+- ncurrent++;
++ if (is_OBDR)
++ ncurrent++;
+ break;
+ case TYPE_DISK:
+ if (i < nphysicals)
+@@ -1947,7 +1965,6 @@ out:
+ for (i = 0; i < ndev_allocated; i++)
+ kfree(currentsd[i]);
+ kfree(currentsd);
+- kfree(inq_buff);
+ kfree(physdev_list);
+ kfree(logdev_list);
+ }
+diff --git a/drivers/scsi/isci/host.c b/drivers/scsi/isci/host.c
+index 26072f1..ef46d83 100644
+--- a/drivers/scsi/isci/host.c
++++ b/drivers/scsi/isci/host.c
+@@ -531,6 +531,9 @@ static void sci_controller_process_completions(struct isci_host *ihost)
+ break;
+
+ case SCU_COMPLETION_TYPE_EVENT:
++ sci_controller_event_completion(ihost, ent);
++ break;
++
+ case SCU_COMPLETION_TYPE_NOTIFY: {
+ event_cycle ^= ((event_get+1) & SCU_MAX_EVENTS) <<
+ (SMU_COMPLETION_QUEUE_GET_EVENT_CYCLE_BIT_SHIFT - SCU_MAX_EVENTS_SHIFT);
+diff --git a/drivers/scsi/isci/phy.c b/drivers/scsi/isci/phy.c
+index 79313a7..430fc8f 100644
+--- a/drivers/scsi/isci/phy.c
++++ b/drivers/scsi/isci/phy.c
+@@ -104,6 +104,7 @@ sci_phy_link_layer_initialization(struct isci_phy *iphy,
+ u32 parity_count = 0;
+ u32 llctl, link_rate;
+ u32 clksm_value = 0;
++ u32 sp_timeouts = 0;
+
+ iphy->link_layer_registers = reg;
+
+@@ -211,6 +212,18 @@ sci_phy_link_layer_initialization(struct isci_phy *iphy,
+ llctl |= SCU_SAS_LLCTL_GEN_VAL(MAX_LINK_RATE, link_rate);
+ writel(llctl, &iphy->link_layer_registers->link_layer_control);
+
++ sp_timeouts = readl(&iphy->link_layer_registers->sas_phy_timeouts);
++
++ /* Clear the default 0x36 (54us) RATE_CHANGE timeout value. */
++ sp_timeouts &= ~SCU_SAS_PHYTOV_GEN_VAL(RATE_CHANGE, 0xFF);
++
++ /* Set RATE_CHANGE timeout value to 0x3B (59us). This ensures SCU can
++ * lock with 3Gb drive when SCU max rate is set to 1.5Gb.
++ */
++ sp_timeouts |= SCU_SAS_PHYTOV_GEN_VAL(RATE_CHANGE, 0x3B);
++
++ writel(sp_timeouts, &iphy->link_layer_registers->sas_phy_timeouts);
++
+ if (is_a2(ihost->pdev)) {
+ /* Program the max ARB time for the PHY to 700us so we inter-operate with
+ * the PMC expander which shuts down PHYs if the expander PHY generates too
+diff --git a/drivers/scsi/isci/registers.h b/drivers/scsi/isci/registers.h
+index 9b266c7..00afc73 100644
+--- a/drivers/scsi/isci/registers.h
++++ b/drivers/scsi/isci/registers.h
+@@ -1299,6 +1299,18 @@ struct scu_transport_layer_registers {
+ #define SCU_AFE_XCVRCR_OFFSET 0x00DC
+ #define SCU_AFE_LUTCR_OFFSET 0x00E0
+
++#define SCU_SAS_PHY_TIMER_TIMEOUT_VALUES_ALIGN_DETECTION_SHIFT (0UL)
++#define SCU_SAS_PHY_TIMER_TIMEOUT_VALUES_ALIGN_DETECTION_MASK (0x000000FFUL)
++#define SCU_SAS_PHY_TIMER_TIMEOUT_VALUES_HOT_PLUG_SHIFT (8UL)
++#define SCU_SAS_PHY_TIMER_TIMEOUT_VALUES_HOT_PLUG_MASK (0x0000FF00UL)
++#define SCU_SAS_PHY_TIMER_TIMEOUT_VALUES_COMSAS_DETECTION_SHIFT (16UL)
++#define SCU_SAS_PHY_TIMER_TIMEOUT_VALUES_COMSAS_DETECTION_MASK (0x00FF0000UL)
++#define SCU_SAS_PHY_TIMER_TIMEOUT_VALUES_RATE_CHANGE_SHIFT (24UL)
++#define SCU_SAS_PHY_TIMER_TIMEOUT_VALUES_RATE_CHANGE_MASK (0xFF000000UL)
++
++#define SCU_SAS_PHYTOV_GEN_VAL(name, value) \
++ SCU_GEN_VALUE(SCU_SAS_PHY_TIMER_TIMEOUT_VALUES_##name, value)
++
+ #define SCU_SAS_LINK_LAYER_CONTROL_MAX_LINK_RATE_SHIFT (0)
+ #define SCU_SAS_LINK_LAYER_CONTROL_MAX_LINK_RATE_MASK (0x00000003)
+ #define SCU_SAS_LINK_LAYER_CONTROL_MAX_LINK_RATE_GEN1 (0)
+diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
+index a46e07a..b5d3a8c 100644
+--- a/drivers/scsi/isci/request.c
++++ b/drivers/scsi/isci/request.c
+@@ -732,12 +732,20 @@ sci_io_request_terminate(struct isci_request *ireq)
+ sci_change_state(&ireq->sm, SCI_REQ_ABORTING);
+ return SCI_SUCCESS;
+ case SCI_REQ_TASK_WAIT_TC_RESP:
++ /* The task frame was already confirmed to have been
++ * sent by the SCU HW. Since the state machine is
++ * now only waiting for the task response itself,
++ * abort the request and complete it immediately
++ * and don't wait for the task response.
++ */
+ sci_change_state(&ireq->sm, SCI_REQ_ABORTING);
+ sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
+ return SCI_SUCCESS;
+ case SCI_REQ_ABORTING:
+- sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
+- return SCI_SUCCESS;
++ /* If a request has a termination requested twice, return
++ * a failure indication, since HW confirmation of the first
++ * abort is still outstanding.
++ */
+ case SCI_REQ_COMPLETED:
+ default:
+ dev_warn(&ireq->owning_controller->pdev->dev,
+@@ -2399,22 +2407,19 @@ static void isci_task_save_for_upper_layer_completion(
+ }
+ }
+
+-static void isci_request_process_stp_response(struct sas_task *task,
+- void *response_buffer)
++static void isci_process_stp_response(struct sas_task *task, struct dev_to_host_fis *fis)
+ {
+- struct dev_to_host_fis *d2h_reg_fis = response_buffer;
+ struct task_status_struct *ts = &task->task_status;
+ struct ata_task_resp *resp = (void *)&ts->buf[0];
+
+- resp->frame_len = le16_to_cpu(*(__le16 *)(response_buffer + 6));
+- memcpy(&resp->ending_fis[0], response_buffer + 16, 24);
++ resp->frame_len = sizeof(*fis);
++ memcpy(resp->ending_fis, fis, sizeof(*fis));
+ ts->buf_valid_size = sizeof(*resp);
+
+- /**
+- * If the device fault bit is set in the status register, then
++ /* If the device fault bit is set in the status register, then
+ * set the sense data and return.
+ */
+- if (d2h_reg_fis->status & ATA_DF)
++ if (fis->status & ATA_DF)
+ ts->stat = SAS_PROTO_RESPONSE;
+ else
+ ts->stat = SAM_STAT_GOOD;
+@@ -2428,7 +2433,6 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
+ {
+ struct sas_task *task = isci_request_access_task(request);
+ struct ssp_response_iu *resp_iu;
+- void *resp_buf;
+ unsigned long task_flags;
+ struct isci_remote_device *idev = isci_lookup_device(task->dev);
+ enum service_response response = SAS_TASK_UNDELIVERED;
+@@ -2565,9 +2569,7 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
+ task);
+
+ if (sas_protocol_ata(task->task_proto)) {
+- resp_buf = &request->stp.rsp;
+- isci_request_process_stp_response(task,
+- resp_buf);
++ isci_process_stp_response(task, &request->stp.rsp);
+ } else if (SAS_PROTOCOL_SSP == task->task_proto) {
+
+ /* crack the iu response buffer. */
+diff --git a/drivers/scsi/isci/unsolicited_frame_control.c b/drivers/scsi/isci/unsolicited_frame_control.c
+index e9e1e2a..16f88ab 100644
+--- a/drivers/scsi/isci/unsolicited_frame_control.c
++++ b/drivers/scsi/isci/unsolicited_frame_control.c
+@@ -72,7 +72,7 @@ int sci_unsolicited_frame_control_construct(struct isci_host *ihost)
+ */
+ buf_len = SCU_MAX_UNSOLICITED_FRAMES * SCU_UNSOLICITED_FRAME_BUFFER_SIZE;
+ header_len = SCU_MAX_UNSOLICITED_FRAMES * sizeof(struct scu_unsolicited_frame_header);
+- size = buf_len + header_len + SCU_MAX_UNSOLICITED_FRAMES * sizeof(dma_addr_t);
++ size = buf_len + header_len + SCU_MAX_UNSOLICITED_FRAMES * sizeof(uf_control->address_table.array[0]);
+
+ /*
+ * The Unsolicited Frame buffers are set at the start of the UF
+diff --git a/drivers/scsi/isci/unsolicited_frame_control.h b/drivers/scsi/isci/unsolicited_frame_control.h
+index 31cb950..75d8966 100644
+--- a/drivers/scsi/isci/unsolicited_frame_control.h
++++ b/drivers/scsi/isci/unsolicited_frame_control.h
+@@ -214,7 +214,7 @@ struct sci_uf_address_table_array {
+ * starting address of the UF address table.
+ * 64-bit pointers are required by the hardware.
+ */
+- dma_addr_t *array;
++ u64 *array;
+
+ /**
+ * This field specifies the physical address location for the UF
+diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
+index 3df9853..7724414 100644
+--- a/drivers/scsi/iscsi_tcp.c
++++ b/drivers/scsi/iscsi_tcp.c
+@@ -107,10 +107,12 @@ static int iscsi_sw_tcp_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
+ * If the socket is in CLOSE or CLOSE_WAIT we should
+ * not close the connection if there is still some
+ * data pending.
++ *
++ * Must be called with sk_callback_lock.
+ */
+ static inline int iscsi_sw_sk_state_check(struct sock *sk)
+ {
+- struct iscsi_conn *conn = (struct iscsi_conn*)sk->sk_user_data;
++ struct iscsi_conn *conn = sk->sk_user_data;
+
+ if ((sk->sk_state == TCP_CLOSE_WAIT || sk->sk_state == TCP_CLOSE) &&
+ !atomic_read(&sk->sk_rmem_alloc)) {
+@@ -123,11 +125,17 @@ static inline int iscsi_sw_sk_state_check(struct sock *sk)
+
+ static void iscsi_sw_tcp_data_ready(struct sock *sk, int flag)
+ {
+- struct iscsi_conn *conn = sk->sk_user_data;
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
++ struct iscsi_conn *conn;
++ struct iscsi_tcp_conn *tcp_conn;
+ read_descriptor_t rd_desc;
+
+ read_lock(&sk->sk_callback_lock);
++ conn = sk->sk_user_data;
++ if (!conn) {
++ read_unlock(&sk->sk_callback_lock);
++ return;
++ }
++ tcp_conn = conn->dd_data;
+
+ /*
+ * Use rd_desc to pass 'conn' to iscsi_tcp_recv.
+@@ -141,11 +149,10 @@ static void iscsi_sw_tcp_data_ready(struct sock *sk, int flag)
+
+ iscsi_sw_sk_state_check(sk);
+
+- read_unlock(&sk->sk_callback_lock);
+-
+ /* If we had to (atomically) map a highmem page,
+ * unmap it now. */
+ iscsi_tcp_segment_unmap(&tcp_conn->in.segment);
++ read_unlock(&sk->sk_callback_lock);
+ }
+
+ static void iscsi_sw_tcp_state_change(struct sock *sk)
+@@ -157,8 +164,11 @@ static void iscsi_sw_tcp_state_change(struct sock *sk)
+ void (*old_state_change)(struct sock *);
+
+ read_lock(&sk->sk_callback_lock);
+-
+- conn = (struct iscsi_conn*)sk->sk_user_data;
++ conn = sk->sk_user_data;
++ if (!conn) {
++ read_unlock(&sk->sk_callback_lock);
++ return;
++ }
+ session = conn->session;
+
+ iscsi_sw_sk_state_check(sk);
+@@ -178,11 +188,25 @@ static void iscsi_sw_tcp_state_change(struct sock *sk)
+ **/
+ static void iscsi_sw_tcp_write_space(struct sock *sk)
+ {
+- struct iscsi_conn *conn = (struct iscsi_conn*)sk->sk_user_data;
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+- struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
++ struct iscsi_conn *conn;
++ struct iscsi_tcp_conn *tcp_conn;
++ struct iscsi_sw_tcp_conn *tcp_sw_conn;
++ void (*old_write_space)(struct sock *);
++
++ read_lock_bh(&sk->sk_callback_lock);
++ conn = sk->sk_user_data;
++ if (!conn) {
++ read_unlock_bh(&sk->sk_callback_lock);
++ return;
++ }
++
++ tcp_conn = conn->dd_data;
++ tcp_sw_conn = tcp_conn->dd_data;
++ old_write_space = tcp_sw_conn->old_write_space;
++ read_unlock_bh(&sk->sk_callback_lock);
++
++ old_write_space(sk);
+
+- tcp_sw_conn->old_write_space(sk);
+ ISCSI_SW_TCP_DBG(conn, "iscsi_write_space\n");
+ iscsi_conn_queue_work(conn);
+ }
+@@ -592,20 +616,17 @@ static void iscsi_sw_tcp_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
+ /* userspace may have goofed up and not bound us */
+ if (!sock)
+ return;
+- /*
+- * Make sure our recv side is stopped.
+- * Older tools called conn stop before ep_disconnect
+- * so IO could still be coming in.
+- */
+- write_lock_bh(&tcp_sw_conn->sock->sk->sk_callback_lock);
+- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx);
+- write_unlock_bh(&tcp_sw_conn->sock->sk->sk_callback_lock);
+
+ sock->sk->sk_err = EIO;
+ wake_up_interruptible(sk_sleep(sock->sk));
+
+- iscsi_conn_stop(cls_conn, flag);
++ /* stop xmit side */
++ iscsi_suspend_tx(conn);
++
++ /* stop recv side and release socket */
+ iscsi_sw_tcp_release_conn(conn);
++
++ iscsi_conn_stop(cls_conn, flag);
+ }
+
+ static int
+diff --git a/drivers/scsi/libfc/fc_rport.c b/drivers/scsi/libfc/fc_rport.c
+index 49e1ccc..3b66937 100644
+--- a/drivers/scsi/libfc/fc_rport.c
++++ b/drivers/scsi/libfc/fc_rport.c
+@@ -801,6 +801,20 @@ static void fc_rport_recv_flogi_req(struct fc_lport *lport,
+
+ switch (rdata->rp_state) {
+ case RPORT_ST_INIT:
++ /*
++ * A FLOGI request was received while the rport is still in the
++ * INIT state (i.e. it has not transitioned to FLOGI because the
++ * fc_rport timeout function didn't trigger, or this end hasn't
++ * yet received a beacon from the other end). Only in that case,
++ * allow the rport state machine to continue; otherwise fall
++ * through, which causes a reject response to be sent.
++ * NOTE: not checking for FIP->state such as VNMP_UP or
++ * VNMP_CLAIM, because if the FIP state were not one of those,
++ * the rport wouldn't have been created and 'rport_lookup' would
++ * have failed anyway in that case.
++ */
++ if (lport->point_to_multipoint)
++ break;
+ case RPORT_ST_DELETE:
+ mutex_unlock(&rdata->rp_mutex);
+ rjt_data.reason = ELS_RJT_FIP;
+diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
+index e98ae33..09b232f 100644
+--- a/drivers/scsi/libiscsi_tcp.c
++++ b/drivers/scsi/libiscsi_tcp.c
+@@ -1084,7 +1084,8 @@ iscsi_tcp_conn_setup(struct iscsi_cls_session *cls_session, int dd_data_size,
+ struct iscsi_cls_conn *cls_conn;
+ struct iscsi_tcp_conn *tcp_conn;
+
+- cls_conn = iscsi_conn_setup(cls_session, sizeof(*tcp_conn), conn_idx);
++ cls_conn = iscsi_conn_setup(cls_session,
++ sizeof(*tcp_conn) + dd_data_size, conn_idx);
+ if (!cls_conn)
+ return NULL;
+ conn = cls_conn->dd_data;
+@@ -1096,22 +1097,13 @@ iscsi_tcp_conn_setup(struct iscsi_cls_session *cls_session, int dd_data_size,
+
+ tcp_conn = conn->dd_data;
+ tcp_conn->iscsi_conn = conn;
+-
+- tcp_conn->dd_data = kzalloc(dd_data_size, GFP_KERNEL);
+- if (!tcp_conn->dd_data) {
+- iscsi_conn_teardown(cls_conn);
+- return NULL;
+- }
++ tcp_conn->dd_data = conn->dd_data + sizeof(*tcp_conn);
+ return cls_conn;
+ }
+ EXPORT_SYMBOL_GPL(iscsi_tcp_conn_setup);
+
+ void iscsi_tcp_conn_teardown(struct iscsi_cls_conn *cls_conn)
+ {
+- struct iscsi_conn *conn = cls_conn->dd_data;
+- struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
+-
+- kfree(tcp_conn->dd_data);
+ iscsi_conn_teardown(cls_conn);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_tcp_conn_teardown);
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index f84084b..c9e3dc0 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -1721,7 +1721,7 @@ static int sas_find_bcast_dev(struct domain_device *dev,
+ list_for_each_entry(ch, &ex->children, siblings) {
+ if (ch->dev_type == EDGE_DEV || ch->dev_type == FANOUT_DEV) {
+ res = sas_find_bcast_dev(ch, src_dev);
+- if (src_dev)
++ if (*src_dev)
+ return res;
+ }
+ }
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 8ec2c86..0441361 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -20,6 +20,11 @@
+ *******************************************************************/
+
+ #include <scsi/scsi_host.h>
++
++#if defined(CONFIG_DEBUG_FS) && !defined(CONFIG_SCSI_LPFC_DEBUG_FS)
++#define CONFIG_SCSI_LPFC_DEBUG_FS
++#endif
++
+ struct lpfc_sli2_slim;
+
+ #define LPFC_PCI_DEV_LP 0x1
+@@ -465,9 +470,10 @@ enum intr_type_t {
+ struct unsol_rcv_ct_ctx {
+ uint32_t ctxt_id;
+ uint32_t SID;
+- uint32_t oxid;
+ uint32_t flags;
+ #define UNSOL_VALID 0x00000001
++ uint16_t oxid;
++ uint16_t rxid;
+ };
+
+ #define LPFC_USER_LINK_SPEED_AUTO 0 /* auto select (default)*/
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index 135a53b..80ca11c 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -755,6 +755,47 @@ lpfc_issue_reset(struct device *dev, struct device_attribute *attr,
+ }
+
+ /**
++ * lpfc_sli4_pdev_status_reg_wait - Wait for pdev status register readiness
++ * @phba: lpfc_hba pointer.
++ *
++ * Description:
++ * Wait on the sliport status register of an SLI4 interface type-2 device
++ * for readiness after performing a firmware reset.
++ *
++ * Returns:
++ * zero for success
++ **/
++static int
++lpfc_sli4_pdev_status_reg_wait(struct lpfc_hba *phba)
++{
++ struct lpfc_register portstat_reg;
++ int i;
++
++
++ lpfc_readl(phba->sli4_hba.u.if_type2.STATUSregaddr,
++ &portstat_reg.word0);
++
++ /* wait for the SLI port firmware ready after firmware reset */
++ for (i = 0; i < LPFC_FW_RESET_MAXIMUM_WAIT_10MS_CNT; i++) {
++ msleep(10);
++ lpfc_readl(phba->sli4_hba.u.if_type2.STATUSregaddr,
++ &portstat_reg.word0);
++ if (!bf_get(lpfc_sliport_status_err, &portstat_reg))
++ continue;
++ if (!bf_get(lpfc_sliport_status_rn, &portstat_reg))
++ continue;
++ if (!bf_get(lpfc_sliport_status_rdy, &portstat_reg))
++ continue;
++ break;
++ }
++
++ if (i < LPFC_FW_RESET_MAXIMUM_WAIT_10MS_CNT)
++ return 0;
++ else
++ return -EIO;
++}
++
++/**
+ * lpfc_sli4_pdev_reg_request - Request physical dev to perform a register acc
+ * @phba: lpfc_hba pointer.
+ *
+@@ -805,7 +846,10 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
+ readl(phba->sli4_hba.conf_regs_memmap_p + LPFC_CTL_PDEV_CTL_OFFSET);
+
+ /* delay driver action following IF_TYPE_2 reset */
+- msleep(100);
++ rc = lpfc_sli4_pdev_status_reg_wait(phba);
++
++ if (rc)
++ return -EIO;
+
+ init_completion(&online_compl);
+ rc = lpfc_workq_post_event(phba, &status, &online_compl,
+@@ -895,6 +939,10 @@ lpfc_board_mode_store(struct device *dev, struct device_attribute *attr,
+
+ if (!phba->cfg_enable_hba_reset)
+ return -EACCES;
++
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
++ "3050 lpfc_board_mode set to %s\n", buf);
++
+ init_completion(&online_compl);
+
+ if(strncmp(buf, "online", sizeof("online") - 1) == 0) {
+@@ -1290,6 +1338,10 @@ lpfc_poll_store(struct device *dev, struct device_attribute *attr,
+ if (phba->sli_rev == LPFC_SLI_REV4)
+ val = 0;
+
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
++ "3051 lpfc_poll changed from %d to %d\n",
++ phba->cfg_poll, val);
++
+ spin_lock_irq(&phba->hbalock);
+
+ old_val = phba->cfg_poll;
+@@ -1414,80 +1466,10 @@ lpfc_sriov_hw_max_virtfn_show(struct device *dev,
+ struct Scsi_Host *shost = class_to_shost(dev);
+ struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ struct lpfc_hba *phba = vport->phba;
+- struct pci_dev *pdev = phba->pcidev;
+- union lpfc_sli4_cfg_shdr *shdr;
+- uint32_t shdr_status, shdr_add_status;
+- LPFC_MBOXQ_t *mboxq;
+- struct lpfc_mbx_get_prof_cfg *get_prof_cfg;
+- struct lpfc_rsrc_desc_pcie *desc;
+- uint32_t max_nr_virtfn;
+- uint32_t desc_count;
+- int length, rc, i;
+-
+- if ((phba->sli_rev < LPFC_SLI_REV4) ||
+- (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) !=
+- LPFC_SLI_INTF_IF_TYPE_2))
+- return -EPERM;
+-
+- if (!pdev->is_physfn)
+- return snprintf(buf, PAGE_SIZE, "%d\n", 0);
+-
+- mboxq = (LPFC_MBOXQ_t *)mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+- if (!mboxq)
+- return -ENOMEM;
++ uint16_t max_nr_virtfn;
+
+- /* get the maximum number of virtfn support by physfn */
+- length = (sizeof(struct lpfc_mbx_get_prof_cfg) -
+- sizeof(struct lpfc_sli4_cfg_mhdr));
+- lpfc_sli4_config(phba, mboxq, LPFC_MBOX_SUBSYSTEM_COMMON,
+- LPFC_MBOX_OPCODE_GET_PROFILE_CONFIG,
+- length, LPFC_SLI4_MBX_EMBED);
+- shdr = (union lpfc_sli4_cfg_shdr *)
+- &mboxq->u.mqe.un.sli4_config.header.cfg_shdr;
+- bf_set(lpfc_mbox_hdr_pf_num, &shdr->request,
+- phba->sli4_hba.iov.pf_number + 1);
+-
+- get_prof_cfg = &mboxq->u.mqe.un.get_prof_cfg;
+- bf_set(lpfc_mbx_get_prof_cfg_prof_tp, &get_prof_cfg->u.request,
+- LPFC_CFG_TYPE_CURRENT_ACTIVE);
+-
+- rc = lpfc_sli_issue_mbox_wait(phba, mboxq,
+- lpfc_mbox_tmo_val(phba, MBX_SLI4_CONFIG));
+-
+- if (rc != MBX_TIMEOUT) {
+- /* check return status */
+- shdr_status = bf_get(lpfc_mbox_hdr_status, &shdr->response);
+- shdr_add_status = bf_get(lpfc_mbox_hdr_add_status,
+- &shdr->response);
+- if (shdr_status || shdr_add_status || rc)
+- goto error_out;
+-
+- } else
+- goto error_out;
+-
+- desc_count = get_prof_cfg->u.response.prof_cfg.rsrc_desc_count;
+-
+- for (i = 0; i < LPFC_RSRC_DESC_MAX_NUM; i++) {
+- desc = (struct lpfc_rsrc_desc_pcie *)
+- &get_prof_cfg->u.response.prof_cfg.desc[i];
+- if (LPFC_RSRC_DESC_TYPE_PCIE ==
+- bf_get(lpfc_rsrc_desc_pcie_type, desc)) {
+- max_nr_virtfn = bf_get(lpfc_rsrc_desc_pcie_nr_virtfn,
+- desc);
+- break;
+- }
+- }
+-
+- if (i < LPFC_RSRC_DESC_MAX_NUM) {
+- if (rc != MBX_TIMEOUT)
+- mempool_free(mboxq, phba->mbox_mem_pool);
+- return snprintf(buf, PAGE_SIZE, "%d\n", max_nr_virtfn);
+- }
+-
+-error_out:
+- if (rc != MBX_TIMEOUT)
+- mempool_free(mboxq, phba->mbox_mem_pool);
+- return -EIO;
++ max_nr_virtfn = lpfc_sli_sriov_nr_virtfn_get(phba);
++ return snprintf(buf, PAGE_SIZE, "%d\n", max_nr_virtfn);
+ }
+
+ /**
+@@ -1605,6 +1587,9 @@ static int \
+ lpfc_##attr##_set(struct lpfc_hba *phba, uint val) \
+ { \
+ if (val >= minval && val <= maxval) {\
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT, \
++ "3052 lpfc_" #attr " changed from %d to %d\n", \
++ phba->cfg_##attr, val); \
+ phba->cfg_##attr = val;\
+ return 0;\
+ }\
+@@ -1762,6 +1747,9 @@ static int \
+ lpfc_##attr##_set(struct lpfc_vport *vport, uint val) \
+ { \
+ if (val >= minval && val <= maxval) {\
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, \
++ "3053 lpfc_" #attr " changed from %d to %d\n", \
++ vport->cfg_##attr, val); \
+ vport->cfg_##attr = val;\
+ return 0;\
+ }\
+@@ -2678,6 +2666,9 @@ lpfc_topology_store(struct device *dev, struct device_attribute *attr,
+ if (nolip)
+ return strlen(buf);
+
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
++ "3054 lpfc_topology changed from %d to %d\n",
++ prev_val, val);
+ err = lpfc_issue_lip(lpfc_shost_from_vport(phba->pport));
+ if (err) {
+ phba->cfg_topology = prev_val;
+@@ -3101,6 +3092,10 @@ lpfc_link_speed_store(struct device *dev, struct device_attribute *attr,
+ if (sscanf(val_buf, "%i", &val) != 1)
+ return -EINVAL;
+
++ lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
++ "3055 lpfc_link_speed changed from %d to %d %s\n",
++ phba->cfg_link_speed, val, nolip ? "(nolip)" : "(lip)");
++
+ if (((val == LPFC_USER_LINK_SPEED_1G) && !(phba->lmt & LMT_1Gb)) ||
+ ((val == LPFC_USER_LINK_SPEED_2G) && !(phba->lmt & LMT_2Gb)) ||
+ ((val == LPFC_USER_LINK_SPEED_4G) && !(phba->lmt & LMT_4Gb)) ||
+@@ -3678,7 +3673,9 @@ LPFC_ATTR_R(enable_bg, 0, 0, 1, "Enable BlockGuard Support");
+ # - Default will result in registering capabilities for all profiles.
+ #
+ */
+-unsigned int lpfc_prot_mask = SHOST_DIF_TYPE1_PROTECTION;
++unsigned int lpfc_prot_mask = SHOST_DIF_TYPE1_PROTECTION |
++ SHOST_DIX_TYPE0_PROTECTION |
++ SHOST_DIX_TYPE1_PROTECTION;
+
+ module_param(lpfc_prot_mask, uint, S_IRUGO);
+ MODULE_PARM_DESC(lpfc_prot_mask, "host protection mask");
+diff --git a/drivers/scsi/lpfc/lpfc_bsg.c b/drivers/scsi/lpfc/lpfc_bsg.c
+index 7fb0ba4..f46378f 100644
+--- a/drivers/scsi/lpfc/lpfc_bsg.c
++++ b/drivers/scsi/lpfc/lpfc_bsg.c
+@@ -960,8 +960,10 @@ lpfc_bsg_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ evt_dat->immed_dat].oxid,
+ phba->ct_ctx[
+ evt_dat->immed_dat].SID);
++ phba->ct_ctx[evt_dat->immed_dat].rxid =
++ piocbq->iocb.ulpContext;
+ phba->ct_ctx[evt_dat->immed_dat].oxid =
+- piocbq->iocb.ulpContext;
++ piocbq->iocb.unsli3.rcvsli3.ox_id;
+ phba->ct_ctx[evt_dat->immed_dat].SID =
+ piocbq->iocb.un.rcvels.remoteID;
+ phba->ct_ctx[evt_dat->immed_dat].flags = UNSOL_VALID;
+@@ -1312,7 +1314,8 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct fc_bsg_job *job, uint32_t tag,
+ rc = IOCB_ERROR;
+ goto issue_ct_rsp_exit;
+ }
+- icmd->ulpContext = phba->ct_ctx[tag].oxid;
++ icmd->ulpContext = phba->ct_ctx[tag].rxid;
++ icmd->unsli3.rcvsli3.ox_id = phba->ct_ctx[tag].oxid;
+ ndlp = lpfc_findnode_did(phba->pport, phba->ct_ctx[tag].SID);
+ if (!ndlp) {
+ lpfc_printf_log(phba, KERN_WARNING, LOG_ELS,
+@@ -1337,9 +1340,7 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct fc_bsg_job *job, uint32_t tag,
+ goto issue_ct_rsp_exit;
+ }
+
+- icmd->un.ulpWord[3] = ndlp->nlp_rpi;
+- if (phba->sli_rev == LPFC_SLI_REV4)
+- icmd->ulpContext =
++ icmd->un.ulpWord[3] =
+ phba->sli4_hba.rpi_ids[ndlp->nlp_rpi];
+
+ /* The exchange is done, mark the entry as invalid */
+@@ -1351,8 +1352,8 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct fc_bsg_job *job, uint32_t tag,
+
+ /* Xmit CT response on exchange <xid> */
+ lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
+- "2722 Xmit CT response on exchange x%x Data: x%x x%x\n",
+- icmd->ulpContext, icmd->ulpIoTag, phba->link_state);
++ "2722 Xmit CT response on exchange x%x Data: x%x x%x x%x\n",
++ icmd->ulpContext, icmd->ulpIoTag, tag, phba->link_state);
+
+ ctiocb->iocb_cmpl = NULL;
+ ctiocb->iocb_flag |= LPFC_IO_LIBDFC;
+@@ -1471,13 +1472,12 @@ send_mgmt_rsp_exit:
+ /**
+ * lpfc_bsg_diag_mode_enter - process preparing into device diag loopback mode
+ * @phba: Pointer to HBA context object.
+- * @job: LPFC_BSG_VENDOR_DIAG_MODE
+ *
+ * This function is responsible for preparing driver for diag loopback
+ * on device.
+ */
+ static int
+-lpfc_bsg_diag_mode_enter(struct lpfc_hba *phba, struct fc_bsg_job *job)
++lpfc_bsg_diag_mode_enter(struct lpfc_hba *phba)
+ {
+ struct lpfc_vport **vports;
+ struct Scsi_Host *shost;
+@@ -1521,7 +1521,6 @@ lpfc_bsg_diag_mode_enter(struct lpfc_hba *phba, struct fc_bsg_job *job)
+ /**
+ * lpfc_bsg_diag_mode_exit - exit process from device diag loopback mode
+ * @phba: Pointer to HBA context object.
+- * @job: LPFC_BSG_VENDOR_DIAG_MODE
+ *
+ * This function is responsible for driver exit processing of setting up
+ * diag loopback mode on device.
+@@ -1586,7 +1585,7 @@ lpfc_sli3_bsg_diag_loopback_mode(struct lpfc_hba *phba, struct fc_bsg_job *job)
+ goto job_error;
+ }
+
+- rc = lpfc_bsg_diag_mode_enter(phba, job);
++ rc = lpfc_bsg_diag_mode_enter(phba);
+ if (rc)
+ goto job_error;
+
+@@ -1758,7 +1757,7 @@ lpfc_sli4_bsg_diag_loopback_mode(struct lpfc_hba *phba, struct fc_bsg_job *job)
+ goto job_error;
+ }
+
+- rc = lpfc_bsg_diag_mode_enter(phba, job);
++ rc = lpfc_bsg_diag_mode_enter(phba);
+ if (rc)
+ goto job_error;
+
+@@ -1982,7 +1981,7 @@ lpfc_sli4_bsg_link_diag_test(struct fc_bsg_job *job)
+ goto job_error;
+ }
+
+- rc = lpfc_bsg_diag_mode_enter(phba, job);
++ rc = lpfc_bsg_diag_mode_enter(phba);
+ if (rc)
+ goto job_error;
+
+@@ -3511,7 +3510,7 @@ lpfc_bsg_sli_cfg_read_cmd_ext(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
+ "2947 Issued SLI_CONFIG ext-buffer "
+ "maibox command, rc:x%x\n", rc);
+- return 1;
++ return SLI_CONFIG_HANDLED;
+ }
+ lpfc_printf_log(phba, KERN_ERR, LOG_LIBDFC,
+ "2948 Failed to issue SLI_CONFIG ext-buffer "
+@@ -3549,7 +3548,7 @@ lpfc_bsg_sli_cfg_write_cmd_ext(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ LPFC_MBOXQ_t *pmboxq = NULL;
+ MAILBOX_t *pmb;
+ uint8_t *mbx;
+- int rc = 0, i;
++ int rc = SLI_CONFIG_NOT_HANDLED, i;
+
+ mbox_req =
+ (struct dfc_mbox_req *)job->request->rqst_data.h_vendor.vendor_cmd;
+@@ -3660,7 +3659,7 @@ lpfc_bsg_sli_cfg_write_cmd_ext(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
+ "2955 Issued SLI_CONFIG ext-buffer "
+ "maibox command, rc:x%x\n", rc);
+- return 1;
++ return SLI_CONFIG_HANDLED;
+ }
+ lpfc_printf_log(phba, KERN_ERR, LOG_LIBDFC,
+ "2956 Failed to issue SLI_CONFIG ext-buffer "
+@@ -3668,6 +3667,11 @@ lpfc_bsg_sli_cfg_write_cmd_ext(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ rc = -EPIPE;
+ }
+
++ /* wait for additional external buffers */
++ job->reply->result = 0;
++ job->job_done(job);
++ return SLI_CONFIG_HANDLED;
++
+ job_error:
+ if (pmboxq)
+ mempool_free(pmboxq, phba->mbox_mem_pool);
+@@ -3959,7 +3963,7 @@ lpfc_bsg_write_ebuf_set(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
+ "2969 Issued SLI_CONFIG ext-buffer "
+ "maibox command, rc:x%x\n", rc);
+- return 1;
++ return SLI_CONFIG_HANDLED;
+ }
+ lpfc_printf_log(phba, KERN_ERR, LOG_LIBDFC,
+ "2970 Failed to issue SLI_CONFIG ext-buffer "
+@@ -4039,14 +4043,14 @@ lpfc_bsg_handle_sli_cfg_ext(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ struct lpfc_dmabuf *dmabuf)
+ {
+ struct dfc_mbox_req *mbox_req;
+- int rc;
++ int rc = SLI_CONFIG_NOT_HANDLED;
+
+ mbox_req =
+ (struct dfc_mbox_req *)job->request->rqst_data.h_vendor.vendor_cmd;
+
+ /* mbox command with/without single external buffer */
+ if (mbox_req->extMboxTag == 0 && mbox_req->extSeqNum == 0)
+- return SLI_CONFIG_NOT_HANDLED;
++ return rc;
+
+ /* mbox command and first external buffer */
+ if (phba->mbox_ext_buf_ctx.state == LPFC_BSG_MBOX_IDLE) {
+@@ -4249,7 +4253,7 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ * mailbox extension size
+ */
+ if ((transmit_length > receive_length) ||
+- (transmit_length > MAILBOX_EXT_SIZE)) {
++ (transmit_length > BSG_MBOX_SIZE - sizeof(MAILBOX_t))) {
+ rc = -ERANGE;
+ goto job_done;
+ }
+@@ -4272,7 +4276,7 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ /* receive length cannot be greater than mailbox
+ * extension size
+ */
+- if (receive_length > MAILBOX_EXT_SIZE) {
++ if (receive_length > BSG_MBOX_SIZE - sizeof(MAILBOX_t)) {
+ rc = -ERANGE;
+ goto job_done;
+ }
+@@ -4306,7 +4310,8 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ bde = (struct ulp_bde64 *)&pmb->un.varWords[4];
+
+ /* bde size cannot be greater than mailbox ext size */
+- if (bde->tus.f.bdeSize > MAILBOX_EXT_SIZE) {
++ if (bde->tus.f.bdeSize >
++ BSG_MBOX_SIZE - sizeof(MAILBOX_t)) {
+ rc = -ERANGE;
+ goto job_done;
+ }
+@@ -4332,7 +4337,8 @@ lpfc_bsg_issue_mbox(struct lpfc_hba *phba, struct fc_bsg_job *job,
+ * mailbox extension size
+ */
+ if ((receive_length == 0) ||
+- (receive_length > MAILBOX_EXT_SIZE)) {
++ (receive_length >
++ BSG_MBOX_SIZE - sizeof(MAILBOX_t))) {
+ rc = -ERANGE;
+ goto job_done;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
+index fc20c24..1e41af8 100644
+--- a/drivers/scsi/lpfc/lpfc_crtn.h
++++ b/drivers/scsi/lpfc/lpfc_crtn.h
+@@ -432,6 +432,7 @@ void lpfc_handle_rrq_active(struct lpfc_hba *);
+ int lpfc_send_rrq(struct lpfc_hba *, struct lpfc_node_rrq *);
+ int lpfc_set_rrq_active(struct lpfc_hba *, struct lpfc_nodelist *,
+ uint16_t, uint16_t, uint16_t);
++uint16_t lpfc_sli4_xri_inrange(struct lpfc_hba *, uint16_t);
+ void lpfc_cleanup_wt_rrqs(struct lpfc_hba *);
+ void lpfc_cleanup_vports_rrqs(struct lpfc_vport *, struct lpfc_nodelist *);
+ struct lpfc_node_rrq *lpfc_get_active_rrq(struct lpfc_vport *, uint16_t,
+@@ -439,3 +440,4 @@ struct lpfc_node_rrq *lpfc_get_active_rrq(struct lpfc_vport *, uint16_t,
+ int lpfc_wr_object(struct lpfc_hba *, struct list_head *, uint32_t, uint32_t *);
+ /* functions to support SR-IOV */
+ int lpfc_sli_probe_sriov_nr_virtfn(struct lpfc_hba *, int);
++uint16_t lpfc_sli_sriov_nr_virtfn_get(struct lpfc_hba *);
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 32a0845..1725b81 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -647,21 +647,15 @@ lpfc_cmpl_els_flogi_fabric(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ }
+ lpfc_cleanup_pending_mbox(vport);
+
+- if (phba->sli_rev == LPFC_SLI_REV4)
++ if (phba->sli_rev == LPFC_SLI_REV4) {
+ lpfc_sli4_unreg_all_rpis(vport);
+-
+- if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
+ lpfc_mbx_unreg_vpi(vport);
+ spin_lock_irq(shost->host_lock);
+ vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
+- spin_unlock_irq(shost->host_lock);
+- }
+- /*
+- * If VPI is unreged, driver need to do INIT_VPI
+- * before re-registering
+- */
+- if (phba->sli_rev == LPFC_SLI_REV4) {
+- spin_lock_irq(shost->host_lock);
++ /*
++ * If VPI is unreged, driver need to do INIT_VPI
++ * before re-registering
++ */
+ vport->fc_flag |= FC_VPORT_NEEDS_INIT_VPI;
+ spin_unlock_irq(shost->host_lock);
+ }
+@@ -1096,11 +1090,14 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ /* Set the fcfi to the fcfi we registered with */
+ elsiocb->iocb.ulpContext = phba->fcf.fcfi;
+ }
+- } else if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
+- sp->cmn.request_multiple_Nport = 1;
+- /* For FLOGI, Let FLOGI rsp set the NPortID for VPI 0 */
+- icmd->ulpCt_h = 1;
+- icmd->ulpCt_l = 0;
++ } else {
++ if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
++ sp->cmn.request_multiple_Nport = 1;
++ /* For FLOGI, Let FLOGI rsp set the NPortID for VPI 0 */
++ icmd->ulpCt_h = 1;
++ icmd->ulpCt_l = 0;
++ } else
++ sp->cmn.request_multiple_Nport = 0;
+ }
+
+ if (phba->fc_topology != LPFC_TOPOLOGY_LOOP) {
+@@ -3656,7 +3653,8 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
+ }
+
+ icmd = &elsiocb->iocb;
+- icmd->ulpContext = oldcmd->ulpContext; /* Xri */
++ icmd->ulpContext = oldcmd->ulpContext; /* Xri / rx_id */
++ icmd->unsli3.rcvsli3.ox_id = oldcmd->unsli3.rcvsli3.ox_id;
+ pcmd = (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+ *((uint32_t *) (pcmd)) = ELS_CMD_ACC;
+ pcmd += sizeof(uint32_t);
+@@ -3673,7 +3671,8 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
+ return 1;
+
+ icmd = &elsiocb->iocb;
+- icmd->ulpContext = oldcmd->ulpContext; /* Xri */
++ icmd->ulpContext = oldcmd->ulpContext; /* Xri / rx_id */
++ icmd->unsli3.rcvsli3.ox_id = oldcmd->unsli3.rcvsli3.ox_id;
+ pcmd = (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+
+ if (mbox)
+@@ -3695,7 +3694,8 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
+ return 1;
+
+ icmd = &elsiocb->iocb;
+- icmd->ulpContext = oldcmd->ulpContext; /* Xri */
++ icmd->ulpContext = oldcmd->ulpContext; /* Xri / rx_id */
++ icmd->unsli3.rcvsli3.ox_id = oldcmd->unsli3.rcvsli3.ox_id;
+ pcmd = (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+
+ memcpy(pcmd, ((struct lpfc_dmabuf *) oldiocb->context2)->virt,
+@@ -3781,7 +3781,8 @@ lpfc_els_rsp_reject(struct lpfc_vport *vport, uint32_t rejectError,
+
+ icmd = &elsiocb->iocb;
+ oldcmd = &oldiocb->iocb;
+- icmd->ulpContext = oldcmd->ulpContext; /* Xri */
++ icmd->ulpContext = oldcmd->ulpContext; /* Xri / rx_id */
++ icmd->unsli3.rcvsli3.ox_id = oldcmd->unsli3.rcvsli3.ox_id;
+ pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+
+ *((uint32_t *) (pcmd)) = ELS_CMD_LS_RJT;
+@@ -3853,7 +3854,8 @@ lpfc_els_rsp_adisc_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb,
+
+ icmd = &elsiocb->iocb;
+ oldcmd = &oldiocb->iocb;
+- icmd->ulpContext = oldcmd->ulpContext; /* Xri */
++ icmd->ulpContext = oldcmd->ulpContext; /* Xri / rx_id */
++ icmd->unsli3.rcvsli3.ox_id = oldcmd->unsli3.rcvsli3.ox_id;
+
+ /* Xmit ADISC ACC response tag <ulpIoTag> */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+@@ -3931,7 +3933,9 @@ lpfc_els_rsp_prli_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb,
+
+ icmd = &elsiocb->iocb;
+ oldcmd = &oldiocb->iocb;
+- icmd->ulpContext = oldcmd->ulpContext; /* Xri */
++ icmd->ulpContext = oldcmd->ulpContext; /* Xri / rx_id */
++ icmd->unsli3.rcvsli3.ox_id = oldcmd->unsli3.rcvsli3.ox_id;
++
+ /* Xmit PRLI ACC response tag <ulpIoTag> */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "0131 Xmit PRLI ACC response tag x%x xri x%x, "
+@@ -4035,7 +4039,9 @@ lpfc_els_rsp_rnid_acc(struct lpfc_vport *vport, uint8_t format,
+
+ icmd = &elsiocb->iocb;
+ oldcmd = &oldiocb->iocb;
+- icmd->ulpContext = oldcmd->ulpContext; /* Xri */
++ icmd->ulpContext = oldcmd->ulpContext; /* Xri / rx_id */
++ icmd->unsli3.rcvsli3.ox_id = oldcmd->unsli3.rcvsli3.ox_id;
++
+ /* Xmit RNID ACC response tag <ulpIoTag> */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "0132 Xmit RNID ACC response tag x%x xri x%x\n",
+@@ -4163,7 +4169,9 @@ lpfc_els_rsp_echo_acc(struct lpfc_vport *vport, uint8_t *data,
+ if (!elsiocb)
+ return 1;
+
+- elsiocb->iocb.ulpContext = oldiocb->iocb.ulpContext; /* Xri */
++ elsiocb->iocb.ulpContext = oldiocb->iocb.ulpContext; /* Xri / rx_id */
++ elsiocb->iocb.unsli3.rcvsli3.ox_id = oldiocb->iocb.unsli3.rcvsli3.ox_id;
++
+ /* Xmit ECHO ACC response tag <ulpIoTag> */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "2876 Xmit ECHO ACC response tag x%x xri x%x\n",
+@@ -5054,13 +5062,15 @@ lpfc_els_rsp_rls_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ uint8_t *pcmd;
+ struct lpfc_iocbq *elsiocb;
+ struct lpfc_nodelist *ndlp;
+- uint16_t xri;
++ uint16_t oxid;
++ uint16_t rxid;
+ uint32_t cmdsize;
+
+ mb = &pmb->u.mb;
+
+ ndlp = (struct lpfc_nodelist *) pmb->context2;
+- xri = (uint16_t) ((unsigned long)(pmb->context1));
++ rxid = (uint16_t) ((unsigned long)(pmb->context1) & 0xffff);
++ oxid = (uint16_t) (((unsigned long)(pmb->context1) >> 16) & 0xffff);
+ pmb->context1 = NULL;
+ pmb->context2 = NULL;
+
+@@ -5082,7 +5092,8 @@ lpfc_els_rsp_rls_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ return;
+
+ icmd = &elsiocb->iocb;
+- icmd->ulpContext = xri;
++ icmd->ulpContext = rxid;
++ icmd->unsli3.rcvsli3.ox_id = oxid;
+
+ pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+ *((uint32_t *) (pcmd)) = ELS_CMD_ACC;
+@@ -5137,13 +5148,16 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ uint8_t *pcmd;
+ struct lpfc_iocbq *elsiocb;
+ struct lpfc_nodelist *ndlp;
+- uint16_t xri, status;
++ uint16_t status;
++ uint16_t oxid;
++ uint16_t rxid;
+ uint32_t cmdsize;
+
+ mb = &pmb->u.mb;
+
+ ndlp = (struct lpfc_nodelist *) pmb->context2;
+- xri = (uint16_t) ((unsigned long)(pmb->context1));
++ rxid = (uint16_t) ((unsigned long)(pmb->context1) & 0xffff);
++ oxid = (uint16_t) (((unsigned long)(pmb->context1) >> 16) & 0xffff);
+ pmb->context1 = NULL;
+ pmb->context2 = NULL;
+
+@@ -5165,7 +5179,8 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ return;
+
+ icmd = &elsiocb->iocb;
+- icmd->ulpContext = xri;
++ icmd->ulpContext = rxid;
++ icmd->unsli3.rcvsli3.ox_id = oxid;
+
+ pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+ *((uint32_t *) (pcmd)) = ELS_CMD_ACC;
+@@ -5238,8 +5253,9 @@ lpfc_els_rcv_rls(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ mbox = mempool_alloc(phba->mbox_mem_pool, GFP_ATOMIC);
+ if (mbox) {
+ lpfc_read_lnk_stat(phba, mbox);
+- mbox->context1 =
+- (void *)((unsigned long) cmdiocb->iocb.ulpContext);
++ mbox->context1 = (void *)((unsigned long)
++ ((cmdiocb->iocb.unsli3.rcvsli3.ox_id << 16) |
++ cmdiocb->iocb.ulpContext)); /* rx_id */
+ mbox->context2 = lpfc_nlp_get(ndlp);
+ mbox->vport = vport;
+ mbox->mbox_cmpl = lpfc_els_rsp_rls_acc;
+@@ -5314,7 +5330,8 @@ lpfc_els_rcv_rtv(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ pcmd += sizeof(uint32_t); /* Skip past command */
+
+ /* use the command's xri in the response */
+- elsiocb->iocb.ulpContext = cmdiocb->iocb.ulpContext;
++ elsiocb->iocb.ulpContext = cmdiocb->iocb.ulpContext; /* Xri / rx_id */
++ elsiocb->iocb.unsli3.rcvsli3.ox_id = cmdiocb->iocb.unsli3.rcvsli3.ox_id;
+
+ rtv_rsp = (struct RTV_RSP *)pcmd;
+
+@@ -5399,8 +5416,9 @@ lpfc_els_rcv_rps(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ mbox = mempool_alloc(phba->mbox_mem_pool, GFP_ATOMIC);
+ if (mbox) {
+ lpfc_read_lnk_stat(phba, mbox);
+- mbox->context1 =
+- (void *)((unsigned long) cmdiocb->iocb.ulpContext);
++ mbox->context1 = (void *)((unsigned long)
++ ((cmdiocb->iocb.unsli3.rcvsli3.ox_id << 16) |
++ cmdiocb->iocb.ulpContext)); /* rx_id */
+ mbox->context2 = lpfc_nlp_get(ndlp);
+ mbox->vport = vport;
+ mbox->mbox_cmpl = lpfc_els_rsp_rps_acc;
+@@ -5554,7 +5572,8 @@ lpfc_els_rsp_rpl_acc(struct lpfc_vport *vport, uint16_t cmdsize,
+
+ icmd = &elsiocb->iocb;
+ oldcmd = &oldiocb->iocb;
+- icmd->ulpContext = oldcmd->ulpContext; /* Xri */
++ icmd->ulpContext = oldcmd->ulpContext; /* Xri / rx_id */
++ icmd->unsli3.rcvsli3.ox_id = oldcmd->unsli3.rcvsli3.ox_id;
+
+ pcmd = (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+ *((uint32_t *) (pcmd)) = ELS_CMD_ACC;
+@@ -6586,7 +6605,7 @@ lpfc_find_vport_by_vpid(struct lpfc_hba *phba, uint16_t vpi)
+ {
+ struct lpfc_vport *vport;
+ unsigned long flags;
+- int i;
++ int i = 0;
+
+ /* The physical ports are always vpi 0 - translate is unnecessary. */
+ if (vpi > 0) {
+@@ -6609,7 +6628,7 @@ lpfc_find_vport_by_vpid(struct lpfc_hba *phba, uint16_t vpi)
+
+ spin_lock_irqsave(&phba->hbalock, flags);
+ list_for_each_entry(vport, &phba->port_list, listentry) {
+- if (vport->vpi == vpi) {
++ if (vport->vpi == i) {
+ spin_unlock_irqrestore(&phba->hbalock, flags);
+ return vport;
+ }
+@@ -7787,6 +7806,7 @@ lpfc_sli4_els_xri_aborted(struct lpfc_hba *phba,
+ {
+ uint16_t xri = bf_get(lpfc_wcqe_xa_xri, axri);
+ uint16_t rxid = bf_get(lpfc_wcqe_xa_remote_xid, axri);
++ uint16_t lxri = 0;
+
+ struct lpfc_sglq *sglq_entry = NULL, *sglq_next = NULL;
+ unsigned long iflag = 0;
+@@ -7815,7 +7835,12 @@ lpfc_sli4_els_xri_aborted(struct lpfc_hba *phba,
+ }
+ }
+ spin_unlock(&phba->sli4_hba.abts_sgl_list_lock);
+- sglq_entry = __lpfc_get_active_sglq(phba, xri);
++ lxri = lpfc_sli4_xri_inrange(phba, xri);
++ if (lxri == NO_XRI) {
++ spin_unlock_irqrestore(&phba->hbalock, iflag);
++ return;
++ }
++ sglq_entry = __lpfc_get_active_sglq(phba, lxri);
+ if (!sglq_entry || (sglq_entry->sli4_xritag != xri)) {
+ spin_unlock_irqrestore(&phba->hbalock, iflag);
+ return;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 18d0dbf..bef17e3 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -2247,7 +2247,6 @@ read_next_fcf:
+ spin_lock_irq(&phba->hbalock);
+ phba->fcf.fcf_flag |= FCF_REDISC_FOV;
+ spin_unlock_irq(&phba->hbalock);
+- lpfc_sli4_mbox_cmd_free(phba, mboxq);
+ lpfc_sli4_fcf_scan_read_fcf_rec(phba,
+ LPFC_FCOE_FCF_GET_FIRST);
+ return;
+@@ -2645,6 +2644,7 @@ lpfc_mbx_cmpl_reg_vfi(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
+ vport->vpi_state |= LPFC_VPI_REGISTERED;
+ vport->fc_flag |= FC_VFI_REGISTERED;
+ vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
++ vport->fc_flag &= ~FC_VPORT_NEEDS_INIT_VPI;
+ spin_unlock_irq(shost->host_lock);
+
+ if (vport->port_state == LPFC_FABRIC_CFG_LINK) {
+diff --git a/drivers/scsi/lpfc/lpfc_hw.h b/drivers/scsi/lpfc/lpfc_hw.h
+index 9059524..df53d10 100644
+--- a/drivers/scsi/lpfc/lpfc_hw.h
++++ b/drivers/scsi/lpfc/lpfc_hw.h
+@@ -3470,11 +3470,16 @@ typedef struct {
+ or CMD_IOCB_RCV_SEQ64_CX (0xB5) */
+
+ struct rcv_sli3 {
+- uint32_t word8Rsvd;
+ #ifdef __BIG_ENDIAN_BITFIELD
++ uint16_t ox_id;
++ uint16_t seq_cnt;
++
+ uint16_t vpi;
+ uint16_t word9Rsvd;
+ #else /* __LITTLE_ENDIAN */
++ uint16_t seq_cnt;
++ uint16_t ox_id;
++
+ uint16_t word9Rsvd;
+ uint16_t vpi;
+ #endif
+diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h
+index 11e26a2..7f8003b 100644
+--- a/drivers/scsi/lpfc/lpfc_hw4.h
++++ b/drivers/scsi/lpfc/lpfc_hw4.h
+@@ -170,15 +170,8 @@ struct lpfc_sli_intf {
+ #define LPFC_PCI_FUNC3 3
+ #define LPFC_PCI_FUNC4 4
+
+-/* SLI4 interface type-2 control register offsets */
+-#define LPFC_CTL_PORT_SEM_OFFSET 0x400
+-#define LPFC_CTL_PORT_STA_OFFSET 0x404
+-#define LPFC_CTL_PORT_CTL_OFFSET 0x408
+-#define LPFC_CTL_PORT_ER1_OFFSET 0x40C
+-#define LPFC_CTL_PORT_ER2_OFFSET 0x410
++/* SLI4 interface type-2 PDEV_CTL register */
+ #define LPFC_CTL_PDEV_CTL_OFFSET 0x414
+-
+-/* Some SLI4 interface type-2 PDEV_CTL register bits */
+ #define LPFC_CTL_PDEV_CTL_DRST 0x00000001
+ #define LPFC_CTL_PDEV_CTL_FRST 0x00000002
+ #define LPFC_CTL_PDEV_CTL_DD 0x00000004
+@@ -337,6 +330,7 @@ struct lpfc_cqe {
+ #define CQE_CODE_RELEASE_WQE 0x2
+ #define CQE_CODE_RECEIVE 0x4
+ #define CQE_CODE_XRI_ABORTED 0x5
++#define CQE_CODE_RECEIVE_V1 0x9
+
+ /* completion queue entry for wqe completions */
+ struct lpfc_wcqe_complete {
+@@ -440,7 +434,10 @@ struct lpfc_rcqe {
+ #define FC_STATUS_RQ_BUF_LEN_EXCEEDED 0x11 /* payload truncated */
+ #define FC_STATUS_INSUFF_BUF_NEED_BUF 0x12 /* Insufficient buffers */
+ #define FC_STATUS_INSUFF_BUF_FRM_DISC 0x13 /* Frame Discard */
+- uint32_t reserved1;
++ uint32_t word1;
++#define lpfc_rcqe_fcf_id_v1_SHIFT 0
++#define lpfc_rcqe_fcf_id_v1_MASK 0x0000003F
++#define lpfc_rcqe_fcf_id_v1_WORD word1
+ uint32_t word2;
+ #define lpfc_rcqe_length_SHIFT 16
+ #define lpfc_rcqe_length_MASK 0x0000FFFF
+@@ -451,6 +448,9 @@ struct lpfc_rcqe {
+ #define lpfc_rcqe_fcf_id_SHIFT 0
+ #define lpfc_rcqe_fcf_id_MASK 0x0000003F
+ #define lpfc_rcqe_fcf_id_WORD word2
++#define lpfc_rcqe_rq_id_v1_SHIFT 0
++#define lpfc_rcqe_rq_id_v1_MASK 0x0000FFFF
++#define lpfc_rcqe_rq_id_v1_WORD word2
+ uint32_t word3;
+ #define lpfc_rcqe_valid_SHIFT lpfc_cqe_valid_SHIFT
+ #define lpfc_rcqe_valid_MASK lpfc_cqe_valid_MASK
+@@ -515,7 +515,7 @@ struct lpfc_register {
+ /* The following BAR0 register sets are defined for if_type 0 and 2 UCNAs. */
+ #define LPFC_SLI_INTF 0x0058
+
+-#define LPFC_SLIPORT_IF2_SMPHR 0x0400
++#define LPFC_CTL_PORT_SEM_OFFSET 0x400
+ #define lpfc_port_smphr_perr_SHIFT 31
+ #define lpfc_port_smphr_perr_MASK 0x1
+ #define lpfc_port_smphr_perr_WORD word0
+@@ -575,7 +575,7 @@ struct lpfc_register {
+ #define LPFC_POST_STAGE_PORT_READY 0xC000
+ #define LPFC_POST_STAGE_PORT_UE 0xF000
+
+-#define LPFC_SLIPORT_STATUS 0x0404
++#define LPFC_CTL_PORT_STA_OFFSET 0x404
+ #define lpfc_sliport_status_err_SHIFT 31
+ #define lpfc_sliport_status_err_MASK 0x1
+ #define lpfc_sliport_status_err_WORD word0
+@@ -593,7 +593,7 @@ struct lpfc_register {
+ #define lpfc_sliport_status_rdy_WORD word0
+ #define MAX_IF_TYPE_2_RESETS 1000
+
+-#define LPFC_SLIPORT_CNTRL 0x0408
++#define LPFC_CTL_PORT_CTL_OFFSET 0x408
+ #define lpfc_sliport_ctrl_end_SHIFT 30
+ #define lpfc_sliport_ctrl_end_MASK 0x1
+ #define lpfc_sliport_ctrl_end_WORD word0
+@@ -604,8 +604,8 @@ struct lpfc_register {
+ #define lpfc_sliport_ctrl_ip_WORD word0
+ #define LPFC_SLIPORT_INIT_PORT 1
+
+-#define LPFC_SLIPORT_ERR_1 0x040C
+-#define LPFC_SLIPORT_ERR_2 0x0410
++#define LPFC_CTL_PORT_ER1_OFFSET 0x40C
++#define LPFC_CTL_PORT_ER2_OFFSET 0x410
+
+ /* The following Registers apply to SLI4 if_type 0 UCNAs. They typically
+ * reside in BAR 2.
+@@ -3198,6 +3198,8 @@ struct lpfc_grp_hdr {
+ #define lpfc_grp_hdr_id_MASK 0x000000FF
+ #define lpfc_grp_hdr_id_WORD word2
+ uint8_t rev_name[128];
++ uint8_t date[12];
++ uint8_t revision[32];
+ };
+
+ #define FCP_COMMAND 0x0
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 148b98d..027b797 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -2927,6 +2927,8 @@ void lpfc_host_attrib_init(struct Scsi_Host *shost)
+ sizeof fc_host_symbolic_name(shost));
+
+ fc_host_supported_speeds(shost) = 0;
++ if (phba->lmt & LMT_16Gb)
++ fc_host_supported_speeds(shost) |= FC_PORTSPEED_16GBIT;
+ if (phba->lmt & LMT_10Gb)
+ fc_host_supported_speeds(shost) |= FC_PORTSPEED_10GBIT;
+ if (phba->lmt & LMT_8Gb)
+@@ -3647,7 +3649,7 @@ lpfc_sli4_async_fip_evt(struct lpfc_hba *phba,
+ " tag 0x%x\n", acqe_fip->index, acqe_fip->event_tag);
+
+ vport = lpfc_find_vport_by_vpid(phba,
+- acqe_fip->index - phba->vpi_base);
++ acqe_fip->index);
+ ndlp = lpfc_sli4_perform_vport_cvl(vport);
+ if (!ndlp)
+ break;
+@@ -4035,6 +4037,34 @@ lpfc_reset_hba(struct lpfc_hba *phba)
+ }
+
+ /**
++ * lpfc_sli_sriov_nr_virtfn_get - Get the number of sr-iov virtual functions
++ * @phba: pointer to lpfc hba data structure.
++ *
++ * This function enables the PCI SR-IOV virtual functions to a physical
++ * function. It invokes the PCI SR-IOV api with the @nr_vfn provided to
++ * enable the number of virtual functions to the physical function. As
++ * not all devices support SR-IOV, the return code from the pci_enable_sriov()
++ * API call does not considered as an error condition for most of the device.
++ **/
++uint16_t
++lpfc_sli_sriov_nr_virtfn_get(struct lpfc_hba *phba)
++{
++ struct pci_dev *pdev = phba->pcidev;
++ uint16_t nr_virtfn;
++ int pos;
++
++ if (!pdev->is_physfn)
++ return 0;
++
++ pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);
++ if (pos == 0)
++ return 0;
++
++ pci_read_config_word(pdev, pos + PCI_SRIOV_TOTAL_VF, &nr_virtfn);
++ return nr_virtfn;
++}
++
++/**
+ * lpfc_sli_probe_sriov_nr_virtfn - Enable a number of sr-iov virtual functions
+ * @phba: pointer to lpfc hba data structure.
+ * @nr_vfn: number of virtual functions to be enabled.
+@@ -4049,8 +4079,17 @@ int
+ lpfc_sli_probe_sriov_nr_virtfn(struct lpfc_hba *phba, int nr_vfn)
+ {
+ struct pci_dev *pdev = phba->pcidev;
++ uint16_t max_nr_vfn;
+ int rc;
+
++ max_nr_vfn = lpfc_sli_sriov_nr_virtfn_get(phba);
++ if (nr_vfn > max_nr_vfn) {
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "3057 Requested vfs (%d) greater than "
++ "supported vfs (%d)", nr_vfn, max_nr_vfn);
++ return -EINVAL;
++ }
++
+ rc = pci_enable_sriov(pdev, nr_vfn);
+ if (rc) {
+ lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
+@@ -4516,7 +4555,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
+ }
+ }
+
+- return rc;
++ return 0;
+
+ out_free_fcp_eq_hdl:
+ kfree(phba->sli4_hba.fcp_eq_hdl);
+@@ -4966,17 +5005,14 @@ out_free_mem:
+ * @phba: pointer to lpfc hba data structure.
+ *
+ * This routine is invoked to post rpi header templates to the
+- * HBA consistent with the SLI-4 interface spec. This routine
++ * port for those SLI4 ports that do not support extents. This routine
+ * posts a PAGE_SIZE memory region to the port to hold up to
+- * PAGE_SIZE modulo 64 rpi context headers.
+- * No locks are held here because this is an initialization routine
+- * called only from probe or lpfc_online when interrupts are not
+- * enabled and the driver is reinitializing the device.
++ * PAGE_SIZE modulo 64 rpi context headers. This is an initialization routine
++ * and should be called only when interrupts are disabled.
+ *
+ * Return codes
+ * 0 - successful
+- * -ENOMEM - No available memory
+- * -EIO - The mailbox failed to complete successfully.
++ * -ERROR - otherwise.
+ **/
+ int
+ lpfc_sli4_init_rpi_hdrs(struct lpfc_hba *phba)
+@@ -5687,17 +5723,22 @@ lpfc_sli4_bar0_register_memmap(struct lpfc_hba *phba, uint32_t if_type)
+ break;
+ case LPFC_SLI_INTF_IF_TYPE_2:
+ phba->sli4_hba.u.if_type2.ERR1regaddr =
+- phba->sli4_hba.conf_regs_memmap_p + LPFC_SLIPORT_ERR_1;
++ phba->sli4_hba.conf_regs_memmap_p +
++ LPFC_CTL_PORT_ER1_OFFSET;
+ phba->sli4_hba.u.if_type2.ERR2regaddr =
+- phba->sli4_hba.conf_regs_memmap_p + LPFC_SLIPORT_ERR_2;
++ phba->sli4_hba.conf_regs_memmap_p +
++ LPFC_CTL_PORT_ER2_OFFSET;
+ phba->sli4_hba.u.if_type2.CTRLregaddr =
+- phba->sli4_hba.conf_regs_memmap_p + LPFC_SLIPORT_CNTRL;
++ phba->sli4_hba.conf_regs_memmap_p +
++ LPFC_CTL_PORT_CTL_OFFSET;
+ phba->sli4_hba.u.if_type2.STATUSregaddr =
+- phba->sli4_hba.conf_regs_memmap_p + LPFC_SLIPORT_STATUS;
++ phba->sli4_hba.conf_regs_memmap_p +
++ LPFC_CTL_PORT_STA_OFFSET;
+ phba->sli4_hba.SLIINTFregaddr =
+ phba->sli4_hba.conf_regs_memmap_p + LPFC_SLI_INTF;
+ phba->sli4_hba.PSMPHRregaddr =
+- phba->sli4_hba.conf_regs_memmap_p + LPFC_SLIPORT_IF2_SMPHR;
++ phba->sli4_hba.conf_regs_memmap_p +
++ LPFC_CTL_PORT_SEM_OFFSET;
+ phba->sli4_hba.RQDBregaddr =
+ phba->sli4_hba.conf_regs_memmap_p + LPFC_RQ_DOORBELL;
+ phba->sli4_hba.WQDBregaddr =
+@@ -8859,11 +8900,11 @@ lpfc_write_firmware(struct lpfc_hba *phba, const struct firmware *fw)
+ return -EINVAL;
+ }
+ lpfc_decode_firmware_rev(phba, fwrev, 1);
+- if (strncmp(fwrev, image->rev_name, strnlen(fwrev, 16))) {
++ if (strncmp(fwrev, image->revision, strnlen(image->revision, 16))) {
+ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+ "3023 Updating Firmware. Current Version:%s "
+ "New Version:%s\n",
+- fwrev, image->rev_name);
++ fwrev, image->revision);
+ for (i = 0; i < LPFC_MBX_WR_CONFIG_MAX_BDE; i++) {
+ dmabuf = kzalloc(sizeof(struct lpfc_dmabuf),
+ GFP_KERNEL);
+@@ -8892,9 +8933,9 @@ lpfc_write_firmware(struct lpfc_hba *phba, const struct firmware *fw)
+ fw->size - offset);
+ break;
+ }
+- temp_offset += SLI4_PAGE_SIZE;
+ memcpy(dmabuf->virt, fw->data + temp_offset,
+ SLI4_PAGE_SIZE);
++ temp_offset += SLI4_PAGE_SIZE;
+ }
+ rc = lpfc_wr_object(phba, &dma_buffer_list,
+ (fw->size - offset), &offset);
+@@ -9483,6 +9524,13 @@ lpfc_io_slot_reset_s4(struct pci_dev *pdev)
+ }
+
+ pci_restore_state(pdev);
++
++ /*
++ * As the new kernel behavior of pci_restore_state() API call clears
++ * device saved_state flag, need to save the restored state again.
++ */
++ pci_save_state(pdev);
++
+ if (pdev->is_busmaster)
+ pci_set_master(pdev);
+
+diff --git a/drivers/scsi/lpfc/lpfc_mbox.c b/drivers/scsi/lpfc/lpfc_mbox.c
+index 5567670..83450cc 100644
+--- a/drivers/scsi/lpfc/lpfc_mbox.c
++++ b/drivers/scsi/lpfc/lpfc_mbox.c
+@@ -2031,7 +2031,7 @@ lpfc_init_vfi(struct lpfcMboxq *mbox, struct lpfc_vport *vport)
+ bf_set(lpfc_init_vfi_vp, init_vfi, 1);
+ bf_set(lpfc_init_vfi_vfi, init_vfi,
+ vport->phba->sli4_hba.vfi_ids[vport->vfi]);
+- bf_set(lpfc_init_vpi_vpi, init_vfi,
++ bf_set(lpfc_init_vfi_vpi, init_vfi,
+ vport->phba->vpi_ids[vport->vpi]);
+ bf_set(lpfc_init_vfi_fcfi, init_vfi,
+ vport->phba->fcf.fcfi);
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 3ccc974..eadd241 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -1302,13 +1302,13 @@ lpfc_sc_to_bg_opcodes(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ case SCSI_PROT_NORMAL:
+ default:
+ lpfc_printf_log(phba, KERN_ERR, LOG_BG,
+- "9063 BLKGRD: Bad op/guard:%d/%d combination\n",
+- scsi_get_prot_op(sc), guard_type);
++ "9063 BLKGRD: Bad op/guard:%d/IP combination\n",
++ scsi_get_prot_op(sc));
+ ret = 1;
+ break;
+
+ }
+- } else if (guard_type == SHOST_DIX_GUARD_CRC) {
++ } else {
+ switch (scsi_get_prot_op(sc)) {
+ case SCSI_PROT_READ_STRIP:
+ case SCSI_PROT_WRITE_INSERT:
+@@ -1324,17 +1324,18 @@ lpfc_sc_to_bg_opcodes(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+
+ case SCSI_PROT_READ_INSERT:
+ case SCSI_PROT_WRITE_STRIP:
++ *txop = BG_OP_IN_CRC_OUT_NODIF;
++ *rxop = BG_OP_IN_NODIF_OUT_CRC;
++ break;
++
+ case SCSI_PROT_NORMAL:
+ default:
+ lpfc_printf_log(phba, KERN_ERR, LOG_BG,
+- "9075 BLKGRD: Bad op/guard:%d/%d combination\n",
+- scsi_get_prot_op(sc), guard_type);
++ "9075 BLKGRD: Bad op/guard:%d/CRC combination\n",
++ scsi_get_prot_op(sc));
+ ret = 1;
+ break;
+ }
+- } else {
+- /* unsupported format */
+- BUG();
+ }
+
+ return ret;
+@@ -1352,45 +1353,6 @@ lpfc_cmd_blksize(struct scsi_cmnd *sc)
+ return sc->device->sector_size;
+ }
+
+-/**
+- * lpfc_get_cmd_dif_parms - Extract DIF parameters from SCSI command
+- * @sc: in: SCSI command
+- * @apptagmask: out: app tag mask
+- * @apptagval: out: app tag value
+- * @reftag: out: ref tag (reference tag)
+- *
+- * Description:
+- * Extract DIF parameters from the command if possible. Otherwise,
+- * use default parameters.
+- *
+- **/
+-static inline void
+-lpfc_get_cmd_dif_parms(struct scsi_cmnd *sc, uint16_t *apptagmask,
+- uint16_t *apptagval, uint32_t *reftag)
+-{
+- struct scsi_dif_tuple *spt;
+- unsigned char op = scsi_get_prot_op(sc);
+- unsigned int protcnt = scsi_prot_sg_count(sc);
+- static int cnt;
+-
+- if (protcnt && (op == SCSI_PROT_WRITE_STRIP ||
+- op == SCSI_PROT_WRITE_PASS)) {
+-
+- cnt++;
+- spt = page_address(sg_page(scsi_prot_sglist(sc))) +
+- scsi_prot_sglist(sc)[0].offset;
+- *apptagmask = 0;
+- *apptagval = 0;
+- *reftag = cpu_to_be32(spt->ref_tag);
+-
+- } else {
+- /* SBC defines ref tag to be lower 32bits of LBA */
+- *reftag = (uint32_t) (0xffffffff & scsi_get_lba(sc));
+- *apptagmask = 0;
+- *apptagval = 0;
+- }
+-}
+-
+ /*
+ * This function sets up buffer list for protection groups of
+ * type LPFC_PG_TYPE_NO_DIF
+@@ -1427,9 +1389,8 @@ lpfc_bg_setup_bpl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ dma_addr_t physaddr;
+ int i = 0, num_bde = 0, status;
+ int datadir = sc->sc_data_direction;
+- unsigned blksize;
+ uint32_t reftag;
+- uint16_t apptagmask, apptagval;
++ unsigned blksize;
+ uint8_t txop, rxop;
+
+ status = lpfc_sc_to_bg_opcodes(phba, sc, &txop, &rxop);
+@@ -1438,17 +1399,16 @@ lpfc_bg_setup_bpl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+
+ /* extract some info from the scsi command for pde*/
+ blksize = lpfc_cmd_blksize(sc);
+- lpfc_get_cmd_dif_parms(sc, &apptagmask, &apptagval, &reftag);
++ reftag = scsi_get_lba(sc) & 0xffffffff;
+
+ /* setup PDE5 with what we have */
+ pde5 = (struct lpfc_pde5 *) bpl;
+ memset(pde5, 0, sizeof(struct lpfc_pde5));
+ bf_set(pde5_type, pde5, LPFC_PDE5_DESCRIPTOR);
+- pde5->reftag = reftag;
+
+ /* Endianness conversion if necessary for PDE5 */
+ pde5->word0 = cpu_to_le32(pde5->word0);
+- pde5->reftag = cpu_to_le32(pde5->reftag);
++ pde5->reftag = cpu_to_le32(reftag);
+
+ /* advance bpl and increment bde count */
+ num_bde++;
+@@ -1463,10 +1423,10 @@ lpfc_bg_setup_bpl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ if (datadir == DMA_FROM_DEVICE) {
+ bf_set(pde6_ce, pde6, 1);
+ bf_set(pde6_re, pde6, 1);
+- bf_set(pde6_ae, pde6, 1);
+ }
+ bf_set(pde6_ai, pde6, 1);
+- bf_set(pde6_apptagval, pde6, apptagval);
++ bf_set(pde6_ae, pde6, 0);
++ bf_set(pde6_apptagval, pde6, 0);
+
+ /* Endianness conversion if necessary for PDE6 */
+ pde6->word0 = cpu_to_le32(pde6->word0);
+@@ -1551,7 +1511,6 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ unsigned char pgdone = 0, alldone = 0;
+ unsigned blksize;
+ uint32_t reftag;
+- uint16_t apptagmask, apptagval;
+ uint8_t txop, rxop;
+ int num_bde = 0;
+
+@@ -1571,7 +1530,7 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+
+ /* extract some info from the scsi command */
+ blksize = lpfc_cmd_blksize(sc);
+- lpfc_get_cmd_dif_parms(sc, &apptagmask, &apptagval, &reftag);
++ reftag = scsi_get_lba(sc) & 0xffffffff;
+
+ split_offset = 0;
+ do {
+@@ -1579,11 +1538,10 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ pde5 = (struct lpfc_pde5 *) bpl;
+ memset(pde5, 0, sizeof(struct lpfc_pde5));
+ bf_set(pde5_type, pde5, LPFC_PDE5_DESCRIPTOR);
+- pde5->reftag = reftag;
+
+ /* Endianness conversion if necessary for PDE5 */
+ pde5->word0 = cpu_to_le32(pde5->word0);
+- pde5->reftag = cpu_to_le32(pde5->reftag);
++ pde5->reftag = cpu_to_le32(reftag);
+
+ /* advance bpl and increment bde count */
+ num_bde++;
+@@ -1597,9 +1555,9 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ bf_set(pde6_oprx, pde6, rxop);
+ bf_set(pde6_ce, pde6, 1);
+ bf_set(pde6_re, pde6, 1);
+- bf_set(pde6_ae, pde6, 1);
+ bf_set(pde6_ai, pde6, 1);
+- bf_set(pde6_apptagval, pde6, apptagval);
++ bf_set(pde6_ae, pde6, 0);
++ bf_set(pde6_apptagval, pde6, 0);
+
+ /* Endianness conversion if necessary for PDE6 */
+ pde6->word0 = cpu_to_le32(pde6->word0);
+@@ -1621,8 +1579,8 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ memset(pde7, 0, sizeof(struct lpfc_pde7));
+ bf_set(pde7_type, pde7, LPFC_PDE7_DESCRIPTOR);
+
+- pde7->addrHigh = le32_to_cpu(putPaddrLow(protphysaddr));
+- pde7->addrLow = le32_to_cpu(putPaddrHigh(protphysaddr));
++ pde7->addrHigh = le32_to_cpu(putPaddrHigh(protphysaddr));
++ pde7->addrLow = le32_to_cpu(putPaddrLow(protphysaddr));
+
+ protgrp_blks = protgroup_len / 8;
+ protgrp_bytes = protgrp_blks * blksize;
+@@ -1632,7 +1590,7 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ protgroup_remainder = 0x1000 - (pde7->addrLow & 0xfff);
+ protgroup_offset += protgroup_remainder;
+ protgrp_blks = protgroup_remainder / 8;
+- protgrp_bytes = protgroup_remainder * blksize;
++ protgrp_bytes = protgrp_blks * blksize;
+ } else {
+ protgroup_offset = 0;
+ curr_prot++;
+@@ -2006,16 +1964,21 @@ lpfc_parse_bg_err(struct lpfc_hba *phba, struct lpfc_scsi_buf *lpfc_cmd,
+ if (lpfc_bgs_get_hi_water_mark_present(bgstat)) {
+ /*
+ * setup sense data descriptor 0 per SPC-4 as an information
+- * field, and put the failing LBA in it
++ * field, and put the failing LBA in it.
++ * This code assumes there was also a guard/app/ref tag error
++ * indication.
+ */
+- cmd->sense_buffer[8] = 0; /* Information */
+- cmd->sense_buffer[9] = 0xa; /* Add. length */
++ cmd->sense_buffer[7] = 0xc; /* Additional sense length */
++ cmd->sense_buffer[8] = 0; /* Information descriptor type */
++ cmd->sense_buffer[9] = 0xa; /* Additional descriptor length */
++ cmd->sense_buffer[10] = 0x80; /* Validity bit */
+ bghm /= cmd->device->sector_size;
+
+ failing_sector = scsi_get_lba(cmd);
+ failing_sector += bghm;
+
+- put_unaligned_be64(failing_sector, &cmd->sense_buffer[10]);
++ /* Descriptor Information */
++ put_unaligned_be64(failing_sector, &cmd->sense_buffer[12]);
+ }
+
+ if (!ret) {
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 98999bb..5b28ea1 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -560,7 +560,7 @@ __lpfc_set_rrq_active(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
+ rrq = mempool_alloc(phba->rrq_pool, GFP_KERNEL);
+ if (rrq) {
+ rrq->send_rrq = send_rrq;
+- rrq->xritag = phba->sli4_hba.xri_ids[xritag];
++ rrq->xritag = xritag;
+ rrq->rrq_stop_time = jiffies + HZ * (phba->fc_ratov + 1);
+ rrq->ndlp = ndlp;
+ rrq->nlp_DID = ndlp->nlp_DID;
+@@ -2452,7 +2452,8 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+
+ /* search continue save q for same XRI */
+ list_for_each_entry(iocbq, &pring->iocb_continue_saveq, clist) {
+- if (iocbq->iocb.ulpContext == saveq->iocb.ulpContext) {
++ if (iocbq->iocb.unsli3.rcvsli3.ox_id ==
++ saveq->iocb.unsli3.rcvsli3.ox_id) {
+ list_add_tail(&saveq->list, &iocbq->list);
+ found = 1;
+ break;
+@@ -3355,6 +3356,7 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ irspiocbq);
+ break;
+ case CQE_CODE_RECEIVE:
++ case CQE_CODE_RECEIVE_V1:
+ dmabuf = container_of(cq_event, struct hbq_dmabuf,
+ cq_event);
+ lpfc_sli4_handle_received_buffer(phba, dmabuf);
+@@ -5837,6 +5839,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
+ "Advanced Error Reporting (AER)\n");
+ phba->cfg_aer_support = 0;
+ }
++ rc = 0;
+ }
+
+ if (!(phba->hba_flag & HBA_FCOE_MODE)) {
+@@ -7318,12 +7321,12 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ bf_set(wqe_qosd, &wqe->els_req.wqe_com, 1);
+ bf_set(wqe_lenloc, &wqe->els_req.wqe_com, LPFC_WQE_LENLOC_NONE);
+ bf_set(wqe_ebde_cnt, &wqe->els_req.wqe_com, 0);
+- break;
++ break;
+ case CMD_XMIT_SEQUENCE64_CX:
+ bf_set(wqe_ctxt_tag, &wqe->xmit_sequence.wqe_com,
+ iocbq->iocb.un.ulpWord[3]);
+ bf_set(wqe_rcvoxid, &wqe->xmit_sequence.wqe_com,
+- iocbq->iocb.ulpContext);
++ iocbq->iocb.unsli3.rcvsli3.ox_id);
+ /* The entire sequence is transmitted for this IOCB */
+ xmit_len = total_len;
+ cmnd = CMD_XMIT_SEQUENCE64_CR;
+@@ -7341,7 +7344,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ bf_set(wqe_ebde_cnt, &wqe->xmit_sequence.wqe_com, 0);
+ wqe->xmit_sequence.xmit_len = xmit_len;
+ command_type = OTHER_COMMAND;
+- break;
++ break;
+ case CMD_XMIT_BCAST64_CN:
+ /* word3 iocb=iotag32 wqe=seq_payload_len */
+ wqe->xmit_bcast64.seq_payload_len = xmit_len;
+@@ -7355,7 +7358,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ bf_set(wqe_lenloc, &wqe->xmit_bcast64.wqe_com,
+ LPFC_WQE_LENLOC_WORD3);
+ bf_set(wqe_ebde_cnt, &wqe->xmit_bcast64.wqe_com, 0);
+- break;
++ break;
+ case CMD_FCP_IWRITE64_CR:
+ command_type = FCP_COMMAND_DATA_OUT;
+ /* word3 iocb=iotag wqe=payload_offset_len */
+@@ -7375,7 +7378,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ LPFC_WQE_LENLOC_WORD4);
+ bf_set(wqe_ebde_cnt, &wqe->fcp_iwrite.wqe_com, 0);
+ bf_set(wqe_pu, &wqe->fcp_iwrite.wqe_com, iocbq->iocb.ulpPU);
+- break;
++ break;
+ case CMD_FCP_IREAD64_CR:
+ /* word3 iocb=iotag wqe=payload_offset_len */
+ /* Add the FCP_CMD and FCP_RSP sizes to get the offset */
+@@ -7394,7 +7397,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ LPFC_WQE_LENLOC_WORD4);
+ bf_set(wqe_ebde_cnt, &wqe->fcp_iread.wqe_com, 0);
+ bf_set(wqe_pu, &wqe->fcp_iread.wqe_com, iocbq->iocb.ulpPU);
+- break;
++ break;
+ case CMD_FCP_ICMND64_CR:
+ /* word3 iocb=IO_TAG wqe=reserved */
+ wqe->fcp_icmd.rsrvd3 = 0;
+@@ -7407,7 +7410,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ bf_set(wqe_lenloc, &wqe->fcp_icmd.wqe_com,
+ LPFC_WQE_LENLOC_NONE);
+ bf_set(wqe_ebde_cnt, &wqe->fcp_icmd.wqe_com, 0);
+- break;
++ break;
+ case CMD_GEN_REQUEST64_CR:
+ /* For this command calculate the xmit length of the
+ * request bde.
+@@ -7442,7 +7445,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ bf_set(wqe_lenloc, &wqe->gen_req.wqe_com, LPFC_WQE_LENLOC_NONE);
+ bf_set(wqe_ebde_cnt, &wqe->gen_req.wqe_com, 0);
+ command_type = OTHER_COMMAND;
+- break;
++ break;
+ case CMD_XMIT_ELS_RSP64_CX:
+ ndlp = (struct lpfc_nodelist *)iocbq->context1;
+ /* words0-2 BDE memcpy */
+@@ -7457,7 +7460,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ ((iocbq->iocb.ulpCt_h << 1) | iocbq->iocb.ulpCt_l));
+ bf_set(wqe_pu, &wqe->xmit_els_rsp.wqe_com, iocbq->iocb.ulpPU);
+ bf_set(wqe_rcvoxid, &wqe->xmit_els_rsp.wqe_com,
+- iocbq->iocb.ulpContext);
++ iocbq->iocb.unsli3.rcvsli3.ox_id);
+ if (!iocbq->iocb.ulpCt_h && iocbq->iocb.ulpCt_l)
+ bf_set(wqe_ctxt_tag, &wqe->xmit_els_rsp.wqe_com,
+ phba->vpi_ids[iocbq->vport->vpi]);
+@@ -7470,7 +7473,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ bf_set(wqe_rsp_temp_rpi, &wqe->xmit_els_rsp,
+ phba->sli4_hba.rpi_ids[ndlp->nlp_rpi]);
+ command_type = OTHER_COMMAND;
+- break;
++ break;
+ case CMD_CLOSE_XRI_CN:
+ case CMD_ABORT_XRI_CN:
+ case CMD_ABORT_XRI_CX:
+@@ -7509,7 +7512,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ cmnd = CMD_ABORT_XRI_CX;
+ command_type = OTHER_COMMAND;
+ xritag = 0;
+- break;
++ break;
+ case CMD_XMIT_BLS_RSP64_CX:
+ /* As BLS ABTS RSP WQE is very different from other WQEs,
+ * we re-construct this WQE here based on information in
+@@ -7553,7 +7556,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ bf_get(lpfc_rsn_code, &iocbq->iocb.un.bls_rsp));
+ }
+
+- break;
++ break;
+ case CMD_XRI_ABORTED_CX:
+ case CMD_CREATE_XRI_CR: /* Do we expect to use this? */
+ case CMD_IOCB_FCP_IBIDIR64_CR: /* bidirectional xfer */
+@@ -7565,7 +7568,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
+ "2014 Invalid command 0x%x\n",
+ iocbq->iocb.ulpCommand);
+ return IOCB_ERROR;
+- break;
++ break;
+ }
+
+ bf_set(wqe_xri_tag, &wqe->generic.wqe_com, xritag);
+@@ -10481,10 +10484,14 @@ lpfc_sli4_sp_handle_rcqe(struct lpfc_hba *phba, struct lpfc_rcqe *rcqe)
+ struct lpfc_queue *hrq = phba->sli4_hba.hdr_rq;
+ struct lpfc_queue *drq = phba->sli4_hba.dat_rq;
+ struct hbq_dmabuf *dma_buf;
+- uint32_t status;
++ uint32_t status, rq_id;
+ unsigned long iflags;
+
+- if (bf_get(lpfc_rcqe_rq_id, rcqe) != hrq->queue_id)
++ if (bf_get(lpfc_cqe_code, rcqe) == CQE_CODE_RECEIVE_V1)
++ rq_id = bf_get(lpfc_rcqe_rq_id_v1, rcqe);
++ else
++ rq_id = bf_get(lpfc_rcqe_rq_id, rcqe);
++ if (rq_id != hrq->queue_id)
+ goto out;
+
+ status = bf_get(lpfc_rcqe_status, rcqe);
+@@ -10563,6 +10570,7 @@ lpfc_sli4_sp_handle_cqe(struct lpfc_hba *phba, struct lpfc_queue *cq,
+ (struct sli4_wcqe_xri_aborted *)&cqevt);
+ break;
+ case CQE_CODE_RECEIVE:
++ case CQE_CODE_RECEIVE_V1:
+ /* Process the RQ event */
+ phba->last_completion_time = jiffies;
+ workposted = lpfc_sli4_sp_handle_rcqe(phba,
+@@ -12345,19 +12353,18 @@ lpfc_sli4_post_sgl(struct lpfc_hba *phba,
+ }
+
+ /**
+- * lpfc_sli4_init_rpi_hdrs - Post the rpi header memory region to the port
++ * lpfc_sli4_alloc_xri - Get an available rpi in the device's range
+ * @phba: pointer to lpfc hba data structure.
+ *
+ * This routine is invoked to post rpi header templates to the
+- * port for those SLI4 ports that do not support extents. This routine
+- * posts a PAGE_SIZE memory region to the port to hold up to
+- * PAGE_SIZE modulo 64 rpi context headers. This is an initialization routine
+- * and should be called only when interrupts are disabled.
++ * HBA consistent with the SLI-4 interface spec. This routine
++ * posts a SLI4_PAGE_SIZE memory region to the port to hold up to
++ * SLI4_PAGE_SIZE modulo 64 rpi context headers.
+ *
+- * Return codes
+- * 0 - successful
+- * -ERROR - otherwise.
+- */
++ * Returns
++ * A nonzero rpi defined as rpi_base <= rpi < max_rpi if successful
++ * LPFC_RPI_ALLOC_ERROR if no rpis are available.
++ **/
+ uint16_t
+ lpfc_sli4_alloc_xri(struct lpfc_hba *phba)
+ {
+@@ -13406,7 +13413,7 @@ lpfc_sli4_seq_abort_rsp_cmpl(struct lpfc_hba *phba,
+ * This function validates the xri maps to the known range of XRIs allocated an
+ * used by the driver.
+ **/
+-static uint16_t
++uint16_t
+ lpfc_sli4_xri_inrange(struct lpfc_hba *phba,
+ uint16_t xri)
+ {
+@@ -13643,10 +13650,12 @@ lpfc_seq_complete(struct hbq_dmabuf *dmabuf)
+ static struct lpfc_iocbq *
+ lpfc_prep_seq(struct lpfc_vport *vport, struct hbq_dmabuf *seq_dmabuf)
+ {
++ struct hbq_dmabuf *hbq_buf;
+ struct lpfc_dmabuf *d_buf, *n_buf;
+ struct lpfc_iocbq *first_iocbq, *iocbq;
+ struct fc_frame_header *fc_hdr;
+ uint32_t sid;
++ uint32_t len, tot_len;
+ struct ulp_bde64 *pbde;
+
+ fc_hdr = (struct fc_frame_header *)seq_dmabuf->hbuf.virt;
+@@ -13655,6 +13664,7 @@ lpfc_prep_seq(struct lpfc_vport *vport, struct hbq_dmabuf *seq_dmabuf)
+ lpfc_update_rcv_time_stamp(vport);
+ /* get the Remote Port's SID */
+ sid = sli4_sid_from_fc_hdr(fc_hdr);
++ tot_len = 0;
+ /* Get an iocbq struct to fill in. */
+ first_iocbq = lpfc_sli_get_iocbq(vport->phba);
+ if (first_iocbq) {
+@@ -13662,9 +13672,12 @@ lpfc_prep_seq(struct lpfc_vport *vport, struct hbq_dmabuf *seq_dmabuf)
+ first_iocbq->iocb.unsli3.rcvsli3.acc_len = 0;
+ first_iocbq->iocb.ulpStatus = IOSTAT_SUCCESS;
+ first_iocbq->iocb.ulpCommand = CMD_IOCB_RCV_SEQ64_CX;
+- first_iocbq->iocb.ulpContext = be16_to_cpu(fc_hdr->fh_ox_id);
+- /* iocbq is prepped for internal consumption. Logical vpi. */
+- first_iocbq->iocb.unsli3.rcvsli3.vpi = vport->vpi;
++ first_iocbq->iocb.ulpContext = NO_XRI;
++ first_iocbq->iocb.unsli3.rcvsli3.ox_id =
++ be16_to_cpu(fc_hdr->fh_ox_id);
++ /* iocbq is prepped for internal consumption. Physical vpi. */
++ first_iocbq->iocb.unsli3.rcvsli3.vpi =
++ vport->phba->vpi_ids[vport->vpi];
+ /* put the first buffer into the first IOCBq */
+ first_iocbq->context2 = &seq_dmabuf->dbuf;
+ first_iocbq->context3 = NULL;
+@@ -13672,9 +13685,9 @@ lpfc_prep_seq(struct lpfc_vport *vport, struct hbq_dmabuf *seq_dmabuf)
+ first_iocbq->iocb.un.cont64[0].tus.f.bdeSize =
+ LPFC_DATA_BUF_SIZE;
+ first_iocbq->iocb.un.rcvels.remoteID = sid;
+- first_iocbq->iocb.unsli3.rcvsli3.acc_len +=
+- bf_get(lpfc_rcqe_length,
++ tot_len = bf_get(lpfc_rcqe_length,
+ &seq_dmabuf->cq_event.cqe.rcqe_cmpl);
++ first_iocbq->iocb.unsli3.rcvsli3.acc_len = tot_len;
+ }
+ iocbq = first_iocbq;
+ /*
+@@ -13692,9 +13705,13 @@ lpfc_prep_seq(struct lpfc_vport *vport, struct hbq_dmabuf *seq_dmabuf)
+ pbde = (struct ulp_bde64 *)
+ &iocbq->iocb.unsli3.sli3Words[4];
+ pbde->tus.f.bdeSize = LPFC_DATA_BUF_SIZE;
+- first_iocbq->iocb.unsli3.rcvsli3.acc_len +=
+- bf_get(lpfc_rcqe_length,
+- &seq_dmabuf->cq_event.cqe.rcqe_cmpl);
++
++ /* We need to get the size out of the right CQE */
++ hbq_buf = container_of(d_buf, struct hbq_dmabuf, dbuf);
++ len = bf_get(lpfc_rcqe_length,
++ &hbq_buf->cq_event.cqe.rcqe_cmpl);
++ iocbq->iocb.unsli3.rcvsli3.acc_len += len;
++ tot_len += len;
+ } else {
+ iocbq = lpfc_sli_get_iocbq(vport->phba);
+ if (!iocbq) {
+@@ -13712,9 +13729,14 @@ lpfc_prep_seq(struct lpfc_vport *vport, struct hbq_dmabuf *seq_dmabuf)
+ iocbq->iocb.ulpBdeCount = 1;
+ iocbq->iocb.un.cont64[0].tus.f.bdeSize =
+ LPFC_DATA_BUF_SIZE;
+- first_iocbq->iocb.unsli3.rcvsli3.acc_len +=
+- bf_get(lpfc_rcqe_length,
+- &seq_dmabuf->cq_event.cqe.rcqe_cmpl);
++
++ /* We need to get the size out of the right CQE */
++ hbq_buf = container_of(d_buf, struct hbq_dmabuf, dbuf);
++ len = bf_get(lpfc_rcqe_length,
++ &hbq_buf->cq_event.cqe.rcqe_cmpl);
++ tot_len += len;
++ iocbq->iocb.unsli3.rcvsli3.acc_len = tot_len;
++
+ iocbq->iocb.un.rcvels.remoteID = sid;
+ list_add_tail(&iocbq->list, &first_iocbq->list);
+ }
+@@ -13787,7 +13809,13 @@ lpfc_sli4_handle_received_buffer(struct lpfc_hba *phba,
+ lpfc_in_buf_free(phba, &dmabuf->dbuf);
+ return;
+ }
+- fcfi = bf_get(lpfc_rcqe_fcf_id, &dmabuf->cq_event.cqe.rcqe_cmpl);
++ if ((bf_get(lpfc_cqe_code,
++ &dmabuf->cq_event.cqe.rcqe_cmpl) == CQE_CODE_RECEIVE_V1))
++ fcfi = bf_get(lpfc_rcqe_fcf_id_v1,
++ &dmabuf->cq_event.cqe.rcqe_cmpl);
++ else
++ fcfi = bf_get(lpfc_rcqe_fcf_id,
++ &dmabuf->cq_event.cqe.rcqe_cmpl);
+ vport = lpfc_fc_frame_to_vport(phba, fc_hdr, fcfi);
+ if (!vport || !(vport->vpi_state & LPFC_VPI_REGISTERED)) {
+ /* throw out the frame */
+diff --git a/drivers/scsi/lpfc/lpfc_sli4.h b/drivers/scsi/lpfc/lpfc_sli4.h
+index 4b17035..88387c1 100644
+--- a/drivers/scsi/lpfc/lpfc_sli4.h
++++ b/drivers/scsi/lpfc/lpfc_sli4.h
+@@ -81,6 +81,8 @@
+ (fc_hdr)->fh_f_ctl[1] << 8 | \
+ (fc_hdr)->fh_f_ctl[2])
+
++#define LPFC_FW_RESET_MAXIMUM_WAIT_10MS_CNT 12000
++
+ enum lpfc_sli4_queue_type {
+ LPFC_EQ,
+ LPFC_GCQ,
+diff --git a/drivers/scsi/mpt2sas/mpt2sas_base.c b/drivers/scsi/mpt2sas/mpt2sas_base.c
+index 1da606c..83035bd 100644
+--- a/drivers/scsi/mpt2sas/mpt2sas_base.c
++++ b/drivers/scsi/mpt2sas/mpt2sas_base.c
+@@ -1740,9 +1740,11 @@ _base_display_dell_branding(struct MPT2SAS_ADAPTER *ioc)
+ static void
+ _base_display_intel_branding(struct MPT2SAS_ADAPTER *ioc)
+ {
+- if (ioc->pdev->subsystem_vendor == PCI_VENDOR_ID_INTEL &&
+- ioc->pdev->device == MPI2_MFGPAGE_DEVID_SAS2008) {
++ if (ioc->pdev->subsystem_vendor != PCI_VENDOR_ID_INTEL)
++ return;
+
++ switch (ioc->pdev->device) {
++ case MPI2_MFGPAGE_DEVID_SAS2008:
+ switch (ioc->pdev->subsystem_device) {
+ case MPT2SAS_INTEL_RMS2LL080_SSDID:
+ printk(MPT2SAS_INFO_FMT "%s\n", ioc->name,
+@@ -1752,7 +1754,20 @@ _base_display_intel_branding(struct MPT2SAS_ADAPTER *ioc)
+ printk(MPT2SAS_INFO_FMT "%s\n", ioc->name,
+ MPT2SAS_INTEL_RMS2LL040_BRANDING);
+ break;
++ default:
++ break;
+ }
++ case MPI2_MFGPAGE_DEVID_SAS2308_2:
++ switch (ioc->pdev->subsystem_device) {
++ case MPT2SAS_INTEL_RS25GB008_SSDID:
++ printk(MPT2SAS_INFO_FMT "%s\n", ioc->name,
++ MPT2SAS_INTEL_RS25GB008_BRANDING);
++ break;
++ default:
++ break;
++ }
++ default:
++ break;
+ }
+ }
+
+diff --git a/drivers/scsi/mpt2sas/mpt2sas_base.h b/drivers/scsi/mpt2sas/mpt2sas_base.h
+index 451dc1c..41a57a7 100644
+--- a/drivers/scsi/mpt2sas/mpt2sas_base.h
++++ b/drivers/scsi/mpt2sas/mpt2sas_base.h
+@@ -161,12 +161,15 @@
+ "Intel Integrated RAID Module RMS2LL080"
+ #define MPT2SAS_INTEL_RMS2LL040_BRANDING \
+ "Intel Integrated RAID Module RMS2LL040"
++#define MPT2SAS_INTEL_RS25GB008_BRANDING \
++ "Intel(R) RAID Controller RS25GB008"
+
+ /*
+ * Intel HBA SSDIDs
+ */
+ #define MPT2SAS_INTEL_RMS2LL080_SSDID 0x350E
+ #define MPT2SAS_INTEL_RMS2LL040_SSDID 0x350F
++#define MPT2SAS_INTEL_RS25GB008_SSDID 0x3000
+
+
+ /*
+diff --git a/drivers/scsi/mpt2sas/mpt2sas_scsih.c b/drivers/scsi/mpt2sas/mpt2sas_scsih.c
+index e327a3c..8dc2ad4 100644
+--- a/drivers/scsi/mpt2sas/mpt2sas_scsih.c
++++ b/drivers/scsi/mpt2sas/mpt2sas_scsih.c
+@@ -3698,7 +3698,7 @@ _scsih_qcmd_lck(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *))
+ return 0;
+ }
+
+- if (ioc->pci_error_recovery) {
++ if (ioc->pci_error_recovery || ioc->remove_host) {
+ scmd->result = DID_NO_CONNECT << 16;
+ scmd->scsi_done(scmd);
+ return 0;
+@@ -7211,7 +7211,6 @@ _scsih_remove(struct pci_dev *pdev)
+ }
+
+ sas_remove_host(shost);
+- _scsih_shutdown(pdev);
+ list_del(&ioc->list);
+ scsi_remove_host(shost);
+ scsi_host_put(shost);
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 920b76b..b2df2f9 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -3822,15 +3822,12 @@ qla2x00_loop_resync(scsi_qla_host_t *vha)
+ req = vha->req;
+ rsp = req->rsp;
+
+- atomic_set(&vha->loop_state, LOOP_UPDATE);
+ clear_bit(ISP_ABORT_RETRY, &vha->dpc_flags);
+ if (vha->flags.online) {
+ if (!(rval = qla2x00_fw_ready(vha))) {
+ /* Wait at most MAX_TARGET RSCNs for a stable link. */
+ wait_time = 256;
+ do {
+- atomic_set(&vha->loop_state, LOOP_UPDATE);
+-
+ /* Issue a marker after FW becomes ready. */
+ qla2x00_marker(vha, req, rsp, 0, 0,
+ MK_SYNC_ALL);
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 1b60a95..e0fa877 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -736,7 +736,6 @@ skip_rio:
+ vha->flags.rscn_queue_overflow = 1;
+ }
+
+- atomic_set(&vha->loop_state, LOOP_UPDATE);
+ atomic_set(&vha->loop_down_timer, 0);
+ vha->flags.management_server_logged_in = 0;
+
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index 98b6e3b..e809e9d 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -446,8 +446,19 @@ static inline void legacy_pty_init(void) { }
+ int pty_limit = NR_UNIX98_PTY_DEFAULT;
+ static int pty_limit_min;
+ static int pty_limit_max = NR_UNIX98_PTY_MAX;
++static int tty_count;
+ static int pty_count;
+
++static inline void pty_inc_count(void)
++{
++ pty_count = (++tty_count) / 2;
++}
++
++static inline void pty_dec_count(void)
++{
++ pty_count = (--tty_count) / 2;
++}
++
+ static struct cdev ptmx_cdev;
+
+ static struct ctl_table pty_table[] = {
+@@ -542,6 +553,7 @@ static struct tty_struct *pts_unix98_lookup(struct tty_driver *driver,
+
+ static void pty_unix98_shutdown(struct tty_struct *tty)
+ {
++ tty_driver_remove_tty(tty->driver, tty);
+ /* We have our own method as we don't use the tty index */
+ kfree(tty->termios);
+ }
+@@ -588,7 +600,8 @@ static int pty_unix98_install(struct tty_driver *driver, struct tty_struct *tty)
+ */
+ tty_driver_kref_get(driver);
+ tty->count++;
+- pty_count++;
++ pty_inc_count(); /* tty */
++ pty_inc_count(); /* tty->link */
+ return 0;
+ err_free_mem:
+ deinitialize_tty_struct(o_tty);
+@@ -602,7 +615,7 @@ err_free_tty:
+
+ static void pty_unix98_remove(struct tty_driver *driver, struct tty_struct *tty)
+ {
+- pty_count--;
++ pty_dec_count();
+ }
+
+ static const struct tty_operations ptm_unix98_ops = {
+diff --git a/drivers/tty/serial/8250.c b/drivers/tty/serial/8250.c
+index d32b5bb..762ce72 100644
+--- a/drivers/tty/serial/8250.c
++++ b/drivers/tty/serial/8250.c
+@@ -1819,6 +1819,8 @@ static void serial8250_backup_timeout(unsigned long data)
+ unsigned int iir, ier = 0, lsr;
+ unsigned long flags;
+
++ spin_lock_irqsave(&up->port.lock, flags);
++
+ /*
+ * Must disable interrupts or else we risk racing with the interrupt
+ * based handler.
+@@ -1836,10 +1838,8 @@ static void serial8250_backup_timeout(unsigned long data)
+ * the "Diva" UART used on the management processor on many HP
+ * ia64 and parisc boxes.
+ */
+- spin_lock_irqsave(&up->port.lock, flags);
+ lsr = serial_in(up, UART_LSR);
+ up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
+- spin_unlock_irqrestore(&up->port.lock, flags);
+ if ((iir & UART_IIR_NO_INT) && (up->ier & UART_IER_THRI) &&
+ (!uart_circ_empty(&up->port.state->xmit) || up->port.x_char) &&
+ (lsr & UART_LSR_THRE)) {
+@@ -1848,11 +1848,13 @@ static void serial8250_backup_timeout(unsigned long data)
+ }
+
+ if (!(iir & UART_IIR_NO_INT))
+- serial8250_handle_port(up);
++ transmit_chars(up);
+
+ if (is_real_interrupt(up->port.irq))
+ serial_out(up, UART_IER, ier);
+
++ spin_unlock_irqrestore(&up->port.lock, flags);
++
+ /* Standard timer interval plus 0.2s to keep the port running */
+ mod_timer(&up->timer,
+ jiffies + uart_poll_timeout(&up->port) + HZ / 5);
+diff --git a/drivers/tty/serial/8250_pci.c b/drivers/tty/serial/8250_pci.c
+index f41b425..ff48fdb 100644
+--- a/drivers/tty/serial/8250_pci.c
++++ b/drivers/tty/serial/8250_pci.c
+@@ -3886,7 +3886,7 @@ static struct pci_device_id serial_pci_tbl[] = {
+ 0, 0, pbn_b0_1_115200 },
+
+ /*
+- * Best Connectivity PCI Multi I/O cards
++ * Best Connectivity and Rosewill PCI Multi I/O cards
+ */
+
+ { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9865,
+@@ -3894,6 +3894,10 @@ static struct pci_device_id serial_pci_tbl[] = {
+ 0, 0, pbn_b0_1_115200 },
+
+ { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9865,
++ 0xA000, 0x3002,
++ 0, 0, pbn_b0_bt_2_115200 },
++
++ { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9865,
+ 0xA000, 0x3004,
+ 0, 0, pbn_b0_bt_4_115200 },
+ /* Intel CE4100 */
+diff --git a/drivers/tty/serial/8250_pnp.c b/drivers/tty/serial/8250_pnp.c
+index fc301f6..a2f2365 100644
+--- a/drivers/tty/serial/8250_pnp.c
++++ b/drivers/tty/serial/8250_pnp.c
+@@ -109,6 +109,9 @@ static const struct pnp_device_id pnp_dev_table[] = {
+ /* IBM */
+ /* IBM Thinkpad 701 Internal Modem Voice */
+ { "IBM0033", 0 },
++ /* Intermec */
++ /* Intermec CV60 touchscreen port */
++ { "PNP4972", 0 },
+ /* Intertex */
+ /* Intertex 28k8 33k6 Voice EXT PnP */
+ { "IXDC801", 0 },
+diff --git a/drivers/tty/serial/max3107-aava.c b/drivers/tty/serial/max3107-aava.c
+index a1fe304..d73aadd 100644
+--- a/drivers/tty/serial/max3107-aava.c
++++ b/drivers/tty/serial/max3107-aava.c
+@@ -340,5 +340,5 @@ module_exit(max3107_exit);
+
+ MODULE_DESCRIPTION("MAX3107 driver");
+ MODULE_AUTHOR("Aavamobile");
+-MODULE_ALIAS("aava-max3107-spi");
++MODULE_ALIAS("spi:aava-max3107");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/tty/serial/max3107.c b/drivers/tty/serial/max3107.c
+index 750b4f6..a816460 100644
+--- a/drivers/tty/serial/max3107.c
++++ b/drivers/tty/serial/max3107.c
+@@ -1209,5 +1209,5 @@ module_exit(max3107_exit);
+
+ MODULE_DESCRIPTION("MAX3107 driver");
+ MODULE_AUTHOR("Aavamobile");
+-MODULE_ALIAS("max3107-spi");
++MODULE_ALIAS("spi:max3107");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/tty/serial/mrst_max3110.c b/drivers/tty/serial/mrst_max3110.c
+index a764bf9..23bc743 100644
+--- a/drivers/tty/serial/mrst_max3110.c
++++ b/drivers/tty/serial/mrst_max3110.c
+@@ -917,4 +917,4 @@ module_init(serial_m3110_init);
+ module_exit(serial_m3110_exit);
+
+ MODULE_LICENSE("GPL v2");
+-MODULE_ALIAS("max3110-uart");
++MODULE_ALIAS("spi:max3110-uart");
+diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
+index 47cadf4..6d3ec14 100644
+--- a/drivers/tty/serial/omap-serial.c
++++ b/drivers/tty/serial/omap-serial.c
+@@ -806,8 +806,7 @@ serial_omap_set_termios(struct uart_port *port, struct ktermios *termios,
+
+ serial_omap_set_mctrl(&up->port, up->port.mctrl);
+ /* Software Flow Control Configuration */
+- if (termios->c_iflag & (IXON | IXOFF))
+- serial_omap_configure_xonxoff(up, termios);
++ serial_omap_configure_xonxoff(up, termios);
+
+ spin_unlock_irqrestore(&up->port.lock, flags);
+ dev_dbg(up->port.dev, "serial_omap_set_termios+%d\n", up->pdev->id);
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 6556f74..b6f92d3 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1294,8 +1294,7 @@ static int tty_driver_install_tty(struct tty_driver *driver,
+ *
+ * Locking: tty_mutex for now
+ */
+-static void tty_driver_remove_tty(struct tty_driver *driver,
+- struct tty_struct *tty)
++void tty_driver_remove_tty(struct tty_driver *driver, struct tty_struct *tty)
+ {
+ if (driver->ops->remove)
+ driver->ops->remove(driver, tty);
+diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
+index 88cfb8f..0f3a724 100644
+--- a/drivers/usb/host/ehci-hub.c
++++ b/drivers/usb/host/ehci-hub.c
+@@ -343,7 +343,7 @@ static int ehci_bus_resume (struct usb_hcd *hcd)
+ u32 temp;
+ u32 power_okay;
+ int i;
+- u8 resume_needed = 0;
++ unsigned long resume_needed = 0;
+
+ if (time_before (jiffies, ehci->next_statechange))
+ msleep(5);
+@@ -416,7 +416,7 @@ static int ehci_bus_resume (struct usb_hcd *hcd)
+ if (test_bit(i, &ehci->bus_suspended) &&
+ (temp & PORT_SUSPEND)) {
+ temp |= PORT_RESUME;
+- resume_needed = 1;
++ set_bit(i, &resume_needed);
+ }
+ ehci_writel(ehci, temp, &ehci->regs->port_status [i]);
+ }
+@@ -431,8 +431,7 @@ static int ehci_bus_resume (struct usb_hcd *hcd)
+ i = HCS_N_PORTS (ehci->hcs_params);
+ while (i--) {
+ temp = ehci_readl(ehci, &ehci->regs->port_status [i]);
+- if (test_bit(i, &ehci->bus_suspended) &&
+- (temp & PORT_SUSPEND)) {
++ if (test_bit(i, &resume_needed)) {
+ temp &= ~(PORT_RWC_BITS | PORT_RESUME);
+ ehci_writel(ehci, temp, &ehci->regs->port_status [i]);
+ ehci_vdbg (ehci, "resumed port %d\n", i + 1);
+diff --git a/drivers/usb/host/ehci-s5p.c b/drivers/usb/host/ehci-s5p.c
+index e3374c8..491a209 100644
+--- a/drivers/usb/host/ehci-s5p.c
++++ b/drivers/usb/host/ehci-s5p.c
+@@ -86,6 +86,7 @@ static int __devinit s5p_ehci_probe(struct platform_device *pdev)
+ goto fail_hcd;
+ }
+
++ s5p_ehci->hcd = hcd;
+ s5p_ehci->clk = clk_get(&pdev->dev, "usbhost");
+
+ if (IS_ERR(s5p_ehci->clk)) {
+diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c
+index e9f004e..629a968 100644
+--- a/drivers/usb/host/pci-quirks.c
++++ b/drivers/usb/host/pci-quirks.c
+@@ -535,20 +535,34 @@ static void __devinit quirk_usb_handoff_ohci(struct pci_dev *pdev)
+ iounmap(base);
+ }
+
++static const struct dmi_system_id __devinitconst ehci_dmi_nohandoff_table[] = {
++ {
++ /* Pegatron Lucid (ExoPC) */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "EXOPG06411"),
++ DMI_MATCH(DMI_BIOS_VERSION, "Lucid-CE-133"),
++ },
++ },
++ {
++ /* Pegatron Lucid (Ordissimo AIRIS) */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "M11JB"),
++ DMI_MATCH(DMI_BIOS_VERSION, "Lucid-GE-133"),
++ },
++ },
++ { }
++};
++
+ static void __devinit ehci_bios_handoff(struct pci_dev *pdev,
+ void __iomem *op_reg_base,
+ u32 cap, u8 offset)
+ {
+ int try_handoff = 1, tried_handoff = 0;
+
+- /* The Pegatron Lucid (ExoPC) tablet sporadically waits for 90
+- * seconds trying the handoff on its unused controller. Skip
+- * it. */
++ /* The Pegatron Lucid tablet sporadically waits for 98 seconds trying
++ * the handoff on its unused controller. Skip it. */
+ if (pdev->vendor == 0x8086 && pdev->device == 0x283a) {
+- const char *dmi_bn = dmi_get_system_info(DMI_BOARD_NAME);
+- const char *dmi_bv = dmi_get_system_info(DMI_BIOS_VERSION);
+- if (dmi_bn && !strcmp(dmi_bn, "EXOPG06411") &&
+- dmi_bv && !strcmp(dmi_bv, "Lucid-CE-133"))
++ if (dmi_check_system(ehci_dmi_nohandoff_table))
+ try_handoff = 0;
+ }
+
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 0be788c..723f823 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -463,11 +463,12 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ && (temp & PORT_POWER))
+ status |= USB_PORT_STAT_SUSPEND;
+ }
+- if ((temp & PORT_PLS_MASK) == XDEV_RESUME) {
++ if ((temp & PORT_PLS_MASK) == XDEV_RESUME &&
++ !DEV_SUPERSPEED(temp)) {
+ if ((temp & PORT_RESET) || !(temp & PORT_PE))
+ goto error;
+- if (!DEV_SUPERSPEED(temp) && time_after_eq(jiffies,
+- bus_state->resume_done[wIndex])) {
++ if (time_after_eq(jiffies,
++ bus_state->resume_done[wIndex])) {
+ xhci_dbg(xhci, "Resume USB2 port %d\n",
+ wIndex + 1);
+ bus_state->resume_done[wIndex] = 0;
+@@ -487,6 +488,14 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ xhci_ring_device(xhci, slot_id);
+ bus_state->port_c_suspend |= 1 << wIndex;
+ bus_state->suspended_ports &= ~(1 << wIndex);
++ } else {
++ /*
++ * The resume has been signaling for less than
++ * 20ms. Report the port status as SUSPEND,
++ * let the usbcore check port status again
++ * and clear resume signaling later.
++ */
++ status |= USB_PORT_STAT_SUSPEND;
+ }
+ }
+ if ((temp & PORT_PLS_MASK) == XDEV_U0
+@@ -664,7 +673,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ xhci_dbg(xhci, "PORTSC %04x\n", temp);
+ if (temp & PORT_RESET)
+ goto error;
+- if (temp & XDEV_U3) {
++ if ((temp & PORT_PLS_MASK) == XDEV_U3) {
+ if ((temp & PORT_PE) == 0)
+ goto error;
+
+@@ -752,7 +761,7 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
+ memset(buf, 0, retval);
+ status = 0;
+
+- mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC;
++ mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC;
+
+ spin_lock_irqsave(&xhci->lock, flags);
+ /* For each port, did anything change? If so, set that bit in buf. */
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 70cacbb..d0871ea 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -516,8 +516,12 @@ void xhci_find_new_dequeue_state(struct xhci_hcd *xhci,
+ (unsigned long long) addr);
+ }
+
++/* flip_cycle means flip the cycle bit of all but the first and last TRB.
++ * (The last TRB actually points to the ring enqueue pointer, which is not part
++ * of this TD.) This is used to remove partially enqueued isoc TDs from a ring.
++ */
+ static void td_to_noop(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
+- struct xhci_td *cur_td)
++ struct xhci_td *cur_td, bool flip_cycle)
+ {
+ struct xhci_segment *cur_seg;
+ union xhci_trb *cur_trb;
+@@ -531,6 +535,12 @@ static void td_to_noop(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
+ * leave the pointers intact.
+ */
+ cur_trb->generic.field[3] &= cpu_to_le32(~TRB_CHAIN);
++ /* Flip the cycle bit (link TRBs can't be the first
++ * or last TRB).
++ */
++ if (flip_cycle)
++ cur_trb->generic.field[3] ^=
++ cpu_to_le32(TRB_CYCLE);
+ xhci_dbg(xhci, "Cancel (unchain) link TRB\n");
+ xhci_dbg(xhci, "Address = %p (0x%llx dma); "
+ "in seg %p (0x%llx dma)\n",
+@@ -544,6 +554,11 @@ static void td_to_noop(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
+ cur_trb->generic.field[2] = 0;
+ /* Preserve only the cycle bit of this TRB */
+ cur_trb->generic.field[3] &= cpu_to_le32(TRB_CYCLE);
++ /* Flip the cycle bit except on the first or last TRB */
++ if (flip_cycle && cur_trb != cur_td->first_trb &&
++ cur_trb != cur_td->last_trb)
++ cur_trb->generic.field[3] ^=
++ cpu_to_le32(TRB_CYCLE);
+ cur_trb->generic.field[3] |= cpu_to_le32(
+ TRB_TYPE(TRB_TR_NOOP));
+ xhci_dbg(xhci, "Cancel TRB %p (0x%llx dma) "
+@@ -722,14 +737,14 @@ static void handle_stopped_endpoint(struct xhci_hcd *xhci,
+ cur_td->urb->stream_id,
+ cur_td, &deq_state);
+ else
+- td_to_noop(xhci, ep_ring, cur_td);
++ td_to_noop(xhci, ep_ring, cur_td, false);
+ remove_finished_td:
+ /*
+ * The event handler won't see a completion for this TD anymore,
+ * so remove it from the endpoint ring's TD list. Keep it in
+ * the cancelled TD list for URB completion later.
+ */
+- list_del(&cur_td->td_list);
++ list_del_init(&cur_td->td_list);
+ }
+ last_unlinked_td = cur_td;
+ xhci_stop_watchdog_timer_in_irq(xhci, ep);
+@@ -757,7 +772,7 @@ remove_finished_td:
+ do {
+ cur_td = list_entry(ep->cancelled_td_list.next,
+ struct xhci_td, cancelled_td_list);
+- list_del(&cur_td->cancelled_td_list);
++ list_del_init(&cur_td->cancelled_td_list);
+
+ /* Clean up the cancelled URB */
+ /* Doesn't matter what we pass for status, since the core will
+@@ -865,9 +880,9 @@ void xhci_stop_endpoint_command_watchdog(unsigned long arg)
+ cur_td = list_first_entry(&ring->td_list,
+ struct xhci_td,
+ td_list);
+- list_del(&cur_td->td_list);
++ list_del_init(&cur_td->td_list);
+ if (!list_empty(&cur_td->cancelled_td_list))
+- list_del(&cur_td->cancelled_td_list);
++ list_del_init(&cur_td->cancelled_td_list);
+ xhci_giveback_urb_in_irq(xhci, cur_td,
+ -ESHUTDOWN, "killed");
+ }
+@@ -876,7 +891,7 @@ void xhci_stop_endpoint_command_watchdog(unsigned long arg)
+ &temp_ep->cancelled_td_list,
+ struct xhci_td,
+ cancelled_td_list);
+- list_del(&cur_td->cancelled_td_list);
++ list_del_init(&cur_td->cancelled_td_list);
+ xhci_giveback_urb_in_irq(xhci, cur_td,
+ -ESHUTDOWN, "killed");
+ }
+@@ -1567,10 +1582,10 @@ td_cleanup:
+ else
+ *status = 0;
+ }
+- list_del(&td->td_list);
++ list_del_init(&td->td_list);
+ /* Was this TD slated to be cancelled but completed anyway? */
+ if (!list_empty(&td->cancelled_td_list))
+- list_del(&td->cancelled_td_list);
++ list_del_init(&td->cancelled_td_list);
+
+ urb_priv->td_cnt++;
+ /* Giveback the urb when all the tds are completed */
+@@ -2508,11 +2523,8 @@ static int prepare_transfer(struct xhci_hcd *xhci,
+
+ if (td_index == 0) {
+ ret = usb_hcd_link_urb_to_ep(bus_to_hcd(urb->dev->bus), urb);
+- if (unlikely(ret)) {
+- xhci_urb_free_priv(xhci, urb_priv);
+- urb->hcpriv = NULL;
++ if (unlikely(ret))
+ return ret;
+- }
+ }
+
+ td->urb = urb;
+@@ -2680,6 +2692,10 @@ static u32 xhci_v1_0_td_remainder(int running_total, int trb_buff_len,
+ {
+ int packets_transferred;
+
++ /* One TRB with a zero-length data packet. */
++ if (running_total == 0 && trb_buff_len == 0)
++ return 0;
++
+ /* All the TRB queueing functions don't count the current TRB in
+ * running_total.
+ */
+@@ -3121,20 +3137,15 @@ static int count_isoc_trbs_needed(struct xhci_hcd *xhci,
+ struct urb *urb, int i)
+ {
+ int num_trbs = 0;
+- u64 addr, td_len, running_total;
++ u64 addr, td_len;
+
+ addr = (u64) (urb->transfer_dma + urb->iso_frame_desc[i].offset);
+ td_len = urb->iso_frame_desc[i].length;
+
+- running_total = TRB_MAX_BUFF_SIZE - (addr & (TRB_MAX_BUFF_SIZE - 1));
+- running_total &= TRB_MAX_BUFF_SIZE - 1;
+- if (running_total != 0)
+- num_trbs++;
+-
+- while (running_total < td_len) {
++ num_trbs = DIV_ROUND_UP(td_len + (addr & (TRB_MAX_BUFF_SIZE - 1)),
++ TRB_MAX_BUFF_SIZE);
++ if (num_trbs == 0)
+ num_trbs++;
+- running_total += TRB_MAX_BUFF_SIZE;
+- }
+
+ return num_trbs;
+ }
+@@ -3234,6 +3245,7 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ start_trb = &ep_ring->enqueue->generic;
+ start_cycle = ep_ring->cycle_state;
+
++ urb_priv = urb->hcpriv;
+ /* Queue the first TRB, even if it's zero-length */
+ for (i = 0; i < num_tds; i++) {
+ unsigned int total_packet_count;
+@@ -3245,9 +3257,11 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ addr = start_addr + urb->iso_frame_desc[i].offset;
+ td_len = urb->iso_frame_desc[i].length;
+ td_remain_len = td_len;
+- /* FIXME: Ignoring zero-length packets, can those happen? */
+ total_packet_count = roundup(td_len,
+ le16_to_cpu(urb->ep->desc.wMaxPacketSize));
++ /* A zero-length transfer still involves at least one packet. */
++ if (total_packet_count == 0)
++ total_packet_count++;
+ burst_count = xhci_get_burst_count(xhci, urb->dev, urb,
+ total_packet_count);
+ residue = xhci_get_last_burst_packet_count(xhci,
+@@ -3257,12 +3271,13 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+
+ ret = prepare_transfer(xhci, xhci->devs[slot_id], ep_index,
+ urb->stream_id, trbs_per_td, urb, i, mem_flags);
+- if (ret < 0)
+- return ret;
++ if (ret < 0) {
++ if (i == 0)
++ return ret;
++ goto cleanup;
++ }
+
+- urb_priv = urb->hcpriv;
+ td = urb_priv->td[i];
+-
+ for (j = 0; j < trbs_per_td; j++) {
+ u32 remainder = 0;
+ field = TRB_TBC(burst_count) | TRB_TLBPC(residue);
+@@ -3352,6 +3367,27 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
+ start_cycle, start_trb);
+ return 0;
++cleanup:
++ /* Clean up a partially enqueued isoc transfer. */
++
++ for (i--; i >= 0; i--)
++ list_del_init(&urb_priv->td[i]->td_list);
++
++ /* Use the first TD as a temporary variable to turn the TDs we've queued
++ * into No-ops with a software-owned cycle bit. That way the hardware
++ * won't accidentally start executing bogus TDs when we partially
++ * overwrite them. td->first_trb and td->start_seg are already set.
++ */
++ urb_priv->td[0]->last_trb = ep_ring->enqueue;
++ /* Every TRB except the first & last will have its cycle bit flipped. */
++ td_to_noop(xhci, ep_ring, urb_priv->td[0], true);
++
++ /* Reset the ring enqueue back to the first TRB and its cycle bit. */
++ ep_ring->enqueue = urb_priv->td[0]->first_trb;
++ ep_ring->enq_seg = urb_priv->td[0]->start_seg;
++ ep_ring->cycle_state = start_cycle;
++ usb_hcd_unlink_urb_from_ep(bus_to_hcd(urb->dev->bus), urb);
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 9824761..7ea48b3 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1085,8 +1085,11 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
+ if (urb->dev->speed == USB_SPEED_FULL) {
+ ret = xhci_check_maxpacket(xhci, slot_id,
+ ep_index, urb);
+- if (ret < 0)
++ if (ret < 0) {
++ xhci_urb_free_priv(xhci, urb_priv);
++ urb->hcpriv = NULL;
+ return ret;
++ }
+ }
+
+ /* We have a spinlock and interrupts disabled, so we must pass
+@@ -1097,6 +1100,8 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
+ goto dying;
+ ret = xhci_queue_ctrl_tx(xhci, GFP_ATOMIC, urb,
+ slot_id, ep_index);
++ if (ret)
++ goto free_priv;
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ } else if (usb_endpoint_xfer_bulk(&urb->ep->desc)) {
+ spin_lock_irqsave(&xhci->lock, flags);
+@@ -1117,6 +1122,8 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
+ ret = xhci_queue_bulk_tx(xhci, GFP_ATOMIC, urb,
+ slot_id, ep_index);
+ }
++ if (ret)
++ goto free_priv;
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ } else if (usb_endpoint_xfer_int(&urb->ep->desc)) {
+ spin_lock_irqsave(&xhci->lock, flags);
+@@ -1124,6 +1131,8 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
+ goto dying;
+ ret = xhci_queue_intr_tx(xhci, GFP_ATOMIC, urb,
+ slot_id, ep_index);
++ if (ret)
++ goto free_priv;
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ } else {
+ spin_lock_irqsave(&xhci->lock, flags);
+@@ -1131,18 +1140,22 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
+ goto dying;
+ ret = xhci_queue_isoc_tx_prepare(xhci, GFP_ATOMIC, urb,
+ slot_id, ep_index);
++ if (ret)
++ goto free_priv;
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ }
+ exit:
+ return ret;
+ dying:
+- xhci_urb_free_priv(xhci, urb_priv);
+- urb->hcpriv = NULL;
+ xhci_dbg(xhci, "Ep 0x%x: URB %p submitted for "
+ "non-responsive xHCI host.\n",
+ urb->ep->desc.bEndpointAddress, urb);
++ ret = -ESHUTDOWN;
++free_priv:
++ xhci_urb_free_priv(xhci, urb_priv);
++ urb->hcpriv = NULL;
+ spin_unlock_irqrestore(&xhci->lock, flags);
+- return -ESHUTDOWN;
++ return ret;
+ }
+
+ /* Get the right ring for the given URB.
+@@ -1239,6 +1252,13 @@ int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ if (temp == 0xffffffff || (xhci->xhc_state & XHCI_STATE_HALTED)) {
+ xhci_dbg(xhci, "HW died, freeing TD.\n");
+ urb_priv = urb->hcpriv;
++ for (i = urb_priv->td_cnt; i < urb_priv->length; i++) {
++ td = urb_priv->td[i];
++ if (!list_empty(&td->td_list))
++ list_del_init(&td->td_list);
++ if (!list_empty(&td->cancelled_td_list))
++ list_del_init(&td->cancelled_td_list);
++ }
+
+ usb_hcd_unlink_urb_from_ep(hcd, urb);
+ spin_unlock_irqrestore(&xhci->lock, flags);
+diff --git a/drivers/usb/musb/cppi_dma.c b/drivers/usb/musb/cppi_dma.c
+index 149f3f3..318fb4e 100644
+--- a/drivers/usb/musb/cppi_dma.c
++++ b/drivers/usb/musb/cppi_dma.c
+@@ -226,8 +226,10 @@ static int cppi_controller_stop(struct dma_controller *c)
+ struct cppi *controller;
+ void __iomem *tibase;
+ int i;
++ struct musb *musb;
+
+ controller = container_of(c, struct cppi, controller);
++ musb = controller->musb;
+
+ tibase = controller->tibase;
+ /* DISABLE INDIVIDUAL CHANNEL Interrupts */
+@@ -289,9 +291,11 @@ cppi_channel_allocate(struct dma_controller *c,
+ u8 index;
+ struct cppi_channel *cppi_ch;
+ void __iomem *tibase;
++ struct musb *musb;
+
+ controller = container_of(c, struct cppi, controller);
+ tibase = controller->tibase;
++ musb = controller->musb;
+
+ /* ep0 doesn't use DMA; remember cppi indices are 0..N-1 */
+ index = ep->epnum - 1;
+@@ -339,7 +343,8 @@ static void cppi_channel_release(struct dma_channel *channel)
+ c = container_of(channel, struct cppi_channel, channel);
+ tibase = c->controller->tibase;
+ if (!c->hw_ep)
+- dev_dbg(musb->controller, "releasing idle DMA channel %p\n", c);
++ dev_dbg(c->controller->musb->controller,
++ "releasing idle DMA channel %p\n", c);
+ else if (!c->transmit)
+ core_rxirq_enable(tibase, c->index + 1);
+
+@@ -357,10 +362,11 @@ cppi_dump_rx(int level, struct cppi_channel *c, const char *tag)
+
+ musb_ep_select(base, c->index + 1);
+
+- DBG(level, "RX DMA%d%s: %d left, csr %04x, "
+- "%08x H%08x S%08x C%08x, "
+- "B%08x L%08x %08x .. %08x"
+- "\n",
++ dev_dbg(c->controller->musb->controller,
++ "RX DMA%d%s: %d left, csr %04x, "
++ "%08x H%08x S%08x C%08x, "
++ "B%08x L%08x %08x .. %08x"
++ "\n",
+ c->index, tag,
+ musb_readl(c->controller->tibase,
+ DAVINCI_RXCPPI_BUFCNT0_REG + 4 * c->index),
+@@ -387,10 +393,11 @@ cppi_dump_tx(int level, struct cppi_channel *c, const char *tag)
+
+ musb_ep_select(base, c->index + 1);
+
+- DBG(level, "TX DMA%d%s: csr %04x, "
+- "H%08x S%08x C%08x %08x, "
+- "F%08x L%08x .. %08x"
+- "\n",
++ dev_dbg(c->controller->musb->controller,
++ "TX DMA%d%s: csr %04x, "
++ "H%08x S%08x C%08x %08x, "
++ "F%08x L%08x .. %08x"
++ "\n",
+ c->index, tag,
+ musb_readw(c->hw_ep->regs, MUSB_TXCSR),
+
+@@ -1022,6 +1029,7 @@ static bool cppi_rx_scan(struct cppi *cppi, unsigned ch)
+ int i;
+ dma_addr_t safe2ack;
+ void __iomem *regs = rx->hw_ep->regs;
++ struct musb *musb = cppi->musb;
+
+ cppi_dump_rx(6, rx, "/K");
+
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 9afb361..f968a3d 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -101,6 +101,7 @@ static int ftdi_jtag_probe(struct usb_serial *serial);
+ static int ftdi_mtxorb_hack_setup(struct usb_serial *serial);
+ static int ftdi_NDI_device_setup(struct usb_serial *serial);
+ static int ftdi_stmclite_probe(struct usb_serial *serial);
++static int ftdi_8u2232c_probe(struct usb_serial *serial);
+ static void ftdi_USB_UIRT_setup(struct ftdi_private *priv);
+ static void ftdi_HE_TIRA1_setup(struct ftdi_private *priv);
+
+@@ -128,6 +129,10 @@ static struct ftdi_sio_quirk ftdi_stmclite_quirk = {
+ .probe = ftdi_stmclite_probe,
+ };
+
++static struct ftdi_sio_quirk ftdi_8u2232c_quirk = {
++ .probe = ftdi_8u2232c_probe,
++};
++
+ /*
+ * The 8U232AM has the same API as the sio except for:
+ * - it can support MUCH higher baudrates; up to:
+@@ -177,7 +182,8 @@ static struct usb_device_id id_table_combined [] = {
+ { USB_DEVICE(FTDI_VID, FTDI_8U232AM_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_8U232AM_ALT_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_232RL_PID) },
+- { USB_DEVICE(FTDI_VID, FTDI_8U2232C_PID) },
++ { USB_DEVICE(FTDI_VID, FTDI_8U2232C_PID) ,
++ .driver_info = (kernel_ulong_t)&ftdi_8u2232c_quirk },
+ { USB_DEVICE(FTDI_VID, FTDI_4232H_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_232H_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_MICRO_CHAMELEON_PID) },
+@@ -1733,6 +1739,18 @@ static int ftdi_jtag_probe(struct usb_serial *serial)
+ return 0;
+ }
+
++static int ftdi_8u2232c_probe(struct usb_serial *serial)
++{
++ struct usb_device *udev = serial->dev;
++
++ dbg("%s", __func__);
++
++ if (strcmp(udev->manufacturer, "CALAO Systems") == 0)
++ return ftdi_jtag_probe(serial);
++
++ return 0;
++}
++
+ /*
+ * First and second port on STMCLiteadaptors is reserved for JTAG interface
+ * and the forth port for pio
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 8156561..fe22e90 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -148,6 +148,8 @@ static void option_instat_callback(struct urb *urb);
+ #define HUAWEI_PRODUCT_K4505 0x1464
+ #define HUAWEI_PRODUCT_K3765 0x1465
+ #define HUAWEI_PRODUCT_E14AC 0x14AC
++#define HUAWEI_PRODUCT_K3806 0x14AE
++#define HUAWEI_PRODUCT_K4605 0x14C6
+ #define HUAWEI_PRODUCT_K3770 0x14C9
+ #define HUAWEI_PRODUCT_K3771 0x14CA
+ #define HUAWEI_PRODUCT_K4510 0x14CB
+@@ -416,6 +418,56 @@ static void option_instat_callback(struct urb *urb);
+ #define SAMSUNG_VENDOR_ID 0x04e8
+ #define SAMSUNG_PRODUCT_GT_B3730 0x6889
+
++/* YUGA products www.yuga-info.com*/
++#define YUGA_VENDOR_ID 0x257A
++#define YUGA_PRODUCT_CEM600 0x1601
++#define YUGA_PRODUCT_CEM610 0x1602
++#define YUGA_PRODUCT_CEM500 0x1603
++#define YUGA_PRODUCT_CEM510 0x1604
++#define YUGA_PRODUCT_CEM800 0x1605
++#define YUGA_PRODUCT_CEM900 0x1606
++
++#define YUGA_PRODUCT_CEU818 0x1607
++#define YUGA_PRODUCT_CEU816 0x1608
++#define YUGA_PRODUCT_CEU828 0x1609
++#define YUGA_PRODUCT_CEU826 0x160A
++#define YUGA_PRODUCT_CEU518 0x160B
++#define YUGA_PRODUCT_CEU516 0x160C
++#define YUGA_PRODUCT_CEU528 0x160D
++#define YUGA_PRODUCT_CEU526 0x160F
++
++#define YUGA_PRODUCT_CWM600 0x2601
++#define YUGA_PRODUCT_CWM610 0x2602
++#define YUGA_PRODUCT_CWM500 0x2603
++#define YUGA_PRODUCT_CWM510 0x2604
++#define YUGA_PRODUCT_CWM800 0x2605
++#define YUGA_PRODUCT_CWM900 0x2606
++
++#define YUGA_PRODUCT_CWU718 0x2607
++#define YUGA_PRODUCT_CWU716 0x2608
++#define YUGA_PRODUCT_CWU728 0x2609
++#define YUGA_PRODUCT_CWU726 0x260A
++#define YUGA_PRODUCT_CWU518 0x260B
++#define YUGA_PRODUCT_CWU516 0x260C
++#define YUGA_PRODUCT_CWU528 0x260D
++#define YUGA_PRODUCT_CWU526 0x260F
++
++#define YUGA_PRODUCT_CLM600 0x2601
++#define YUGA_PRODUCT_CLM610 0x2602
++#define YUGA_PRODUCT_CLM500 0x2603
++#define YUGA_PRODUCT_CLM510 0x2604
++#define YUGA_PRODUCT_CLM800 0x2605
++#define YUGA_PRODUCT_CLM900 0x2606
++
++#define YUGA_PRODUCT_CLU718 0x2607
++#define YUGA_PRODUCT_CLU716 0x2608
++#define YUGA_PRODUCT_CLU728 0x2609
++#define YUGA_PRODUCT_CLU726 0x260A
++#define YUGA_PRODUCT_CLU518 0x260B
++#define YUGA_PRODUCT_CLU516 0x260C
++#define YUGA_PRODUCT_CLU528 0x260D
++#define YUGA_PRODUCT_CLU526 0x260F
++
+ /* some devices interfaces need special handling due to a number of reasons */
+ enum option_blacklist_reason {
+ OPTION_BLACKLIST_NONE = 0,
+@@ -551,6 +603,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K3765, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_ETS1220, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E14AC, 0xff, 0xff, 0xff) },
++ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K3806, 0xff, 0xff, 0xff) },
++ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4605, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K3770, 0xff, 0x02, 0x31) },
+ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K3770, 0xff, 0x02, 0x32) },
+ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K3771, 0xff, 0x02, 0x31) },
+@@ -1005,6 +1059,48 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */
+ { USB_DEVICE(ONDA_VENDOR_ID, ONDA_MT825UP) }, /* ONDA MT825UP modem */
+ { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM610) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM500) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM510) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM800) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM900) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEU818) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEU816) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEU828) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEU826) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEU518) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEU516) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEU528) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEU526) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWM600) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWM610) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWM500) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWM510) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWM800) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWM900) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWU718) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWU716) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWU728) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWU726) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWU518) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWU516) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWU528) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CWU526) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLM600) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLM610) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLM500) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLM510) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLM800) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLM900) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU718) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU716) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU728) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU726) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU518) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU516) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU528) },
++ { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU526) },
+ { } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+@@ -1134,11 +1230,13 @@ static int option_probe(struct usb_serial *serial,
+ serial->interface->cur_altsetting->desc.bInterfaceClass != 0xff)
+ return -ENODEV;
+
+- /* Don't bind network interfaces on Huawei K3765 & K4505 */
++ /* Don't bind network interfaces on Huawei K3765, K4505 & K4605 */
+ if (serial->dev->descriptor.idVendor == HUAWEI_VENDOR_ID &&
+ (serial->dev->descriptor.idProduct == HUAWEI_PRODUCT_K3765 ||
+- serial->dev->descriptor.idProduct == HUAWEI_PRODUCT_K4505) &&
+- serial->interface->cur_altsetting->desc.bInterfaceNumber == 1)
++ serial->dev->descriptor.idProduct == HUAWEI_PRODUCT_K4505 ||
++ serial->dev->descriptor.idProduct == HUAWEI_PRODUCT_K4605) &&
++ (serial->interface->cur_altsetting->desc.bInterfaceNumber == 1 ||
++ serial->interface->cur_altsetting->desc.bInterfaceNumber == 2))
+ return -ENODEV;
+
+ /* Don't bind network interface on Samsung GT-B3730, it is handled by a separate module */
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 0c20831..1d33260 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -343,10 +343,28 @@ static void pl2303_set_termios(struct tty_struct *tty,
+ baud = 6000000;
+ }
+ dbg("%s - baud set = %d", __func__, baud);
+- buf[0] = baud & 0xff;
+- buf[1] = (baud >> 8) & 0xff;
+- buf[2] = (baud >> 16) & 0xff;
+- buf[3] = (baud >> 24) & 0xff;
++ if (baud <= 115200) {
++ buf[0] = baud & 0xff;
++ buf[1] = (baud >> 8) & 0xff;
++ buf[2] = (baud >> 16) & 0xff;
++ buf[3] = (baud >> 24) & 0xff;
++ } else {
++ /* apparently the formula for higher speeds is:
++ * baudrate = 12M * 32 / (2^buf[1]) / buf[0]
++ */
++ unsigned tmp = 12*1000*1000*32 / baud;
++ buf[3] = 0x80;
++ buf[2] = 0;
++ buf[1] = (tmp >= 256);
++ while (tmp >= 256) {
++ tmp >>= 2;
++ buf[1] <<= 1;
++ }
++ if (tmp > 256) {
++ tmp %= 256;
++ }
++ buf[0] = tmp;
++ }
+ }
+
+ /* For reference buf[4]=0 is 1 stop bits */
+diff --git a/drivers/video/savage/savagefb.h b/drivers/video/savage/savagefb.h
+index 32549d1..dcaab90 100644
+--- a/drivers/video/savage/savagefb.h
++++ b/drivers/video/savage/savagefb.h
+@@ -55,7 +55,7 @@
+
+ #define S3_SAVAGE3D_SERIES(chip) ((chip>=S3_SAVAGE3D) && (chip<=S3_SAVAGE_MX))
+
+-#define S3_SAVAGE4_SERIES(chip) ((chip>=S3_SAVAGE4) || (chip<=S3_PROSAVAGEDDR))
++#define S3_SAVAGE4_SERIES(chip) ((chip>=S3_SAVAGE4) && (chip<=S3_PROSAVAGEDDR))
+
+ #define S3_SAVAGE_MOBILE_SERIES(chip) ((chip==S3_SAVAGE_MX) || (chip==S3_SUPERSAVAGE))
+
+diff --git a/drivers/zorro/zorro.c b/drivers/zorro/zorro.c
+index e0c2807..181fa81 100644
+--- a/drivers/zorro/zorro.c
++++ b/drivers/zorro/zorro.c
+@@ -148,10 +148,10 @@ static int __init amiga_zorro_probe(struct platform_device *pdev)
+ }
+ platform_set_drvdata(pdev, bus);
+
+- /* Register all devices */
+ pr_info("Zorro: Probing AutoConfig expansion devices: %u device%s\n",
+ zorro_num_autocon, zorro_num_autocon == 1 ? "" : "s");
+
++ /* First identify all devices ... */
+ for (i = 0; i < zorro_num_autocon; i++) {
+ z = &zorro_autocon[i];
+ z->id = (z->rom.er_Manufacturer<<16) | (z->rom.er_Product<<8);
+@@ -172,6 +172,11 @@ static int __init amiga_zorro_probe(struct platform_device *pdev)
+ dev_set_name(&z->dev, "%02x", i);
+ z->dev.parent = &bus->dev;
+ z->dev.bus = &zorro_bus_type;
++ }
++
++ /* ... then register them */
++ for (i = 0; i < zorro_num_autocon; i++) {
++ z = &zorro_autocon[i];
+ error = device_register(&z->dev);
+ if (error) {
+ dev_err(&bus->dev, "Error registering device %s\n",
+diff --git a/fs/9p/acl.c b/fs/9p/acl.c
+index 535ab6e..4a866cd 100644
+--- a/fs/9p/acl.c
++++ b/fs/9p/acl.c
+@@ -185,12 +185,15 @@ int v9fs_acl_chmod(struct dentry *dentry)
+ }
+
+ int v9fs_set_create_acl(struct dentry *dentry,
+- struct posix_acl *dpacl, struct posix_acl *pacl)
++ struct posix_acl **dpacl, struct posix_acl **pacl)
+ {
+- v9fs_set_acl(dentry, ACL_TYPE_DEFAULT, dpacl);
+- v9fs_set_acl(dentry, ACL_TYPE_ACCESS, pacl);
+- posix_acl_release(dpacl);
+- posix_acl_release(pacl);
++ if (dentry) {
++ v9fs_set_acl(dentry, ACL_TYPE_DEFAULT, *dpacl);
++ v9fs_set_acl(dentry, ACL_TYPE_ACCESS, *pacl);
++ }
++ posix_acl_release(*dpacl);
++ posix_acl_release(*pacl);
++ *dpacl = *pacl = NULL;
+ return 0;
+ }
+
+@@ -212,11 +215,11 @@ int v9fs_acl_mode(struct inode *dir, mode_t *modep,
+ struct posix_acl *clone;
+
+ if (S_ISDIR(mode))
+- *dpacl = acl;
++ *dpacl = posix_acl_dup(acl);
+ clone = posix_acl_clone(acl, GFP_NOFS);
+- retval = -ENOMEM;
++ posix_acl_release(acl);
+ if (!clone)
+- goto cleanup;
++ return -ENOMEM;
+
+ retval = posix_acl_create_masq(clone, &mode);
+ if (retval < 0) {
+@@ -225,11 +228,12 @@ int v9fs_acl_mode(struct inode *dir, mode_t *modep,
+ }
+ if (retval > 0)
+ *pacl = clone;
++ else
++ posix_acl_release(clone);
+ }
+ *modep = mode;
+ return 0;
+ cleanup:
+- posix_acl_release(acl);
+ return retval;
+
+ }
+diff --git a/fs/9p/acl.h b/fs/9p/acl.h
+index 7ef3ac9..c47ea9c 100644
+--- a/fs/9p/acl.h
++++ b/fs/9p/acl.h
+@@ -19,7 +19,7 @@ extern int v9fs_get_acl(struct inode *, struct p9_fid *);
+ extern int v9fs_check_acl(struct inode *inode, int mask, unsigned int flags);
+ extern int v9fs_acl_chmod(struct dentry *);
+ extern int v9fs_set_create_acl(struct dentry *,
+- struct posix_acl *, struct posix_acl *);
++ struct posix_acl **, struct posix_acl **);
+ extern int v9fs_acl_mode(struct inode *dir, mode_t *modep,
+ struct posix_acl **dpacl, struct posix_acl **pacl);
+ #else
+@@ -33,8 +33,8 @@ static inline int v9fs_acl_chmod(struct dentry *dentry)
+ return 0;
+ }
+ static inline int v9fs_set_create_acl(struct dentry *dentry,
+- struct posix_acl *dpacl,
+- struct posix_acl *pacl)
++ struct posix_acl **dpacl,
++ struct posix_acl **pacl)
+ {
+ return 0;
+ }
+diff --git a/fs/9p/cache.c b/fs/9p/cache.c
+index 5b335c5..945aa5f 100644
+--- a/fs/9p/cache.c
++++ b/fs/9p/cache.c
+@@ -108,11 +108,10 @@ static uint16_t v9fs_cache_inode_get_key(const void *cookie_netfs_data,
+ void *buffer, uint16_t bufmax)
+ {
+ const struct v9fs_inode *v9inode = cookie_netfs_data;
+- memcpy(buffer, &v9inode->fscache_key->path,
+- sizeof(v9inode->fscache_key->path));
++ memcpy(buffer, &v9inode->qid.path, sizeof(v9inode->qid.path));
+ P9_DPRINTK(P9_DEBUG_FSC, "inode %p get key %llu", &v9inode->vfs_inode,
+- v9inode->fscache_key->path);
+- return sizeof(v9inode->fscache_key->path);
++ v9inode->qid.path);
++ return sizeof(v9inode->qid.path);
+ }
+
+ static void v9fs_cache_inode_get_attr(const void *cookie_netfs_data,
+@@ -129,11 +128,10 @@ static uint16_t v9fs_cache_inode_get_aux(const void *cookie_netfs_data,
+ void *buffer, uint16_t buflen)
+ {
+ const struct v9fs_inode *v9inode = cookie_netfs_data;
+- memcpy(buffer, &v9inode->fscache_key->version,
+- sizeof(v9inode->fscache_key->version));
++ memcpy(buffer, &v9inode->qid.version, sizeof(v9inode->qid.version));
+ P9_DPRINTK(P9_DEBUG_FSC, "inode %p get aux %u", &v9inode->vfs_inode,
+- v9inode->fscache_key->version);
+- return sizeof(v9inode->fscache_key->version);
++ v9inode->qid.version);
++ return sizeof(v9inode->qid.version);
+ }
+
+ static enum
+@@ -143,11 +141,11 @@ fscache_checkaux v9fs_cache_inode_check_aux(void *cookie_netfs_data,
+ {
+ const struct v9fs_inode *v9inode = cookie_netfs_data;
+
+- if (buflen != sizeof(v9inode->fscache_key->version))
++ if (buflen != sizeof(v9inode->qid.version))
+ return FSCACHE_CHECKAUX_OBSOLETE;
+
+- if (memcmp(buffer, &v9inode->fscache_key->version,
+- sizeof(v9inode->fscache_key->version)))
++ if (memcmp(buffer, &v9inode->qid.version,
++ sizeof(v9inode->qid.version)))
+ return FSCACHE_CHECKAUX_OBSOLETE;
+
+ return FSCACHE_CHECKAUX_OKAY;
+diff --git a/fs/9p/cache.h b/fs/9p/cache.h
+index 049507a..40cc54c 100644
+--- a/fs/9p/cache.h
++++ b/fs/9p/cache.h
+@@ -93,15 +93,6 @@ static inline void v9fs_uncache_page(struct inode *inode, struct page *page)
+ BUG_ON(PageFsCache(page));
+ }
+
+-static inline void v9fs_fscache_set_key(struct inode *inode,
+- struct p9_qid *qid)
+-{
+- struct v9fs_inode *v9inode = V9FS_I(inode);
+- spin_lock(&v9inode->fscache_lock);
+- v9inode->fscache_key = qid;
+- spin_unlock(&v9inode->fscache_lock);
+-}
+-
+ static inline void v9fs_fscache_wait_on_page_write(struct inode *inode,
+ struct page *page)
+ {
+diff --git a/fs/9p/v9fs.c b/fs/9p/v9fs.c
+index c82b017..ef96618 100644
+--- a/fs/9p/v9fs.c
++++ b/fs/9p/v9fs.c
+@@ -78,6 +78,25 @@ static const match_table_t tokens = {
+ {Opt_err, NULL}
+ };
+
++/* Interpret mount options for cache mode */
++static int get_cache_mode(char *s)
++{
++ int version = -EINVAL;
++
++ if (!strcmp(s, "loose")) {
++ version = CACHE_LOOSE;
++ P9_DPRINTK(P9_DEBUG_9P, "Cache mode: loose\n");
++ } else if (!strcmp(s, "fscache")) {
++ version = CACHE_FSCACHE;
++ P9_DPRINTK(P9_DEBUG_9P, "Cache mode: fscache\n");
++ } else if (!strcmp(s, "none")) {
++ version = CACHE_NONE;
++ P9_DPRINTK(P9_DEBUG_9P, "Cache mode: none\n");
++ } else
++ printk(KERN_INFO "9p: Unknown Cache mode %s.\n", s);
++ return version;
++}
++
+ /**
+ * v9fs_parse_options - parse mount options into session structure
+ * @v9ses: existing v9fs session information
+@@ -97,7 +116,7 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
+ /* setup defaults */
+ v9ses->afid = ~0;
+ v9ses->debug = 0;
+- v9ses->cache = 0;
++ v9ses->cache = CACHE_NONE;
+ #ifdef CONFIG_9P_FSCACHE
+ v9ses->cachetag = NULL;
+ #endif
+@@ -171,13 +190,13 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
+ "problem allocating copy of cache arg\n");
+ goto free_and_return;
+ }
++ ret = get_cache_mode(s);
++ if (ret == -EINVAL) {
++ kfree(s);
++ goto free_and_return;
++ }
+
+- if (strcmp(s, "loose") == 0)
+- v9ses->cache = CACHE_LOOSE;
+- else if (strcmp(s, "fscache") == 0)
+- v9ses->cache = CACHE_FSCACHE;
+- else
+- v9ses->cache = CACHE_NONE;
++ v9ses->cache = ret;
+ kfree(s);
+ break;
+
+@@ -200,9 +219,15 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
+ } else {
+ v9ses->flags |= V9FS_ACCESS_SINGLE;
+ v9ses->uid = simple_strtoul(s, &e, 10);
+- if (*e != '\0')
+- v9ses->uid = ~0;
++ if (*e != '\0') {
++ ret = -EINVAL;
++ printk(KERN_INFO "9p: Unknown access "
++ "argument %s.\n", s);
++ kfree(s);
++ goto free_and_return;
++ }
+ }
++
+ kfree(s);
+ break;
+
+@@ -487,8 +512,8 @@ static void v9fs_inode_init_once(void *foo)
+ struct v9fs_inode *v9inode = (struct v9fs_inode *)foo;
+ #ifdef CONFIG_9P_FSCACHE
+ v9inode->fscache = NULL;
+- v9inode->fscache_key = NULL;
+ #endif
++ memset(&v9inode->qid, 0, sizeof(v9inode->qid));
+ inode_init_once(&v9inode->vfs_inode);
+ }
+
+diff --git a/fs/9p/v9fs.h b/fs/9p/v9fs.h
+index e5ebedf..e78956c 100644
+--- a/fs/9p/v9fs.h
++++ b/fs/9p/v9fs.h
+@@ -125,8 +125,8 @@ struct v9fs_inode {
+ #ifdef CONFIG_9P_FSCACHE
+ spinlock_t fscache_lock;
+ struct fscache_cookie *fscache;
+- struct p9_qid *fscache_key;
+ #endif
++ struct p9_qid qid;
+ unsigned int cache_validity;
+ struct p9_fid *writeback_fid;
+ struct mutex v_mutex;
+@@ -153,13 +153,13 @@ extern void v9fs_vfs_put_link(struct dentry *dentry, struct nameidata *nd,
+ void *p);
+ extern struct inode *v9fs_inode_from_fid(struct v9fs_session_info *v9ses,
+ struct p9_fid *fid,
+- struct super_block *sb);
++ struct super_block *sb, int new);
+ extern const struct inode_operations v9fs_dir_inode_operations_dotl;
+ extern const struct inode_operations v9fs_file_inode_operations_dotl;
+ extern const struct inode_operations v9fs_symlink_inode_operations_dotl;
+ extern struct inode *v9fs_inode_from_fid_dotl(struct v9fs_session_info *v9ses,
+ struct p9_fid *fid,
+- struct super_block *sb);
++ struct super_block *sb, int new);
+
+ /* other default globals */
+ #define V9FS_PORT 564
+@@ -201,8 +201,27 @@ v9fs_get_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid,
+ struct super_block *sb)
+ {
+ if (v9fs_proto_dotl(v9ses))
+- return v9fs_inode_from_fid_dotl(v9ses, fid, sb);
++ return v9fs_inode_from_fid_dotl(v9ses, fid, sb, 0);
+ else
+- return v9fs_inode_from_fid(v9ses, fid, sb);
++ return v9fs_inode_from_fid(v9ses, fid, sb, 0);
+ }
++
++/**
++ * v9fs_get_new_inode_from_fid - Helper routine to populate an inode by
++ * issuing a attribute request
++ * @v9ses: session information
++ * @fid: fid to issue attribute request for
++ * @sb: superblock on which to create inode
++ *
++ */
++static inline struct inode *
++v9fs_get_new_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid,
++ struct super_block *sb)
++{
++ if (v9fs_proto_dotl(v9ses))
++ return v9fs_inode_from_fid_dotl(v9ses, fid, sb, 1);
++ else
++ return v9fs_inode_from_fid(v9ses, fid, sb, 1);
++}
++
+ #endif
+diff --git a/fs/9p/v9fs_vfs.h b/fs/9p/v9fs_vfs.h
+index 4014160..f9a28ea 100644
+--- a/fs/9p/v9fs_vfs.h
++++ b/fs/9p/v9fs_vfs.h
+@@ -54,9 +54,9 @@ extern struct kmem_cache *v9fs_inode_cache;
+
+ struct inode *v9fs_alloc_inode(struct super_block *sb);
+ void v9fs_destroy_inode(struct inode *inode);
+-struct inode *v9fs_get_inode(struct super_block *sb, int mode);
++struct inode *v9fs_get_inode(struct super_block *sb, int mode, dev_t);
+ int v9fs_init_inode(struct v9fs_session_info *v9ses,
+- struct inode *inode, int mode);
++ struct inode *inode, int mode, dev_t);
+ void v9fs_evict_inode(struct inode *inode);
+ ino_t v9fs_qid2ino(struct p9_qid *qid);
+ void v9fs_stat2inode(struct p9_wstat *, struct inode *, struct super_block *);
+@@ -82,4 +82,6 @@ static inline void v9fs_invalidate_inode_attr(struct inode *inode)
+ v9inode->cache_validity |= V9FS_INO_INVALID_ATTR;
+ return;
+ }
++
++int v9fs_open_to_dotl_flags(int flags);
+ #endif
+diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
+index ffed558..9d6e168 100644
+--- a/fs/9p/vfs_file.c
++++ b/fs/9p/vfs_file.c
+@@ -65,7 +65,7 @@ int v9fs_file_open(struct inode *inode, struct file *file)
+ v9inode = V9FS_I(inode);
+ v9ses = v9fs_inode2v9ses(inode);
+ if (v9fs_proto_dotl(v9ses))
+- omode = file->f_flags;
++ omode = v9fs_open_to_dotl_flags(file->f_flags);
+ else
+ omode = v9fs_uflags2omode(file->f_flags,
+ v9fs_proto_dotu(v9ses));
+@@ -169,7 +169,18 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
+
+ /* convert posix lock to p9 tlock args */
+ memset(&flock, 0, sizeof(flock));
+- flock.type = fl->fl_type;
++ /* map the lock type */
++ switch (fl->fl_type) {
++ case F_RDLCK:
++ flock.type = P9_LOCK_TYPE_RDLCK;
++ break;
++ case F_WRLCK:
++ flock.type = P9_LOCK_TYPE_WRLCK;
++ break;
++ case F_UNLCK:
++ flock.type = P9_LOCK_TYPE_UNLCK;
++ break;
++ }
+ flock.start = fl->fl_start;
+ if (fl->fl_end == OFFSET_MAX)
+ flock.length = 0;
+@@ -245,7 +256,7 @@ static int v9fs_file_getlock(struct file *filp, struct file_lock *fl)
+
+ /* convert posix lock to p9 tgetlock args */
+ memset(&glock, 0, sizeof(glock));
+- glock.type = fl->fl_type;
++ glock.type = P9_LOCK_TYPE_UNLCK;
+ glock.start = fl->fl_start;
+ if (fl->fl_end == OFFSET_MAX)
+ glock.length = 0;
+@@ -257,17 +268,26 @@ static int v9fs_file_getlock(struct file *filp, struct file_lock *fl)
+ res = p9_client_getlock_dotl(fid, &glock);
+ if (res < 0)
+ return res;
+- if (glock.type != F_UNLCK) {
+- fl->fl_type = glock.type;
++ /* map 9p lock type to os lock type */
++ switch (glock.type) {
++ case P9_LOCK_TYPE_RDLCK:
++ fl->fl_type = F_RDLCK;
++ break;
++ case P9_LOCK_TYPE_WRLCK:
++ fl->fl_type = F_WRLCK;
++ break;
++ case P9_LOCK_TYPE_UNLCK:
++ fl->fl_type = F_UNLCK;
++ break;
++ }
++ if (glock.type != P9_LOCK_TYPE_UNLCK) {
+ fl->fl_start = glock.start;
+ if (glock.length == 0)
+ fl->fl_end = OFFSET_MAX;
+ else
+ fl->fl_end = glock.start + glock.length - 1;
+ fl->fl_pid = glock.proc_id;
+- } else
+- fl->fl_type = F_UNLCK;
+-
++ }
+ return res;
+ }
+
+diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
+index 7f6c677..c72e20c 100644
+--- a/fs/9p/vfs_inode.c
++++ b/fs/9p/vfs_inode.c
+@@ -95,15 +95,18 @@ static int unixmode2p9mode(struct v9fs_session_info *v9ses, int mode)
+ /**
+ * p9mode2unixmode- convert plan9 mode bits to unix mode bits
+ * @v9ses: v9fs session information
+- * @mode: mode to convert
++ * @stat: p9_wstat from which mode need to be derived
++ * @rdev: major number, minor number in case of device files.
+ *
+ */
+-
+-static int p9mode2unixmode(struct v9fs_session_info *v9ses, int mode)
++static int p9mode2unixmode(struct v9fs_session_info *v9ses,
++ struct p9_wstat *stat, dev_t *rdev)
+ {
+ int res;
++ int mode = stat->mode;
+
+- res = mode & 0777;
++ res = mode & S_IALLUGO;
++ *rdev = 0;
+
+ if ((mode & P9_DMDIR) == P9_DMDIR)
+ res |= S_IFDIR;
+@@ -116,9 +119,26 @@ static int p9mode2unixmode(struct v9fs_session_info *v9ses, int mode)
+ && (v9ses->nodev == 0))
+ res |= S_IFIFO;
+ else if ((mode & P9_DMDEVICE) && (v9fs_proto_dotu(v9ses))
+- && (v9ses->nodev == 0))
+- res |= S_IFBLK;
+- else
++ && (v9ses->nodev == 0)) {
++ char type = 0, ext[32];
++ int major = -1, minor = -1;
++
++ strncpy(ext, stat->extension, sizeof(ext));
++ sscanf(ext, "%c %u %u", &type, &major, &minor);
++ switch (type) {
++ case 'c':
++ res |= S_IFCHR;
++ break;
++ case 'b':
++ res |= S_IFBLK;
++ break;
++ default:
++ P9_DPRINTK(P9_DEBUG_ERROR,
++ "Unknown special type %c %s\n", type,
++ stat->extension);
++ };
++ *rdev = MKDEV(major, minor);
++ } else
+ res |= S_IFREG;
+
+ if (v9fs_proto_dotu(v9ses)) {
+@@ -131,7 +151,6 @@ static int p9mode2unixmode(struct v9fs_session_info *v9ses, int mode)
+ if ((mode & P9_DMSETVTX) == P9_DMSETVTX)
+ res |= S_ISVTX;
+ }
+-
+ return res;
+ }
+
+@@ -216,7 +235,6 @@ struct inode *v9fs_alloc_inode(struct super_block *sb)
+ return NULL;
+ #ifdef CONFIG_9P_FSCACHE
+ v9inode->fscache = NULL;
+- v9inode->fscache_key = NULL;
+ spin_lock_init(&v9inode->fscache_lock);
+ #endif
+ v9inode->writeback_fid = NULL;
+@@ -243,13 +261,13 @@ void v9fs_destroy_inode(struct inode *inode)
+ }
+
+ int v9fs_init_inode(struct v9fs_session_info *v9ses,
+- struct inode *inode, int mode)
++ struct inode *inode, int mode, dev_t rdev)
+ {
+ int err = 0;
+
+ inode_init_owner(inode, NULL, mode);
+ inode->i_blocks = 0;
+- inode->i_rdev = 0;
++ inode->i_rdev = rdev;
+ inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mapping->a_ops = &v9fs_addr_operations;
+
+@@ -336,7 +354,7 @@ error:
+ *
+ */
+
+-struct inode *v9fs_get_inode(struct super_block *sb, int mode)
++struct inode *v9fs_get_inode(struct super_block *sb, int mode, dev_t rdev)
+ {
+ int err;
+ struct inode *inode;
+@@ -349,7 +367,7 @@ struct inode *v9fs_get_inode(struct super_block *sb, int mode)
+ P9_EPRINTK(KERN_WARNING, "Problem allocating inode\n");
+ return ERR_PTR(-ENOMEM);
+ }
+- err = v9fs_init_inode(v9ses, inode, mode);
++ err = v9fs_init_inode(v9ses, inode, mode, rdev);
+ if (err) {
+ iput(inode);
+ return ERR_PTR(err);
+@@ -433,17 +451,62 @@ void v9fs_evict_inode(struct inode *inode)
+ }
+ }
+
++static int v9fs_test_inode(struct inode *inode, void *data)
++{
++ int umode;
++ dev_t rdev;
++ struct v9fs_inode *v9inode = V9FS_I(inode);
++ struct p9_wstat *st = (struct p9_wstat *)data;
++ struct v9fs_session_info *v9ses = v9fs_inode2v9ses(inode);
++
++ umode = p9mode2unixmode(v9ses, st, &rdev);
++ /* don't match inode of different type */
++ if ((inode->i_mode & S_IFMT) != (umode & S_IFMT))
++ return 0;
++
++ /* compare qid details */
++ if (memcmp(&v9inode->qid.version,
++ &st->qid.version, sizeof(v9inode->qid.version)))
++ return 0;
++
++ if (v9inode->qid.type != st->qid.type)
++ return 0;
++ return 1;
++}
++
++static int v9fs_test_new_inode(struct inode *inode, void *data)
++{
++ return 0;
++}
++
++static int v9fs_set_inode(struct inode *inode, void *data)
++{
++ struct v9fs_inode *v9inode = V9FS_I(inode);
++ struct p9_wstat *st = (struct p9_wstat *)data;
++
++ memcpy(&v9inode->qid, &st->qid, sizeof(st->qid));
++ return 0;
++}
++
+ static struct inode *v9fs_qid_iget(struct super_block *sb,
+ struct p9_qid *qid,
+- struct p9_wstat *st)
++ struct p9_wstat *st,
++ int new)
+ {
++ dev_t rdev;
+ int retval, umode;
+ unsigned long i_ino;
+ struct inode *inode;
+ struct v9fs_session_info *v9ses = sb->s_fs_info;
++ int (*test)(struct inode *, void *);
++
++ if (new)
++ test = v9fs_test_new_inode;
++ else
++ test = v9fs_test_inode;
+
+ i_ino = v9fs_qid2ino(qid);
+- inode = iget_locked(sb, i_ino);
++ inode = iget5_locked(sb, i_ino, test, v9fs_set_inode, st);
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
+ if (!(inode->i_state & I_NEW))
+@@ -453,14 +516,14 @@ static struct inode *v9fs_qid_iget(struct super_block *sb,
+ * FIXME!! we may need support for stale inodes
+ * later.
+ */
+- umode = p9mode2unixmode(v9ses, st->mode);
+- retval = v9fs_init_inode(v9ses, inode, umode);
++ inode->i_ino = i_ino;
++ umode = p9mode2unixmode(v9ses, st, &rdev);
++ retval = v9fs_init_inode(v9ses, inode, umode, rdev);
+ if (retval)
+ goto error;
+
+ v9fs_stat2inode(st, inode, sb);
+ #ifdef CONFIG_9P_FSCACHE
+- v9fs_fscache_set_key(inode, &st->qid);
+ v9fs_cache_inode_get_cookie(inode);
+ #endif
+ unlock_new_inode(inode);
+@@ -474,7 +537,7 @@ error:
+
+ struct inode *
+ v9fs_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid,
+- struct super_block *sb)
++ struct super_block *sb, int new)
+ {
+ struct p9_wstat *st;
+ struct inode *inode = NULL;
+@@ -483,7 +546,7 @@ v9fs_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid,
+ if (IS_ERR(st))
+ return ERR_CAST(st);
+
+- inode = v9fs_qid_iget(sb, &st->qid, st);
++ inode = v9fs_qid_iget(sb, &st->qid, st, new);
+ p9stat_free(st);
+ kfree(st);
+ return inode;
+@@ -585,19 +648,17 @@ v9fs_create(struct v9fs_session_info *v9ses, struct inode *dir,
+ }
+
+ /* instantiate inode and assign the unopened fid to the dentry */
+- inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ P9_DPRINTK(P9_DEBUG_VFS, "inode creation failed %d\n", err);
+ goto error;
+ }
+- d_instantiate(dentry, inode);
+ err = v9fs_fid_add(dentry, fid);
+ if (err < 0)
+ goto error;
+-
++ d_instantiate(dentry, inode);
+ return ofid;
+-
+ error:
+ if (ofid)
+ p9_client_clunk(ofid);
+@@ -738,6 +799,7 @@ static int v9fs_vfs_mkdir(struct inode *dir, struct dentry *dentry, int mode)
+ struct dentry *v9fs_vfs_lookup(struct inode *dir, struct dentry *dentry,
+ struct nameidata *nameidata)
+ {
++ struct dentry *res;
+ struct super_block *sb;
+ struct v9fs_session_info *v9ses;
+ struct p9_fid *dfid, *fid;
+@@ -769,22 +831,35 @@ struct dentry *v9fs_vfs_lookup(struct inode *dir, struct dentry *dentry,
+
+ return ERR_PTR(result);
+ }
+-
+- inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb);
++ /*
++ * Make sure we don't use a wrong inode due to parallel
++ * unlink. For cached mode create calls request for new
++ * inode. But with cache disabled, lookup should do this.
++ */
++ if (v9ses->cache)
++ inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb);
++ else
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ result = PTR_ERR(inode);
+ inode = NULL;
+ goto error;
+ }
+-
+ result = v9fs_fid_add(dentry, fid);
+ if (result < 0)
+ goto error_iput;
+-
+ inst_out:
+- d_add(dentry, inode);
+- return NULL;
+-
++ /*
++ * If we had a rename on the server and a parallel lookup
++ * for the new name, then make sure we instantiate with
++ * the new name. ie look up for a/b, while on server somebody
++ * moved b under k and client parallely did a lookup for
++ * k/b.
++ */
++ res = d_materialise_unique(dentry, inode);
++ if (!IS_ERR(res))
++ return res;
++ result = PTR_ERR(res);
+ error_iput:
+ iput(inode);
+ error:
+@@ -950,7 +1025,7 @@ v9fs_vfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
+ return PTR_ERR(st);
+
+ v9fs_stat2inode(st, dentry->d_inode, dentry->d_inode->i_sb);
+- generic_fillattr(dentry->d_inode, stat);
++ generic_fillattr(dentry->d_inode, stat);
+
+ p9stat_free(st);
+ kfree(st);
+@@ -1034,6 +1109,7 @@ void
+ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
+ struct super_block *sb)
+ {
++ mode_t mode;
+ char ext[32];
+ char tag_name[14];
+ unsigned int i_nlink;
+@@ -1069,31 +1145,9 @@ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
+ inode->i_nlink = i_nlink;
+ }
+ }
+- inode->i_mode = p9mode2unixmode(v9ses, stat->mode);
+- if ((S_ISBLK(inode->i_mode)) || (S_ISCHR(inode->i_mode))) {
+- char type = 0;
+- int major = -1;
+- int minor = -1;
+-
+- strncpy(ext, stat->extension, sizeof(ext));
+- sscanf(ext, "%c %u %u", &type, &major, &minor);
+- switch (type) {
+- case 'c':
+- inode->i_mode &= ~S_IFBLK;
+- inode->i_mode |= S_IFCHR;
+- break;
+- case 'b':
+- break;
+- default:
+- P9_DPRINTK(P9_DEBUG_ERROR,
+- "Unknown special type %c %s\n", type,
+- stat->extension);
+- };
+- inode->i_rdev = MKDEV(major, minor);
+- init_special_inode(inode, inode->i_mode, inode->i_rdev);
+- } else
+- inode->i_rdev = 0;
+-
++ mode = stat->mode & S_IALLUGO;
++ mode |= inode->i_mode & ~S_IALLUGO;
++ inode->i_mode = mode;
+ i_size_write(inode, stat->length);
+
+ /* not real number of blocks, but 512 byte ones ... */
+@@ -1359,6 +1413,8 @@ v9fs_vfs_mknod(struct inode *dir, struct dentry *dentry, int mode, dev_t rdev)
+
+ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ {
++ int umode;
++ dev_t rdev;
+ loff_t i_size;
+ struct p9_wstat *st;
+ struct v9fs_session_info *v9ses;
+@@ -1367,6 +1423,12 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ st = p9_client_stat(fid);
+ if (IS_ERR(st))
+ return PTR_ERR(st);
++ /*
++ * Don't update inode if the file type is different
++ */
++ umode = p9mode2unixmode(v9ses, st, &rdev);
++ if ((inode->i_mode & S_IFMT) != (umode & S_IFMT))
++ goto out;
+
+ spin_lock(&inode->i_lock);
+ /*
+@@ -1378,6 +1440,7 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ if (v9ses->cache)
+ inode->i_size = i_size;
+ spin_unlock(&inode->i_lock);
++out:
+ p9stat_free(st);
+ kfree(st);
+ return 0;
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 691c78f..c873172 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -86,18 +86,63 @@ static struct dentry *v9fs_dentry_from_dir_inode(struct inode *inode)
+ return dentry;
+ }
+
++static int v9fs_test_inode_dotl(struct inode *inode, void *data)
++{
++ struct v9fs_inode *v9inode = V9FS_I(inode);
++ struct p9_stat_dotl *st = (struct p9_stat_dotl *)data;
++
++ /* don't match inode of different type */
++ if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT))
++ return 0;
++
++ if (inode->i_generation != st->st_gen)
++ return 0;
++
++ /* compare qid details */
++ if (memcmp(&v9inode->qid.version,
++ &st->qid.version, sizeof(v9inode->qid.version)))
++ return 0;
++
++ if (v9inode->qid.type != st->qid.type)
++ return 0;
++ return 1;
++}
++
++/* Always get a new inode */
++static int v9fs_test_new_inode_dotl(struct inode *inode, void *data)
++{
++ return 0;
++}
++
++static int v9fs_set_inode_dotl(struct inode *inode, void *data)
++{
++ struct v9fs_inode *v9inode = V9FS_I(inode);
++ struct p9_stat_dotl *st = (struct p9_stat_dotl *)data;
++
++ memcpy(&v9inode->qid, &st->qid, sizeof(st->qid));
++ inode->i_generation = st->st_gen;
++ return 0;
++}
++
+ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
+ struct p9_qid *qid,
+ struct p9_fid *fid,
+- struct p9_stat_dotl *st)
++ struct p9_stat_dotl *st,
++ int new)
+ {
+ int retval;
+ unsigned long i_ino;
+ struct inode *inode;
+ struct v9fs_session_info *v9ses = sb->s_fs_info;
++ int (*test)(struct inode *, void *);
++
++ if (new)
++ test = v9fs_test_new_inode_dotl;
++ else
++ test = v9fs_test_inode_dotl;
+
+ i_ino = v9fs_qid2ino(qid);
+- inode = iget_locked(sb, i_ino);
++ inode = iget5_locked(sb, i_ino, test, v9fs_set_inode_dotl, st);
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
+ if (!(inode->i_state & I_NEW))
+@@ -107,13 +152,14 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
+ * FIXME!! we may need support for stale inodes
+ * later.
+ */
+- retval = v9fs_init_inode(v9ses, inode, st->st_mode);
++ inode->i_ino = i_ino;
++ retval = v9fs_init_inode(v9ses, inode,
++ st->st_mode, new_decode_dev(st->st_rdev));
+ if (retval)
+ goto error;
+
+ v9fs_stat2inode_dotl(st, inode);
+ #ifdef CONFIG_9P_FSCACHE
+- v9fs_fscache_set_key(inode, &st->qid);
+ v9fs_cache_inode_get_cookie(inode);
+ #endif
+ retval = v9fs_get_acl(inode, fid);
+@@ -131,20 +177,72 @@ error:
+
+ struct inode *
+ v9fs_inode_from_fid_dotl(struct v9fs_session_info *v9ses, struct p9_fid *fid,
+- struct super_block *sb)
++ struct super_block *sb, int new)
+ {
+ struct p9_stat_dotl *st;
+ struct inode *inode = NULL;
+
+- st = p9_client_getattr_dotl(fid, P9_STATS_BASIC);
++ st = p9_client_getattr_dotl(fid, P9_STATS_BASIC | P9_STATS_GEN);
+ if (IS_ERR(st))
+ return ERR_CAST(st);
+
+- inode = v9fs_qid_iget_dotl(sb, &st->qid, fid, st);
++ inode = v9fs_qid_iget_dotl(sb, &st->qid, fid, st, new);
+ kfree(st);
+ return inode;
+ }
+
++struct dotl_openflag_map {
++ int open_flag;
++ int dotl_flag;
++};
++
++static int v9fs_mapped_dotl_flags(int flags)
++{
++ int i;
++ int rflags = 0;
++ struct dotl_openflag_map dotl_oflag_map[] = {
++ { O_CREAT, P9_DOTL_CREATE },
++ { O_EXCL, P9_DOTL_EXCL },
++ { O_NOCTTY, P9_DOTL_NOCTTY },
++ { O_TRUNC, P9_DOTL_TRUNC },
++ { O_APPEND, P9_DOTL_APPEND },
++ { O_NONBLOCK, P9_DOTL_NONBLOCK },
++ { O_DSYNC, P9_DOTL_DSYNC },
++ { FASYNC, P9_DOTL_FASYNC },
++ { O_DIRECT, P9_DOTL_DIRECT },
++ { O_LARGEFILE, P9_DOTL_LARGEFILE },
++ { O_DIRECTORY, P9_DOTL_DIRECTORY },
++ { O_NOFOLLOW, P9_DOTL_NOFOLLOW },
++ { O_NOATIME, P9_DOTL_NOATIME },
++ { O_CLOEXEC, P9_DOTL_CLOEXEC },
++ { O_SYNC, P9_DOTL_SYNC},
++ };
++ for (i = 0; i < ARRAY_SIZE(dotl_oflag_map); i++) {
++ if (flags & dotl_oflag_map[i].open_flag)
++ rflags |= dotl_oflag_map[i].dotl_flag;
++ }
++ return rflags;
++}
++
++/**
++ * v9fs_open_to_dotl_flags- convert Linux specific open flags to
++ * plan 9 open flag.
++ * @flags: flags to convert
++ */
++int v9fs_open_to_dotl_flags(int flags)
++{
++ int rflags = 0;
++
++ /*
++ * We have same bits for P9_DOTL_READONLY, P9_DOTL_WRONLY
++ * and P9_DOTL_NOACCESS
++ */
++ rflags |= flags & O_ACCMODE;
++ rflags |= v9fs_mapped_dotl_flags(flags);
++
++ return rflags;
++}
++
+ /**
+ * v9fs_vfs_create_dotl - VFS hook to create files for 9P2000.L protocol.
+ * @dir: directory inode that is being created
+@@ -213,7 +311,8 @@ v9fs_vfs_create_dotl(struct inode *dir, struct dentry *dentry, int omode,
+ "Failed to get acl values in creat %d\n", err);
+ goto error;
+ }
+- err = p9_client_create_dotl(ofid, name, flags, mode, gid, &qid);
++ err = p9_client_create_dotl(ofid, name, v9fs_open_to_dotl_flags(flags),
++ mode, gid, &qid);
+ if (err < 0) {
+ P9_DPRINTK(P9_DEBUG_VFS,
+ "p9_client_open_dotl failed in creat %d\n",
+@@ -230,19 +329,19 @@ v9fs_vfs_create_dotl(struct inode *dir, struct dentry *dentry, int omode,
+ fid = NULL;
+ goto error;
+ }
+- inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ P9_DPRINTK(P9_DEBUG_VFS, "inode creation failed %d\n", err);
+ goto error;
+ }
+- d_instantiate(dentry, inode);
+ err = v9fs_fid_add(dentry, fid);
+ if (err < 0)
+ goto error;
++ d_instantiate(dentry, inode);
+
+ /* Now set the ACL based on the default value */
+- v9fs_set_create_acl(dentry, dacl, pacl);
++ v9fs_set_create_acl(dentry, &dacl, &pacl);
+
+ v9inode = V9FS_I(inode);
+ mutex_lock(&v9inode->v_mutex);
+@@ -283,6 +382,7 @@ error:
+ err_clunk_old_fid:
+ if (ofid)
+ p9_client_clunk(ofid);
++ v9fs_set_create_acl(NULL, &dacl, &pacl);
+ return err;
+ }
+
+@@ -350,17 +450,17 @@ static int v9fs_vfs_mkdir_dotl(struct inode *dir,
+ goto error;
+ }
+
+- inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ P9_DPRINTK(P9_DEBUG_VFS, "inode creation failed %d\n",
+ err);
+ goto error;
+ }
+- d_instantiate(dentry, inode);
+ err = v9fs_fid_add(dentry, fid);
+ if (err < 0)
+ goto error;
++ d_instantiate(dentry, inode);
+ fid = NULL;
+ } else {
+ /*
+@@ -368,7 +468,7 @@ static int v9fs_vfs_mkdir_dotl(struct inode *dir,
+ * inode with stat. We need to get an inode
+ * so that we can set the acl with dentry
+ */
+- inode = v9fs_get_inode(dir->i_sb, mode);
++ inode = v9fs_get_inode(dir->i_sb, mode, 0);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ goto error;
+@@ -376,12 +476,13 @@ static int v9fs_vfs_mkdir_dotl(struct inode *dir,
+ d_instantiate(dentry, inode);
+ }
+ /* Now set the ACL based on the default value */
+- v9fs_set_create_acl(dentry, dacl, pacl);
++ v9fs_set_create_acl(dentry, &dacl, &pacl);
+ inc_nlink(dir);
+ v9fs_invalidate_inode_attr(dir);
+ error:
+ if (fid)
+ p9_client_clunk(fid);
++ v9fs_set_create_acl(NULL, &dacl, &pacl);
+ return err;
+ }
+
+@@ -493,6 +594,7 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr)
+ void
+ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
+ {
++ mode_t mode;
+ struct v9fs_inode *v9inode = V9FS_I(inode);
+
+ if ((stat->st_result_mask & P9_STATS_BASIC) == P9_STATS_BASIC) {
+@@ -505,11 +607,10 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
+ inode->i_uid = stat->st_uid;
+ inode->i_gid = stat->st_gid;
+ inode->i_nlink = stat->st_nlink;
+- inode->i_mode = stat->st_mode;
+- inode->i_rdev = new_decode_dev(stat->st_rdev);
+
+- if ((S_ISBLK(inode->i_mode)) || (S_ISCHR(inode->i_mode)))
+- init_special_inode(inode, inode->i_mode, inode->i_rdev);
++ mode = stat->st_mode & S_IALLUGO;
++ mode |= inode->i_mode & ~S_IALLUGO;
++ inode->i_mode = mode;
+
+ i_size_write(inode, stat->st_size);
+ inode->i_blocks = stat->st_blocks;
+@@ -547,7 +648,7 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
+ inode->i_blocks = stat->st_blocks;
+ }
+ if (stat->st_result_mask & P9_STATS_GEN)
+- inode->i_generation = stat->st_gen;
++ inode->i_generation = stat->st_gen;
+
+ /* Currently we don't support P9_STATS_BTIME and P9_STATS_DATA_VERSION
+ * because the inode structure does not have fields for them.
+@@ -603,21 +704,21 @@ v9fs_vfs_symlink_dotl(struct inode *dir, struct dentry *dentry,
+ }
+
+ /* instantiate inode and assign the unopened fid to dentry */
+- inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ P9_DPRINTK(P9_DEBUG_VFS, "inode creation failed %d\n",
+ err);
+ goto error;
+ }
+- d_instantiate(dentry, inode);
+ err = v9fs_fid_add(dentry, fid);
+ if (err < 0)
+ goto error;
++ d_instantiate(dentry, inode);
+ fid = NULL;
+ } else {
+ /* Not in cached mode. No need to populate inode with stat */
+- inode = v9fs_get_inode(dir->i_sb, S_IFLNK);
++ inode = v9fs_get_inode(dir->i_sb, S_IFLNK, 0);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ goto error;
+@@ -756,24 +857,24 @@ v9fs_vfs_mknod_dotl(struct inode *dir, struct dentry *dentry, int omode,
+ goto error;
+ }
+
+- inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb);
++ inode = v9fs_get_new_inode_from_fid(v9ses, fid, dir->i_sb);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ P9_DPRINTK(P9_DEBUG_VFS, "inode creation failed %d\n",
+ err);
+ goto error;
+ }
+- d_instantiate(dentry, inode);
+ err = v9fs_fid_add(dentry, fid);
+ if (err < 0)
+ goto error;
++ d_instantiate(dentry, inode);
+ fid = NULL;
+ } else {
+ /*
+ * Not in cached mode. No need to populate inode with stat.
+ * socket syscall returns a fd, so we need instantiate
+ */
+- inode = v9fs_get_inode(dir->i_sb, mode);
++ inode = v9fs_get_inode(dir->i_sb, mode, rdev);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ goto error;
+@@ -781,10 +882,11 @@ v9fs_vfs_mknod_dotl(struct inode *dir, struct dentry *dentry, int omode,
+ d_instantiate(dentry, inode);
+ }
+ /* Now set the ACL based on the default value */
+- v9fs_set_create_acl(dentry, dacl, pacl);
++ v9fs_set_create_acl(dentry, &dacl, &pacl);
+ error:
+ if (fid)
+ p9_client_clunk(fid);
++ v9fs_set_create_acl(NULL, &dacl, &pacl);
+ return err;
+ }
+
+@@ -838,6 +940,11 @@ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
+ st = p9_client_getattr_dotl(fid, P9_STATS_ALL);
+ if (IS_ERR(st))
+ return PTR_ERR(st);
++ /*
++ * Don't update inode if the file type is different
++ */
++ if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT))
++ goto out;
+
+ spin_lock(&inode->i_lock);
+ /*
+@@ -849,6 +956,7 @@ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
+ if (v9ses->cache)
+ inode->i_size = i_size;
+ spin_unlock(&inode->i_lock);
++out:
+ kfree(st);
+ return 0;
+ }
+diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c
+index feef6cd..c70251d 100644
+--- a/fs/9p/vfs_super.c
++++ b/fs/9p/vfs_super.c
+@@ -149,7 +149,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
+ else
+ sb->s_d_op = &v9fs_dentry_operations;
+
+- inode = v9fs_get_inode(sb, S_IFDIR | mode);
++ inode = v9fs_get_inode(sb, S_IFDIR | mode, 0);
+ if (IS_ERR(inode)) {
+ retval = PTR_ERR(inode);
+ goto release_sb;
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 610e8e0..194cf66 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1419,6 +1419,11 @@ static int __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
+ WARN_ON_ONCE(bdev->bd_holders);
+ sync_blockdev(bdev);
+ kill_bdev(bdev);
++ /* ->release can cause the old bdi to disappear,
++ * so must switch it out first
++ */
++ bdev_inode_switch_bdi(bdev->bd_inode,
++ &default_backing_dev_info);
+ }
+ if (bdev->bd_contains == bdev) {
+ if (disk->fops->release)
+@@ -1432,8 +1437,6 @@ static int __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
+ disk_put_part(bdev->bd_part);
+ bdev->bd_part = NULL;
+ bdev->bd_disk = NULL;
+- bdev_inode_switch_bdi(bdev->bd_inode,
+- &default_backing_dev_info);
+ if (bdev != bdev->bd_contains)
+ victim = bdev->bd_contains;
+ bdev->bd_contains = NULL;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 3601f0a..d42e6bf 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4124,7 +4124,8 @@ static int btrfs_real_readdir(struct file *filp, void *dirent,
+
+ /* special case for "." */
+ if (filp->f_pos == 0) {
+- over = filldir(dirent, ".", 1, 1, btrfs_ino(inode), DT_DIR);
++ over = filldir(dirent, ".", 1,
++ filp->f_pos, btrfs_ino(inode), DT_DIR);
+ if (over)
+ return 0;
+ filp->f_pos = 1;
+@@ -4133,7 +4134,7 @@ static int btrfs_real_readdir(struct file *filp, void *dirent,
+ if (filp->f_pos == 1) {
+ u64 pino = parent_ino(filp->f_path.dentry);
+ over = filldir(dirent, "..", 2,
+- 2, pino, DT_DIR);
++ filp->f_pos, pino, DT_DIR);
+ if (over)
+ return 0;
+ filp->f_pos = 2;
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 1a9fe7f..07132c4 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -4079,7 +4079,8 @@ int CIFSFindNext(const int xid, struct cifs_tcon *tcon,
+ T2_FNEXT_RSP_PARMS *parms;
+ char *response_data;
+ int rc = 0;
+- int bytes_returned, name_len;
++ int bytes_returned;
++ unsigned int name_len;
+ __u16 params, byte_count;
+
+ cFYI(1, "In FindNext");
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index e0ea721..2451627 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1258,7 +1258,7 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ /* ignore */
+ } else if (strnicmp(data, "guest", 5) == 0) {
+ /* ignore */
+- } else if (strnicmp(data, "rw", 2) == 0) {
++ } else if (strnicmp(data, "rw", 2) == 0 && strlen(data) == 2) {
+ /* ignore */
+ } else if (strnicmp(data, "ro", 2) == 0) {
+ /* ignore */
+@@ -1361,7 +1361,7 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ vol->server_ino = 1;
+ } else if (strnicmp(data, "noserverino", 9) == 0) {
+ vol->server_ino = 0;
+- } else if (strnicmp(data, "rwpidforward", 4) == 0) {
++ } else if (strnicmp(data, "rwpidforward", 12) == 0) {
+ vol->rwpidforward = 1;
+ } else if (strnicmp(data, "cifsacl", 7) == 0) {
+ vol->cifs_acl = 1;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index b864839..c94774c 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -2756,7 +2756,7 @@ static int write_cache_pages_da(struct address_space *mapping,
+ index = wbc->range_start >> PAGE_CACHE_SHIFT;
+ end = wbc->range_end >> PAGE_CACHE_SHIFT;
+
+- if (wbc->sync_mode == WB_SYNC_ALL)
++ if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+ tag = PAGECACHE_TAG_TOWRITE;
+ else
+ tag = PAGECACHE_TAG_DIRTY;
+@@ -2988,7 +2988,7 @@ static int ext4_da_writepages(struct address_space *mapping,
+ }
+
+ retry:
+- if (wbc->sync_mode == WB_SYNC_ALL)
++ if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+ tag_pages_for_writeback(mapping, index, end);
+
+ while (!ret && wbc->nr_to_write > 0) {
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 0f015a0..fe190a8 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -36,6 +36,7 @@ struct wb_writeback_work {
+ long nr_pages;
+ struct super_block *sb;
+ enum writeback_sync_modes sync_mode;
++ unsigned int tagged_writepages:1;
+ unsigned int for_kupdate:1;
+ unsigned int range_cyclic:1;
+ unsigned int for_background:1;
+@@ -418,6 +419,15 @@ writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
+ spin_lock(&inode->i_lock);
+ inode->i_state &= ~I_SYNC;
+ if (!(inode->i_state & I_FREEING)) {
++ /*
++ * Sync livelock prevention. Each inode is tagged and synced in
++ * one shot. If still dirty, it will be redirty_tail()'ed below.
++ * Update the dirty time to prevent enqueue and sync it again.
++ */
++ if ((inode->i_state & I_DIRTY) &&
++ (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages))
++ inode->dirtied_when = jiffies;
++
+ if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) {
+ /*
+ * We didn't write back all the pages. nfs_writepages()
+@@ -650,6 +660,7 @@ static long wb_writeback(struct bdi_writeback *wb,
+ {
+ struct writeback_control wbc = {
+ .sync_mode = work->sync_mode,
++ .tagged_writepages = work->tagged_writepages,
+ .older_than_this = NULL,
+ .for_kupdate = work->for_kupdate,
+ .for_background = work->for_background,
+@@ -657,7 +668,7 @@ static long wb_writeback(struct bdi_writeback *wb,
+ };
+ unsigned long oldest_jif;
+ long wrote = 0;
+- long write_chunk;
++ long write_chunk = MAX_WRITEBACK_PAGES;
+ struct inode *inode;
+
+ if (wbc.for_kupdate) {
+@@ -683,9 +694,7 @@ static long wb_writeback(struct bdi_writeback *wb,
+ * (quickly) tag currently dirty pages
+ * (maybe slowly) sync all tagged pages
+ */
+- if (wbc.sync_mode == WB_SYNC_NONE)
+- write_chunk = MAX_WRITEBACK_PAGES;
+- else
++ if (wbc.sync_mode == WB_SYNC_ALL || wbc.tagged_writepages)
+ write_chunk = LONG_MAX;
+
+ wbc.wb_start = jiffies; /* livelock avoidance */
+@@ -1188,10 +1197,11 @@ void writeback_inodes_sb_nr(struct super_block *sb, unsigned long nr)
+ {
+ DECLARE_COMPLETION_ONSTACK(done);
+ struct wb_writeback_work work = {
+- .sb = sb,
+- .sync_mode = WB_SYNC_NONE,
+- .done = &done,
+- .nr_pages = nr,
++ .sb = sb,
++ .sync_mode = WB_SYNC_NONE,
++ .tagged_writepages = 1,
++ .done = &done,
++ .nr_pages = nr,
+ };
+
+ WARN_ON(!rwsem_is_locked(&sb->s_umount));
+diff --git a/fs/namei.c b/fs/namei.c
+index 14ab8d3..b456c7a 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2582,6 +2582,7 @@ int vfs_rmdir(struct inode *dir, struct dentry *dentry)
+ if (!dir->i_op->rmdir)
+ return -EPERM;
+
++ dget(dentry);
+ mutex_lock(&dentry->d_inode->i_mutex);
+
+ error = -EBUSY;
+@@ -2602,6 +2603,7 @@ int vfs_rmdir(struct inode *dir, struct dentry *dentry)
+
+ out:
+ mutex_unlock(&dentry->d_inode->i_mutex);
++ dput(dentry);
+ if (!error)
+ d_delete(dentry);
+ return error;
+@@ -3005,6 +3007,7 @@ static int vfs_rename_dir(struct inode *old_dir, struct dentry *old_dentry,
+ if (error)
+ return error;
+
++ dget(new_dentry);
+ if (target)
+ mutex_lock(&target->i_mutex);
+
+@@ -3025,6 +3028,7 @@ static int vfs_rename_dir(struct inode *old_dir, struct dentry *old_dentry,
+ out:
+ if (target)
+ mutex_unlock(&target->i_mutex);
++ dput(new_dentry);
+ if (!error)
+ if (!(old_dir->i_sb->s_type->fs_flags & FS_RENAME_DOES_D_MOVE))
+ d_move(old_dentry,new_dentry);
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 25b6a88..5afaa58 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -877,30 +877,54 @@ struct numa_maps_private {
+ struct numa_maps md;
+ };
+
+-static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty)
++static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty,
++ unsigned long nr_pages)
+ {
+ int count = page_mapcount(page);
+
+- md->pages++;
++ md->pages += nr_pages;
+ if (pte_dirty || PageDirty(page))
+- md->dirty++;
++ md->dirty += nr_pages;
+
+ if (PageSwapCache(page))
+- md->swapcache++;
++ md->swapcache += nr_pages;
+
+ if (PageActive(page) || PageUnevictable(page))
+- md->active++;
++ md->active += nr_pages;
+
+ if (PageWriteback(page))
+- md->writeback++;
++ md->writeback += nr_pages;
+
+ if (PageAnon(page))
+- md->anon++;
++ md->anon += nr_pages;
+
+ if (count > md->mapcount_max)
+ md->mapcount_max = count;
+
+- md->node[page_to_nid(page)]++;
++ md->node[page_to_nid(page)] += nr_pages;
++}
++
++static struct page *can_gather_numa_stats(pte_t pte, struct vm_area_struct *vma,
++ unsigned long addr)
++{
++ struct page *page;
++ int nid;
++
++ if (!pte_present(pte))
++ return NULL;
++
++ page = vm_normal_page(vma, addr, pte);
++ if (!page)
++ return NULL;
++
++ if (PageReserved(page))
++ return NULL;
++
++ nid = page_to_nid(page);
++ if (!node_isset(nid, node_states[N_HIGH_MEMORY]))
++ return NULL;
++
++ return page;
+ }
+
+ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
+@@ -912,26 +936,32 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
+ pte_t *pte;
+
+ md = walk->private;
+- orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+- do {
+- struct page *page;
+- int nid;
++ spin_lock(&walk->mm->page_table_lock);
++ if (pmd_trans_huge(*pmd)) {
++ if (pmd_trans_splitting(*pmd)) {
++ spin_unlock(&walk->mm->page_table_lock);
++ wait_split_huge_page(md->vma->anon_vma, pmd);
++ } else {
++ pte_t huge_pte = *(pte_t *)pmd;
++ struct page *page;
+
+- if (!pte_present(*pte))
+- continue;
++ page = can_gather_numa_stats(huge_pte, md->vma, addr);
++ if (page)
++ gather_stats(page, md, pte_dirty(huge_pte),
++ HPAGE_PMD_SIZE/PAGE_SIZE);
++ spin_unlock(&walk->mm->page_table_lock);
++ return 0;
++ }
++ } else {
++ spin_unlock(&walk->mm->page_table_lock);
++ }
+
+- page = vm_normal_page(md->vma, addr, *pte);
++ orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
++ do {
++ struct page *page = can_gather_numa_stats(*pte, md->vma, addr);
+ if (!page)
+ continue;
+-
+- if (PageReserved(page))
+- continue;
+-
+- nid = page_to_nid(page);
+- if (!node_isset(nid, node_states[N_HIGH_MEMORY]))
+- continue;
+-
+- gather_stats(page, md, pte_dirty(*pte));
++ gather_stats(page, md, pte_dirty(*pte), 1);
+
+ } while (pte++, addr += PAGE_SIZE, addr != end);
+ pte_unmap_unlock(orig_pte, ptl);
+@@ -952,7 +982,7 @@ static int gather_hugetbl_stats(pte_t *pte, unsigned long hmask,
+ return 0;
+
+ md = walk->private;
+- gather_stats(page, md, pte_dirty(*pte));
++ gather_stats(page, md, pte_dirty(*pte), 1);
+ return 0;
+ }
+
+diff --git a/include/linux/mfd/wm8994/pdata.h b/include/linux/mfd/wm8994/pdata.h
+index d12f8d6..97cf4f2 100644
+--- a/include/linux/mfd/wm8994/pdata.h
++++ b/include/linux/mfd/wm8994/pdata.h
+@@ -26,7 +26,7 @@ struct wm8994_ldo_pdata {
+ struct regulator_init_data *init_data;
+ };
+
+-#define WM8994_CONFIGURE_GPIO 0x8000
++#define WM8994_CONFIGURE_GPIO 0x10000
+
+ #define WM8994_DRC_REGS 5
+ #define WM8994_EQ_REGS 20
+diff --git a/include/linux/rio_regs.h b/include/linux/rio_regs.h
+index 9026b30..218168a 100644
+--- a/include/linux/rio_regs.h
++++ b/include/linux/rio_regs.h
+@@ -36,12 +36,12 @@
+ #define RIO_PEF_PROCESSOR 0x20000000 /* [I] Processor */
+ #define RIO_PEF_SWITCH 0x10000000 /* [I] Switch */
+ #define RIO_PEF_MULTIPORT 0x08000000 /* [VI, 2.1] Multiport */
+-#define RIO_PEF_INB_MBOX 0x00f00000 /* [II] Mailboxes */
+-#define RIO_PEF_INB_MBOX0 0x00800000 /* [II] Mailbox 0 */
+-#define RIO_PEF_INB_MBOX1 0x00400000 /* [II] Mailbox 1 */
+-#define RIO_PEF_INB_MBOX2 0x00200000 /* [II] Mailbox 2 */
+-#define RIO_PEF_INB_MBOX3 0x00100000 /* [II] Mailbox 3 */
+-#define RIO_PEF_INB_DOORBELL 0x00080000 /* [II] Doorbells */
++#define RIO_PEF_INB_MBOX 0x00f00000 /* [II, <= 1.2] Mailboxes */
++#define RIO_PEF_INB_MBOX0 0x00800000 /* [II, <= 1.2] Mailbox 0 */
++#define RIO_PEF_INB_MBOX1 0x00400000 /* [II, <= 1.2] Mailbox 1 */
++#define RIO_PEF_INB_MBOX2 0x00200000 /* [II, <= 1.2] Mailbox 2 */
++#define RIO_PEF_INB_MBOX3 0x00100000 /* [II, <= 1.2] Mailbox 3 */
++#define RIO_PEF_INB_DOORBELL 0x00080000 /* [II, <= 1.2] Doorbells */
+ #define RIO_PEF_EXT_RT 0x00000200 /* [III, 1.3] Extended route table support */
+ #define RIO_PEF_STD_RT 0x00000100 /* [III, 1.3] Standard route table support */
+ #define RIO_PEF_CTLS 0x00000010 /* [III] CTLS */
+@@ -102,7 +102,7 @@
+ #define RIO_SWITCH_RT_LIMIT 0x34 /* [III, 1.3] Switch Route Table Destination ID Limit CAR */
+ #define RIO_RT_MAX_DESTID 0x0000ffff
+
+-#define RIO_MBOX_CSR 0x40 /* [II] Mailbox CSR */
++#define RIO_MBOX_CSR 0x40 /* [II, <= 1.2] Mailbox CSR */
+ #define RIO_MBOX0_AVAIL 0x80000000 /* [II] Mbox 0 avail */
+ #define RIO_MBOX0_FULL 0x40000000 /* [II] Mbox 0 full */
+ #define RIO_MBOX0_EMPTY 0x20000000 /* [II] Mbox 0 empty */
+@@ -128,8 +128,8 @@
+ #define RIO_MBOX3_FAIL 0x00000008 /* [II] Mbox 3 fail */
+ #define RIO_MBOX3_ERROR 0x00000004 /* [II] Mbox 3 error */
+
+-#define RIO_WRITE_PORT_CSR 0x44 /* [I] Write Port CSR */
+-#define RIO_DOORBELL_CSR 0x44 /* [II] Doorbell CSR */
++#define RIO_WRITE_PORT_CSR 0x44 /* [I, <= 1.2] Write Port CSR */
++#define RIO_DOORBELL_CSR 0x44 /* [II, <= 1.2] Doorbell CSR */
+ #define RIO_DOORBELL_AVAIL 0x80000000 /* [II] Doorbell avail */
+ #define RIO_DOORBELL_FULL 0x40000000 /* [II] Doorbell full */
+ #define RIO_DOORBELL_EMPTY 0x20000000 /* [II] Doorbell empty */
+diff --git a/include/linux/rtc.h b/include/linux/rtc.h
+index b27ebea..93f4d03 100644
+--- a/include/linux/rtc.h
++++ b/include/linux/rtc.h
+@@ -97,6 +97,9 @@ struct rtc_pll_info {
+ #define RTC_AF 0x20 /* Alarm interrupt */
+ #define RTC_UF 0x10 /* Update interrupt for 1Hz RTC */
+
++
++#define RTC_MAX_FREQ 8192
++
+ #ifdef __KERNEL__
+
+ #include <linux/types.h>
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index d6f0529..6660c41 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -420,6 +420,8 @@ extern void tty_driver_flush_buffer(struct tty_struct *tty);
+ extern void tty_throttle(struct tty_struct *tty);
+ extern void tty_unthrottle(struct tty_struct *tty);
+ extern int tty_do_resize(struct tty_struct *tty, struct winsize *ws);
++extern void tty_driver_remove_tty(struct tty_driver *driver,
++ struct tty_struct *tty);
+ extern void tty_shutdown(struct tty_struct *tty);
+ extern void tty_free_termios(struct tty_struct *tty);
+ extern int is_current_pgrp_orphaned(void);
+diff --git a/include/linux/tty_driver.h b/include/linux/tty_driver.h
+index 9deeac8..ecdaeb9 100644
+--- a/include/linux/tty_driver.h
++++ b/include/linux/tty_driver.h
+@@ -47,6 +47,9 @@
+ *
+ * This routine is called synchronously when a particular tty device
+ * is closed for the last time freeing up the resources.
++ * Note that tty_shutdown() is not called if ops->shutdown is defined.
++ * This means one is responsible to take care of calling ops->remove (e.g.
++ * via tty_driver_remove_tty) and releasing tty->termios.
+ *
+ *
+ * void (*cleanup)(struct tty_struct * tty);
+diff --git a/include/linux/writeback.h b/include/linux/writeback.h
+index 17e7ccc..3f6542c 100644
+--- a/include/linux/writeback.h
++++ b/include/linux/writeback.h
+@@ -47,6 +47,7 @@ struct writeback_control {
+ unsigned encountered_congestion:1; /* An output: a queue is full */
+ unsigned for_kupdate:1; /* A kupdate writeback */
+ unsigned for_background:1; /* A background writeback */
++ unsigned tagged_writepages:1; /* tag-and-write to avoid livelock */
+ unsigned for_reclaim:1; /* Invoked from the page allocator */
+ unsigned range_cyclic:1; /* range_start is cyclic */
+ unsigned more_io:1; /* more io to be dispatched */
+diff --git a/include/net/9p/9p.h b/include/net/9p/9p.h
+index 008711e..32f67c3 100644
+--- a/include/net/9p/9p.h
++++ b/include/net/9p/9p.h
+@@ -278,6 +278,30 @@ enum p9_perm_t {
+ P9_DMSETVTX = 0x00010000,
+ };
+
++/* 9p2000.L open flags */
++#define P9_DOTL_RDONLY 00000000
++#define P9_DOTL_WRONLY 00000001
++#define P9_DOTL_RDWR 00000002
++#define P9_DOTL_NOACCESS 00000003
++#define P9_DOTL_CREATE 00000100
++#define P9_DOTL_EXCL 00000200
++#define P9_DOTL_NOCTTY 00000400
++#define P9_DOTL_TRUNC 00001000
++#define P9_DOTL_APPEND 00002000
++#define P9_DOTL_NONBLOCK 00004000
++#define P9_DOTL_DSYNC 00010000
++#define P9_DOTL_FASYNC 00020000
++#define P9_DOTL_DIRECT 00040000
++#define P9_DOTL_LARGEFILE 00100000
++#define P9_DOTL_DIRECTORY 00200000
++#define P9_DOTL_NOFOLLOW 00400000
++#define P9_DOTL_NOATIME 01000000
++#define P9_DOTL_CLOEXEC 02000000
++#define P9_DOTL_SYNC 04000000
++
++/* 9p2000.L at flags */
++#define P9_DOTL_AT_REMOVEDIR 0x200
++
+ /**
+ * enum p9_qid_t - QID types
+ * @P9_QTDIR: directory
+@@ -320,6 +344,11 @@ enum p9_qid_t {
+ /* Room for readdir header */
+ #define P9_READDIRHDRSZ 24
+
++/* 9p2000.L lock type */
++#define P9_LOCK_TYPE_RDLCK 0
++#define P9_LOCK_TYPE_WRLCK 1
++#define P9_LOCK_TYPE_UNLCK 2
++
+ /**
+ * struct p9_str - length prefixed string type
+ * @len: length of the string
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index 14fb6d6..ed049ea 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -113,72 +113,75 @@ static struct inode *mqueue_get_inode(struct super_block *sb,
+ {
+ struct user_struct *u = current_user();
+ struct inode *inode;
++ int ret = -ENOMEM;
+
+ inode = new_inode(sb);
+- if (inode) {
+- inode->i_ino = get_next_ino();
+- inode->i_mode = mode;
+- inode->i_uid = current_fsuid();
+- inode->i_gid = current_fsgid();
+- inode->i_mtime = inode->i_ctime = inode->i_atime =
+- CURRENT_TIME;
++ if (!inode)
++ goto err;
+
+- if (S_ISREG(mode)) {
+- struct mqueue_inode_info *info;
+- struct task_struct *p = current;
+- unsigned long mq_bytes, mq_msg_tblsz;
+-
+- inode->i_fop = &mqueue_file_operations;
+- inode->i_size = FILENT_SIZE;
+- /* mqueue specific info */
+- info = MQUEUE_I(inode);
+- spin_lock_init(&info->lock);
+- init_waitqueue_head(&info->wait_q);
+- INIT_LIST_HEAD(&info->e_wait_q[0].list);
+- INIT_LIST_HEAD(&info->e_wait_q[1].list);
+- info->notify_owner = NULL;
+- info->qsize = 0;
+- info->user = NULL; /* set when all is ok */
+- memset(&info->attr, 0, sizeof(info->attr));
+- info->attr.mq_maxmsg = ipc_ns->mq_msg_max;
+- info->attr.mq_msgsize = ipc_ns->mq_msgsize_max;
+- if (attr) {
+- info->attr.mq_maxmsg = attr->mq_maxmsg;
+- info->attr.mq_msgsize = attr->mq_msgsize;
+- }
+- mq_msg_tblsz = info->attr.mq_maxmsg * sizeof(struct msg_msg *);
+- info->messages = kmalloc(mq_msg_tblsz, GFP_KERNEL);
+- if (!info->messages)
+- goto out_inode;
+-
+- mq_bytes = (mq_msg_tblsz +
+- (info->attr.mq_maxmsg * info->attr.mq_msgsize));
+-
+- spin_lock(&mq_lock);
+- if (u->mq_bytes + mq_bytes < u->mq_bytes ||
+- u->mq_bytes + mq_bytes >
+- task_rlimit(p, RLIMIT_MSGQUEUE)) {
+- spin_unlock(&mq_lock);
+- /* mqueue_evict_inode() releases info->messages */
+- goto out_inode;
+- }
+- u->mq_bytes += mq_bytes;
+- spin_unlock(&mq_lock);
++ inode->i_ino = get_next_ino();
++ inode->i_mode = mode;
++ inode->i_uid = current_fsuid();
++ inode->i_gid = current_fsgid();
++ inode->i_mtime = inode->i_ctime = inode->i_atime = CURRENT_TIME;
++
++ if (S_ISREG(mode)) {
++ struct mqueue_inode_info *info;
++ struct task_struct *p = current;
++ unsigned long mq_bytes, mq_msg_tblsz;
++
++ inode->i_fop = &mqueue_file_operations;
++ inode->i_size = FILENT_SIZE;
++ /* mqueue specific info */
++ info = MQUEUE_I(inode);
++ spin_lock_init(&info->lock);
++ init_waitqueue_head(&info->wait_q);
++ INIT_LIST_HEAD(&info->e_wait_q[0].list);
++ INIT_LIST_HEAD(&info->e_wait_q[1].list);
++ info->notify_owner = NULL;
++ info->qsize = 0;
++ info->user = NULL; /* set when all is ok */
++ memset(&info->attr, 0, sizeof(info->attr));
++ info->attr.mq_maxmsg = ipc_ns->mq_msg_max;
++ info->attr.mq_msgsize = ipc_ns->mq_msgsize_max;
++ if (attr) {
++ info->attr.mq_maxmsg = attr->mq_maxmsg;
++ info->attr.mq_msgsize = attr->mq_msgsize;
++ }
++ mq_msg_tblsz = info->attr.mq_maxmsg * sizeof(struct msg_msg *);
++ info->messages = kmalloc(mq_msg_tblsz, GFP_KERNEL);
++ if (!info->messages)
++ goto out_inode;
+
+- /* all is ok */
+- info->user = get_uid(u);
+- } else if (S_ISDIR(mode)) {
+- inc_nlink(inode);
+- /* Some things misbehave if size == 0 on a directory */
+- inode->i_size = 2 * DIRENT_SIZE;
+- inode->i_op = &mqueue_dir_inode_operations;
+- inode->i_fop = &simple_dir_operations;
++ mq_bytes = (mq_msg_tblsz +
++ (info->attr.mq_maxmsg * info->attr.mq_msgsize));
++
++ spin_lock(&mq_lock);
++ if (u->mq_bytes + mq_bytes < u->mq_bytes ||
++ u->mq_bytes + mq_bytes > task_rlimit(p, RLIMIT_MSGQUEUE)) {
++ spin_unlock(&mq_lock);
++ /* mqueue_evict_inode() releases info->messages */
++ ret = -EMFILE;
++ goto out_inode;
+ }
++ u->mq_bytes += mq_bytes;
++ spin_unlock(&mq_lock);
++
++ /* all is ok */
++ info->user = get_uid(u);
++ } else if (S_ISDIR(mode)) {
++ inc_nlink(inode);
++ /* Some things misbehave if size == 0 on a directory */
++ inode->i_size = 2 * DIRENT_SIZE;
++ inode->i_op = &mqueue_dir_inode_operations;
++ inode->i_fop = &simple_dir_operations;
+ }
++
+ return inode;
+ out_inode:
+ iput(inode);
+- return NULL;
++err:
++ return ERR_PTR(ret);
+ }
+
+ static int mqueue_fill_super(struct super_block *sb, void *data, int silent)
+@@ -194,8 +197,8 @@ static int mqueue_fill_super(struct super_block *sb, void *data, int silent)
+
+ inode = mqueue_get_inode(sb, ns, S_IFDIR | S_ISVTX | S_IRWXUGO,
+ NULL);
+- if (!inode) {
+- error = -ENOMEM;
++ if (IS_ERR(inode)) {
++ error = PTR_ERR(inode);
+ goto out;
+ }
+
+@@ -315,8 +318,8 @@ static int mqueue_create(struct inode *dir, struct dentry *dentry,
+ spin_unlock(&mq_lock);
+
+ inode = mqueue_get_inode(dir->i_sb, ipc_ns, mode, attr);
+- if (!inode) {
+- error = -ENOMEM;
++ if (IS_ERR(inode)) {
++ error = PTR_ERR(inode);
+ spin_lock(&mq_lock);
+ ipc_ns->mq_queues_count--;
+ goto out_unlock;
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index d5a3009..dc5114b 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -178,7 +178,7 @@ void irq_shutdown(struct irq_desc *desc)
+ desc->depth = 1;
+ if (desc->irq_data.chip->irq_shutdown)
+ desc->irq_data.chip->irq_shutdown(&desc->irq_data);
+- if (desc->irq_data.chip->irq_disable)
++ else if (desc->irq_data.chip->irq_disable)
+ desc->irq_data.chip->irq_disable(&desc->irq_data);
+ else
+ desc->irq_data.chip->irq_mask(&desc->irq_data);
+diff --git a/kernel/printk.c b/kernel/printk.c
+index 3518539..084982f 100644
+--- a/kernel/printk.c
++++ b/kernel/printk.c
+@@ -1584,7 +1584,7 @@ static int __init printk_late_init(void)
+ struct console *con;
+
+ for_each_console(con) {
+- if (con->flags & CON_BOOT) {
++ if (!keep_bootcon && con->flags & CON_BOOT) {
+ printk(KERN_INFO "turn off boot console %s%d\n",
+ con->name, con->index);
+ unregister_console(con);
+diff --git a/kernel/sched.c b/kernel/sched.c
+index fde6ff9..8b37360 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -4242,9 +4242,9 @@ pick_next_task(struct rq *rq)
+ }
+
+ /*
+- * schedule() is the main scheduler function.
++ * __schedule() is the main scheduler function.
+ */
+-asmlinkage void __sched schedule(void)
++static void __sched __schedule(void)
+ {
+ struct task_struct *prev, *next;
+ unsigned long *switch_count;
+@@ -4285,16 +4285,6 @@ need_resched:
+ if (to_wakeup)
+ try_to_wake_up_local(to_wakeup);
+ }
+-
+- /*
+- * If we are going to sleep and we have plugged IO
+- * queued, make sure to submit it to avoid deadlocks.
+- */
+- if (blk_needs_flush_plug(prev)) {
+- raw_spin_unlock(&rq->lock);
+- blk_schedule_flush_plug(prev);
+- raw_spin_lock(&rq->lock);
+- }
+ }
+ switch_count = &prev->nvcsw;
+ }
+@@ -4332,6 +4322,26 @@ need_resched:
+ if (need_resched())
+ goto need_resched;
+ }
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++ if (!tsk->state)
++ return;
++ /*
++ * If we are going to sleep and we have plugged IO queued,
++ * make sure to submit it to avoid deadlocks.
++ */
++ if (blk_needs_flush_plug(tsk))
++ blk_schedule_flush_plug(tsk);
++}
++
++asmlinkage void schedule(void)
++{
++ struct task_struct *tsk = current;
++
++ sched_submit_work(tsk);
++ __schedule();
++}
+ EXPORT_SYMBOL(schedule);
+
+ #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
+@@ -4405,7 +4415,7 @@ asmlinkage void __sched notrace preempt_schedule(void)
+
+ do {
+ add_preempt_count_notrace(PREEMPT_ACTIVE);
+- schedule();
++ __schedule();
+ sub_preempt_count_notrace(PREEMPT_ACTIVE);
+
+ /*
+@@ -4433,7 +4443,7 @@ asmlinkage void __sched preempt_schedule_irq(void)
+ do {
+ add_preempt_count(PREEMPT_ACTIVE);
+ local_irq_enable();
+- schedule();
++ __schedule();
+ local_irq_disable();
+ sub_preempt_count(PREEMPT_ACTIVE);
+
+@@ -5558,7 +5568,7 @@ static inline int should_resched(void)
+ static void __cond_resched(void)
+ {
+ add_preempt_count(PREEMPT_ACTIVE);
+- schedule();
++ __schedule();
+ sub_preempt_count(PREEMPT_ACTIVE);
+ }
+
+@@ -7413,6 +7423,7 @@ static void __sdt_free(const struct cpumask *cpu_map)
+ struct sched_domain *sd = *per_cpu_ptr(sdd->sd, j);
+ if (sd && (sd->flags & SD_OVERLAP))
+ free_sched_groups(sd->groups, 0);
++ kfree(*per_cpu_ptr(sdd->sd, j));
+ kfree(*per_cpu_ptr(sdd->sg, j));
+ kfree(*per_cpu_ptr(sdd->sgp, j));
+ }
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index 59f369f..ea5e1a9 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -441,6 +441,8 @@ static int alarm_timer_create(struct k_itimer *new_timer)
+ static void alarm_timer_get(struct k_itimer *timr,
+ struct itimerspec *cur_setting)
+ {
++ memset(cur_setting, 0, sizeof(struct itimerspec));
++
+ cur_setting->it_interval =
+ ktime_to_timespec(timr->it.alarmtimer.period);
+ cur_setting->it_value =
+@@ -479,11 +481,17 @@ static int alarm_timer_set(struct k_itimer *timr, int flags,
+ if (!rtcdev)
+ return -ENOTSUPP;
+
+- /* Save old values */
+- old_setting->it_interval =
+- ktime_to_timespec(timr->it.alarmtimer.period);
+- old_setting->it_value =
+- ktime_to_timespec(timr->it.alarmtimer.node.expires);
++ /*
++ * XXX HACK! Currently we can DOS a system if the interval
++ * period on alarmtimers is too small. Cap the interval here
++ * to 100us and solve this properly in a future patch! -jstultz
++ */
++ if ((new_setting->it_interval.tv_sec == 0) &&
++ (new_setting->it_interval.tv_nsec < 100000))
++ new_setting->it_interval.tv_nsec = 100000;
++
++ if (old_setting)
++ alarm_timer_get(timr, old_setting);
+
+ /* If the timer was already set, cancel it */
+ alarm_cancel(&timr->it.alarmtimer);
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 0400553..aec02b6 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3026,8 +3026,13 @@ reflush:
+
+ for_each_cwq_cpu(cpu, wq) {
+ struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
++ bool drained;
+
+- if (!cwq->nr_active && list_empty(&cwq->delayed_works))
++ spin_lock_irq(&cwq->gcwq->lock);
++ drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
++ spin_unlock_irq(&cwq->gcwq->lock);
++
++ if (drained)
+ continue;
+
+ if (++flush_cnt == 10 ||
+diff --git a/lib/xz/xz_dec_bcj.c b/lib/xz/xz_dec_bcj.c
+index e51e255..a768e6d 100644
+--- a/lib/xz/xz_dec_bcj.c
++++ b/lib/xz/xz_dec_bcj.c
+@@ -441,8 +441,12 @@ XZ_EXTERN enum xz_ret xz_dec_bcj_run(struct xz_dec_bcj *s,
+ * next filter in the chain. Apply the BCJ filter on the new data
+ * in the output buffer. If everything cannot be filtered, copy it
+ * to temp and rewind the output buffer position accordingly.
++ *
++ * This needs to be always run when temp.size == 0 to handle a special
++ * case where the output buffer is full and the next filter has no
++ * more output coming but hasn't returned XZ_STREAM_END yet.
+ */
+- if (s->temp.size < b->out_size - b->out_pos) {
++ if (s->temp.size < b->out_size - b->out_pos || s->temp.size == 0) {
+ out_start = b->out_pos;
+ memcpy(b->out + b->out_pos, s->temp.buf, s->temp.size);
+ b->out_pos += s->temp.size;
+@@ -465,16 +469,25 @@ XZ_EXTERN enum xz_ret xz_dec_bcj_run(struct xz_dec_bcj *s,
+ s->temp.size = b->out_pos - out_start;
+ b->out_pos -= s->temp.size;
+ memcpy(s->temp.buf, b->out + b->out_pos, s->temp.size);
++
++ /*
++ * If there wasn't enough input to the next filter to fill
++ * the output buffer with unfiltered data, there's no point
++ * in trying to decode more data into temp.
++ */
++ if (b->out_pos + s->temp.size < b->out_size)
++ return XZ_OK;
+ }
+
+ /*
+- * If we have unfiltered data in temp, try to fill by decoding more
+- * data from the next filter. Apply the BCJ filter on temp. Then we
+- * hopefully can fill the actual output buffer by copying filtered
+- * data from temp. A mix of filtered and unfiltered data may be left
+- * in temp; it will be taken care on the next call to this function.
++ * We have unfiltered data in temp. If the output buffer isn't full
++ * yet, try to fill the temp buffer by decoding more data from the
++ * next filter. Apply the BCJ filter on temp. Then we hopefully can
++ * fill the actual output buffer by copying filtered data from temp.
++ * A mix of filtered and unfiltered data may be left in temp; it will
++ * be taken care of on the next call to this function.
+ */
+- if (s->temp.size > 0) {
++ if (b->out_pos < b->out_size) {
+ /* Make b->out{,_pos,_size} temporarily point to s->temp. */
+ s->out = b->out;
+ s->out_pos = b->out_pos;
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index 31f6988..955fe35 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -892,12 +892,12 @@ int write_cache_pages(struct address_space *mapping,
+ range_whole = 1;
+ cycled = 1; /* ignore range_cyclic tests */
+ }
+- if (wbc->sync_mode == WB_SYNC_ALL)
++ if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+ tag = PAGECACHE_TAG_TOWRITE;
+ else
+ tag = PAGECACHE_TAG_DIRTY;
+ retry:
+- if (wbc->sync_mode == WB_SYNC_ALL)
++ if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+ tag_pages_for_writeback(mapping, index, end);
+ done_index = index;
+ while (!done && (index <= end)) {
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 4e8985a..0f50cdb 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1616,6 +1616,21 @@ static void zlc_mark_zone_full(struct zonelist *zonelist, struct zoneref *z)
+ set_bit(i, zlc->fullzones);
+ }
+
++/*
++ * clear all zones full, called after direct reclaim makes progress so that
++ * a zone that was recently full is not skipped over for up to a second
++ */
++static void zlc_clear_zones_full(struct zonelist *zonelist)
++{
++ struct zonelist_cache *zlc; /* cached zonelist speedup info */
++
++ zlc = zonelist->zlcache_ptr;
++ if (!zlc)
++ return;
++
++ bitmap_zero(zlc->fullzones, MAX_ZONES_PER_ZONELIST);
++}
++
+ #else /* CONFIG_NUMA */
+
+ static nodemask_t *zlc_setup(struct zonelist *zonelist, int alloc_flags)
+@@ -1632,6 +1647,10 @@ static int zlc_zone_worth_trying(struct zonelist *zonelist, struct zoneref *z,
+ static void zlc_mark_zone_full(struct zonelist *zonelist, struct zoneref *z)
+ {
+ }
++
++static void zlc_clear_zones_full(struct zonelist *zonelist)
++{
++}
+ #endif /* CONFIG_NUMA */
+
+ /*
+@@ -1664,7 +1683,7 @@ zonelist_scan:
+ continue;
+ if ((alloc_flags & ALLOC_CPUSET) &&
+ !cpuset_zone_allowed_softwall(zone, gfp_mask))
+- goto try_next_zone;
++ continue;
+
+ BUILD_BUG_ON(ALLOC_NO_WATERMARKS < NR_WMARK);
+ if (!(alloc_flags & ALLOC_NO_WATERMARKS)) {
+@@ -1676,17 +1695,36 @@ zonelist_scan:
+ classzone_idx, alloc_flags))
+ goto try_this_zone;
+
++ if (NUMA_BUILD && !did_zlc_setup && nr_online_nodes > 1) {
++ /*
++ * we do zlc_setup if there are multiple nodes
++ * and before considering the first zone allowed
++ * by the cpuset.
++ */
++ allowednodes = zlc_setup(zonelist, alloc_flags);
++ zlc_active = 1;
++ did_zlc_setup = 1;
++ }
++
+ if (zone_reclaim_mode == 0)
+ goto this_zone_full;
+
++ /*
++ * As we may have just activated ZLC, check if the first
++ * eligible zone has failed zone_reclaim recently.
++ */
++ if (NUMA_BUILD && zlc_active &&
++ !zlc_zone_worth_trying(zonelist, z, allowednodes))
++ continue;
++
+ ret = zone_reclaim(zone, gfp_mask, order);
+ switch (ret) {
+ case ZONE_RECLAIM_NOSCAN:
+ /* did not scan */
+- goto try_next_zone;
++ continue;
+ case ZONE_RECLAIM_FULL:
+ /* scanned but unreclaimable */
+- goto this_zone_full;
++ continue;
+ default:
+ /* did we reclaim enough */
+ if (!zone_watermark_ok(zone, order, mark,
+@@ -1703,16 +1741,6 @@ try_this_zone:
+ this_zone_full:
+ if (NUMA_BUILD)
+ zlc_mark_zone_full(zonelist, z);
+-try_next_zone:
+- if (NUMA_BUILD && !did_zlc_setup && nr_online_nodes > 1) {
+- /*
+- * we do zlc_setup after the first zone is tried but only
+- * if there are multiple nodes make it worthwhile
+- */
+- allowednodes = zlc_setup(zonelist, alloc_flags);
+- zlc_active = 1;
+- did_zlc_setup = 1;
+- }
+ }
+
+ if (unlikely(NUMA_BUILD && page == NULL && zlc_active)) {
+@@ -1954,6 +1982,10 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
+ if (unlikely(!(*did_some_progress)))
+ return NULL;
+
++ /* After successful reclaim, reconsider all zones for allocation */
++ if (NUMA_BUILD)
++ zlc_clear_zones_full(zonelist);
++
+ retry:
+ page = get_page_from_freelist(gfp_mask, nodemask, order,
+ zonelist, high_zoneidx,
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index d3d451b..45ece89 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2154,6 +2154,14 @@ struct vm_struct *alloc_vm_area(size_t size)
+ return NULL;
+ }
+
++ /*
++ * If the allocated address space is passed to a hypercall
++ * before being used then we cannot rely on a page fault to
++ * trigger an update of the page tables. So sync all the page
++ * tables here.
++ */
++ vmalloc_sync_all();
++
+ return area;
+ }
+ EXPORT_SYMBOL_GPL(alloc_vm_area);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index d036e59..6072d74 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1748,6 +1748,7 @@ static void get_scan_count(struct zone *zone, struct scan_control *sc,
+ enum lru_list l;
+ int noswap = 0;
+ int force_scan = 0;
++ unsigned long nr_force_scan[2];
+
+
+ anon = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_ANON) +
+@@ -1770,6 +1771,8 @@ static void get_scan_count(struct zone *zone, struct scan_control *sc,
+ fraction[0] = 0;
+ fraction[1] = 1;
+ denominator = 1;
++ nr_force_scan[0] = 0;
++ nr_force_scan[1] = SWAP_CLUSTER_MAX;
+ goto out;
+ }
+
+@@ -1781,6 +1784,8 @@ static void get_scan_count(struct zone *zone, struct scan_control *sc,
+ fraction[0] = 1;
+ fraction[1] = 0;
+ denominator = 1;
++ nr_force_scan[0] = SWAP_CLUSTER_MAX;
++ nr_force_scan[1] = 0;
+ goto out;
+ }
+ }
+@@ -1829,6 +1834,11 @@ static void get_scan_count(struct zone *zone, struct scan_control *sc,
+ fraction[0] = ap;
+ fraction[1] = fp;
+ denominator = ap + fp + 1;
++ if (force_scan) {
++ unsigned long scan = SWAP_CLUSTER_MAX;
++ nr_force_scan[0] = div64_u64(scan * ap, denominator);
++ nr_force_scan[1] = div64_u64(scan * fp, denominator);
++ }
+ out:
+ for_each_evictable_lru(l) {
+ int file = is_file_lru(l);
+@@ -1849,12 +1859,8 @@ out:
+ * memcg, priority drop can cause big latency. So, it's better
+ * to scan small amount. See may_noscan above.
+ */
+- if (!scan && force_scan) {
+- if (file)
+- scan = SWAP_CLUSTER_MAX;
+- else if (!noswap)
+- scan = SWAP_CLUSTER_MAX;
+- }
++ if (!scan && force_scan)
++ scan = nr_force_scan[file];
+ nr[l] = scan;
+ }
+ }
+diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c
+index fcc6846..27263fb 100644
+--- a/net/8021q/vlan_core.c
++++ b/net/8021q/vlan_core.c
+@@ -171,6 +171,8 @@ struct sk_buff *vlan_untag(struct sk_buff *skb)
+ if (unlikely(!skb))
+ goto err_free;
+
++ skb_reset_network_header(skb);
++ skb_reset_transport_header(skb);
+ return skb;
+
+ err_free:
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 9e3b0e6..5532710 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -280,7 +280,8 @@ struct p9_req_t *p9_tag_lookup(struct p9_client *c, u16 tag)
+ * buffer to read the data into */
+ tag++;
+
+- BUG_ON(tag >= c->max_tag);
++ if(tag >= c->max_tag)
++ return NULL;
+
+ row = tag / P9_ROW_MAXTAG;
+ col = tag % P9_ROW_MAXTAG;
+@@ -821,8 +822,8 @@ struct p9_client *p9_client_create(const char *dev_name, char *options)
+ if (err)
+ goto destroy_fidpool;
+
+- if ((clnt->msize+P9_IOHDRSZ) > clnt->trans_mod->maxsize)
+- clnt->msize = clnt->trans_mod->maxsize-P9_IOHDRSZ;
++ if (clnt->msize > clnt->trans_mod->maxsize)
++ clnt->msize = clnt->trans_mod->maxsize;
+
+ err = p9_client_version(clnt);
+ if (err)
+@@ -1249,9 +1250,11 @@ int p9_client_clunk(struct p9_fid *fid)
+ P9_DPRINTK(P9_DEBUG_9P, "<<< RCLUNK fid %d\n", fid->fid);
+
+ p9_free_req(clnt, req);
+- p9_fid_destroy(fid);
+-
+ error:
++ /*
++ * Fid is not valid even after a failed clunk
++ */
++ p9_fid_destroy(fid);
+ return err;
+ }
+ EXPORT_SYMBOL(p9_client_clunk);
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index 244e707..e317583 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -263,7 +263,6 @@ p9_virtio_request(struct p9_client *client, struct p9_req_t *req)
+ {
+ int in, out, inp, outp;
+ struct virtio_chan *chan = client->trans;
+- char *rdata = (char *)req->rc+sizeof(struct p9_fcall);
+ unsigned long flags;
+ size_t pdata_off = 0;
+ struct trans_rpage_info *rpinfo = NULL;
+@@ -346,7 +345,8 @@ req_retry_pinned:
+ * Arrange in such a way that server places header in the
+ * alloced memory and payload onto the user buffer.
+ */
+- inp = pack_sg_list(chan->sg, out, VIRTQUEUE_NUM, rdata, 11);
++ inp = pack_sg_list(chan->sg, out,
++ VIRTQUEUE_NUM, req->rc->sdata, 11);
+ /*
+ * Running executables in the filesystem may result in
+ * a read request with kernel buffer as opposed to user buffer.
+@@ -366,8 +366,8 @@ req_retry_pinned:
+ }
+ in += inp;
+ } else {
+- in = pack_sg_list(chan->sg, out, VIRTQUEUE_NUM, rdata,
+- client->msize);
++ in = pack_sg_list(chan->sg, out, VIRTQUEUE_NUM,
++ req->rc->sdata, req->rc->capacity);
+ }
+
+ err = virtqueue_add_buf(chan->vq, chan->sg, out, in, req->tc);
+@@ -592,7 +592,14 @@ static struct p9_trans_module p9_virtio_trans = {
+ .close = p9_virtio_close,
+ .request = p9_virtio_request,
+ .cancel = p9_virtio_cancel,
+- .maxsize = PAGE_SIZE*16,
++
++ /*
++ * We leave one entry for input and one entry for response
++ * headers. We also skip one more entry to accommodate addresses
++ * that are not at a page boundary, which can result in an extra
++ * page in zero copy.
++ */
++ .maxsize = PAGE_SIZE * (VIRTQUEUE_NUM - 3),
+ .pref = P9_TRANS_PREF_PAYLOAD_SEP,
+ .def = 0,
+ .owner = THIS_MODULE,
+diff --git a/net/atm/br2684.c b/net/atm/br2684.c
+index 52cfd0c..d07223c 100644
+--- a/net/atm/br2684.c
++++ b/net/atm/br2684.c
+@@ -558,12 +558,13 @@ static int br2684_regvcc(struct atm_vcc *atmvcc, void __user * arg)
+ spin_unlock_irqrestore(&rq->lock, flags);
+
+ skb_queue_walk_safe(&queue, skb, tmp) {
+- struct net_device *dev = skb->dev;
++ struct net_device *dev;
++
++ br2684_push(atmvcc, skb);
++ dev = skb->dev;
+
+ dev->stats.rx_bytes -= skb->len;
+ dev->stats.rx_packets--;
+-
+- br2684_push(atmvcc, skb);
+ }
+
+ /* initialize netdev carrier state */
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 77930aa..01aa7e7 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -56,8 +56,8 @@ static void hci_cc_inquiry_cancel(struct hci_dev *hdev, struct sk_buff *skb)
+ if (status)
+ return;
+
+- if (test_bit(HCI_MGMT, &hdev->flags) &&
+- test_and_clear_bit(HCI_INQUIRY, &hdev->flags))
++ if (test_and_clear_bit(HCI_INQUIRY, &hdev->flags) &&
++ test_bit(HCI_MGMT, &hdev->flags))
+ mgmt_discovering(hdev->id, 0);
+
+ hci_req_complete(hdev, HCI_OP_INQUIRY_CANCEL, status);
+@@ -74,8 +74,8 @@ static void hci_cc_exit_periodic_inq(struct hci_dev *hdev, struct sk_buff *skb)
+ if (status)
+ return;
+
+- if (test_bit(HCI_MGMT, &hdev->flags) &&
+- test_and_clear_bit(HCI_INQUIRY, &hdev->flags))
++ if (test_and_clear_bit(HCI_INQUIRY, &hdev->flags) &&
++ test_bit(HCI_MGMT, &hdev->flags))
+ mgmt_discovering(hdev->id, 0);
+
+ hci_conn_check_pending(hdev);
+@@ -851,9 +851,8 @@ static inline void hci_cs_inquiry(struct hci_dev *hdev, __u8 status)
+ return;
+ }
+
+- if (test_bit(HCI_MGMT, &hdev->flags) &&
+- !test_and_set_bit(HCI_INQUIRY,
+- &hdev->flags))
++ if (!test_and_set_bit(HCI_INQUIRY, &hdev->flags) &&
++ test_bit(HCI_MGMT, &hdev->flags))
+ mgmt_discovering(hdev->id, 1);
+ }
+
+@@ -1225,8 +1224,8 @@ static inline void hci_inquiry_complete_evt(struct hci_dev *hdev, struct sk_buff
+
+ BT_DBG("%s status %d", hdev->name, status);
+
+- if (test_bit(HCI_MGMT, &hdev->flags) &&
+- test_and_clear_bit(HCI_INQUIRY, &hdev->flags))
++ if (test_and_clear_bit(HCI_INQUIRY, &hdev->flags) &&
++ test_bit(HCI_MGMT, &hdev->flags))
+ mgmt_discovering(hdev->id, 0);
+
+ hci_req_complete(hdev, HCI_OP_INQUIRY, status);
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index 1bacca4..6f156c1 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -231,6 +231,7 @@ static struct net_bridge_port *new_nbp(struct net_bridge *br,
+ int br_add_bridge(struct net *net, const char *name)
+ {
+ struct net_device *dev;
++ int res;
+
+ dev = alloc_netdev(sizeof(struct net_bridge), name,
+ br_dev_setup);
+@@ -240,7 +241,10 @@ int br_add_bridge(struct net *net, const char *name)
+
+ dev_net_set(dev, net);
+
+- return register_netdev(dev);
++ res = register_netdev(dev);
++ if (res)
++ free_netdev(dev);
++ return res;
+ }
+
+ int br_del_bridge(struct net *net, const char *name)
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 2d85ca7..995cbe0 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1456,7 +1456,7 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
+ {
+ struct sk_buff *skb2;
+ const struct ipv6hdr *ip6h;
+- struct icmp6hdr *icmp6h;
++ u8 icmp6_type;
+ u8 nexthdr;
+ unsigned len;
+ int offset;
+@@ -1502,9 +1502,9 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
+ __skb_pull(skb2, offset);
+ skb_reset_transport_header(skb2);
+
+- icmp6h = icmp6_hdr(skb2);
++ icmp6_type = icmp6_hdr(skb2)->icmp6_type;
+
+- switch (icmp6h->icmp6_type) {
++ switch (icmp6_type) {
+ case ICMPV6_MGM_QUERY:
+ case ICMPV6_MGM_REPORT:
+ case ICMPV6_MGM_REDUCTION:
+@@ -1520,16 +1520,23 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
+ err = pskb_trim_rcsum(skb2, len);
+ if (err)
+ goto out;
++ err = -EINVAL;
+ }
+
++ ip6h = ipv6_hdr(skb2);
++
+ switch (skb2->ip_summed) {
+ case CHECKSUM_COMPLETE:
+- if (!csum_fold(skb2->csum))
++ if (!csum_ipv6_magic(&ip6h->saddr, &ip6h->daddr, skb2->len,
++ IPPROTO_ICMPV6, skb2->csum))
+ break;
+ /*FALLTHROUGH*/
+ case CHECKSUM_NONE:
+- skb2->csum = 0;
+- if (skb_checksum_complete(skb2))
++ skb2->csum = ~csum_unfold(csum_ipv6_magic(&ip6h->saddr,
++ &ip6h->daddr,
++ skb2->len,
++ IPPROTO_ICMPV6, 0));
++ if (__skb_checksum_complete(skb2))
+ goto out;
+ }
+
+@@ -1537,7 +1544,7 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
+
+ BR_INPUT_SKB_CB(skb)->igmp = 1;
+
+- switch (icmp6h->icmp6_type) {
++ switch (icmp6_type) {
+ case ICMPV6_MGM_REPORT:
+ {
+ struct mld_msg *mld;
+diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
+index 008dc70..f39ef5c 100644
+--- a/net/core/fib_rules.c
++++ b/net/core/fib_rules.c
+@@ -384,8 +384,8 @@ static int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
+ */
+ list_for_each_entry(r, &ops->rules_list, list) {
+ if (r->action == FR_ACT_GOTO &&
+- r->target == rule->pref) {
+- BUG_ON(rtnl_dereference(r->ctarget) != NULL);
++ r->target == rule->pref &&
++ rtnl_dereference(r->ctarget) == NULL) {
+ rcu_assign_pointer(r->ctarget, rule);
+ if (--ops->unresolved_rules == 0)
+ break;
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 799f06e..16db887 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1383,11 +1383,15 @@ static void neigh_proxy_process(unsigned long arg)
+
+ if (tdif <= 0) {
+ struct net_device *dev = skb->dev;
++
+ __skb_unlink(skb, &tbl->proxy_queue);
+- if (tbl->proxy_redo && netif_running(dev))
++ if (tbl->proxy_redo && netif_running(dev)) {
++ rcu_read_lock();
+ tbl->proxy_redo(skb);
+- else
++ rcu_read_unlock();
++ } else {
+ kfree_skb(skb);
++ }
+
+ dev_put(dev);
+ } else if (!sched_next || tdif < sched_next)
+diff --git a/net/core/scm.c b/net/core/scm.c
+index 4c1ef02..811b53f 100644
+--- a/net/core/scm.c
++++ b/net/core/scm.c
+@@ -192,7 +192,7 @@ int __scm_send(struct socket *sock, struct msghdr *msg, struct scm_cookie *p)
+ goto error;
+
+ cred->uid = cred->euid = p->creds.uid;
+- cred->gid = cred->egid = p->creds.uid;
++ cred->gid = cred->egid = p->creds.gid;
+ put_cred(p->cred);
+ p->cred = cred;
+ }
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 283c0a2..d577199 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -767,7 +767,7 @@ static int igmp_xmarksources(struct ip_mc_list *pmc, int nsrcs, __be32 *srcs)
+ break;
+ for (i=0; i<nsrcs; i++) {
+ /* skip inactive filters */
+- if (pmc->sfcount[MCAST_INCLUDE] ||
++ if (psf->sf_count[MCAST_INCLUDE] ||
+ pmc->sfcount[MCAST_EXCLUDE] !=
+ psf->sf_count[MCAST_EXCLUDE])
+ continue;
+diff --git a/net/ipv4/netfilter.c b/net/ipv4/netfilter.c
+index 2e97e3e..929b27b 100644
+--- a/net/ipv4/netfilter.c
++++ b/net/ipv4/netfilter.c
+@@ -18,17 +18,15 @@ int ip_route_me_harder(struct sk_buff *skb, unsigned addr_type)
+ struct rtable *rt;
+ struct flowi4 fl4 = {};
+ __be32 saddr = iph->saddr;
+- __u8 flags = 0;
++ __u8 flags = skb->sk ? inet_sk_flowi_flags(skb->sk) : 0;
+ unsigned int hh_len;
+
+- if (!skb->sk && addr_type != RTN_LOCAL) {
+- if (addr_type == RTN_UNSPEC)
+- addr_type = inet_addr_type(net, saddr);
+- if (addr_type == RTN_LOCAL || addr_type == RTN_UNICAST)
+- flags |= FLOWI_FLAG_ANYSRC;
+- else
+- saddr = 0;
+- }
++ if (addr_type == RTN_UNSPEC)
++ addr_type = inet_addr_type(net, saddr);
++ if (addr_type == RTN_LOCAL || addr_type == RTN_UNICAST)
++ flags |= FLOWI_FLAG_ANYSRC;
++ else
++ saddr = 0;
+
+ /* some non-standard hacks like ipt_REJECT.c:send_reset() can cause
+ * packets with foreign saddr to appear on the NF_INET_LOCAL_OUT hook.
+@@ -38,7 +36,7 @@ int ip_route_me_harder(struct sk_buff *skb, unsigned addr_type)
+ fl4.flowi4_tos = RT_TOS(iph->tos);
+ fl4.flowi4_oif = skb->sk ? skb->sk->sk_bound_dev_if : 0;
+ fl4.flowi4_mark = skb->mark;
+- fl4.flowi4_flags = skb->sk ? inet_sk_flowi_flags(skb->sk) : flags;
++ fl4.flowi4_flags = flags;
+ rt = ip_route_output_key(net, &fl4);
+ if (IS_ERR(rt))
+ return -1;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index cdabdbf..75ef66f 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -717,7 +717,7 @@ static inline bool compare_hash_inputs(const struct rtable *rt1,
+ {
+ return ((((__force u32)rt1->rt_key_dst ^ (__force u32)rt2->rt_key_dst) |
+ ((__force u32)rt1->rt_key_src ^ (__force u32)rt2->rt_key_src) |
+- (rt1->rt_iif ^ rt2->rt_iif)) == 0);
++ (rt1->rt_route_iif ^ rt2->rt_route_iif)) == 0);
+ }
+
+ static inline int compare_keys(struct rtable *rt1, struct rtable *rt2)
+@@ -727,8 +727,7 @@ static inline int compare_keys(struct rtable *rt1, struct rtable *rt2)
+ (rt1->rt_mark ^ rt2->rt_mark) |
+ (rt1->rt_key_tos ^ rt2->rt_key_tos) |
+ (rt1->rt_route_iif ^ rt2->rt_route_iif) |
+- (rt1->rt_oif ^ rt2->rt_oif) |
+- (rt1->rt_iif ^ rt2->rt_iif)) == 0;
++ (rt1->rt_oif ^ rt2->rt_oif)) == 0;
+ }
+
+ static inline int compare_netns(struct rtable *rt1, struct rtable *rt2)
+@@ -2282,9 +2281,8 @@ int ip_route_input_common(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ rth = rcu_dereference(rth->dst.rt_next)) {
+ if ((((__force u32)rth->rt_key_dst ^ (__force u32)daddr) |
+ ((__force u32)rth->rt_key_src ^ (__force u32)saddr) |
+- (rth->rt_iif ^ iif) |
++ (rth->rt_route_iif ^ iif) |
+ (rth->rt_key_tos ^ tos)) == 0 &&
+- rt_is_input_route(rth) &&
+ rth->rt_mark == skb->mark &&
+ net_eq(dev_net(rth->dst.dev), net) &&
+ !rt_is_expired(rth)) {
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 2646149..4382629 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -276,7 +276,7 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb,
+ int mss;
+ struct rtable *rt;
+ __u8 rcv_wscale;
+- bool ecn_ok;
++ bool ecn_ok = false;
+
+ if (!sysctl_tcp_syncookies || !th->ack || th->rst)
+ goto out;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index bef9f04..b6771f9 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -1115,7 +1115,7 @@ static int tcp_is_sackblock_valid(struct tcp_sock *tp, int is_dsack,
+ return 0;
+
+ /* ...Then it's D-SACK, and must reside below snd_una completely */
+- if (!after(end_seq, tp->snd_una))
++ if (after(end_seq, tp->snd_una))
+ return 0;
+
+ if (!before(start_seq, tp->undo_marker))
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 9cb191e..147ede38 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -913,7 +913,7 @@ static int ipv6_getsockopt_sticky(struct sock *sk, struct ipv6_txoptions *opt,
+ }
+
+ static int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
+- char __user *optval, int __user *optlen)
++ char __user *optval, int __user *optlen, unsigned flags)
+ {
+ struct ipv6_pinfo *np = inet6_sk(sk);
+ int len;
+@@ -962,7 +962,7 @@ static int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
+
+ msg.msg_control = optval;
+ msg.msg_controllen = len;
+- msg.msg_flags = 0;
++ msg.msg_flags = flags;
+
+ lock_sock(sk);
+ skb = np->pktoptions;
+@@ -1222,7 +1222,7 @@ int ipv6_getsockopt(struct sock *sk, int level, int optname,
+ if(level != SOL_IPV6)
+ return -ENOPROTOOPT;
+
+- err = do_ipv6_getsockopt(sk, level, optname, optval, optlen);
++ err = do_ipv6_getsockopt(sk, level, optname, optval, optlen, 0);
+ #ifdef CONFIG_NETFILTER
+ /* we need to exclude all possible ENOPROTOOPTs except default case */
+ if (err == -ENOPROTOOPT && optname != IPV6_2292PKTOPTIONS) {
+@@ -1264,7 +1264,8 @@ int compat_ipv6_getsockopt(struct sock *sk, int level, int optname,
+ return compat_mc_getsockopt(sk, level, optname, optval, optlen,
+ ipv6_getsockopt);
+
+- err = do_ipv6_getsockopt(sk, level, optname, optval, optlen);
++ err = do_ipv6_getsockopt(sk, level, optname, optval, optlen,
++ MSG_CMSG_COMPAT);
+ #ifdef CONFIG_NETFILTER
+ /* we need to exclude all possible ENOPROTOOPTs except default case */
+ if (err == -ENOPROTOOPT && optname != IPV6_2292PKTOPTIONS) {
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index 3e6ebcd..ee7839f 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -1059,7 +1059,7 @@ static int mld_xmarksources(struct ifmcaddr6 *pmc, int nsrcs,
+ break;
+ for (i=0; i<nsrcs; i++) {
+ /* skip inactive filters */
+- if (pmc->mca_sfcount[MCAST_INCLUDE] ||
++ if (psf->sf_count[MCAST_INCLUDE] ||
+ pmc->mca_sfcount[MCAST_EXCLUDE] !=
+ psf->sf_count[MCAST_EXCLUDE])
+ continue;
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index 8b9644a..14b8339 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -165,7 +165,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ int mss;
+ struct dst_entry *dst;
+ __u8 rcv_wscale;
+- bool ecn_ok;
++ bool ecn_ok = false;
+
+ if (!sysctl_tcp_syncookies || !th->ack || th->rst)
+ goto out;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index b83870b..ca7bf10 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -669,7 +669,7 @@ static int __must_check __sta_info_destroy(struct sta_info *sta)
+ BUG_ON(!sdata->bss);
+
+ atomic_dec(&sdata->bss->num_sta_ps);
+- __sta_info_clear_tim_bit(sdata->bss, sta);
++ sta_info_clear_tim_bit(sta);
+ }
+
+ local->num_sta--;
+diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
+index 2a318f2..b5d56a2 100644
+--- a/net/sched/sch_prio.c
++++ b/net/sched/sch_prio.c
+@@ -112,7 +112,7 @@ static struct sk_buff *prio_dequeue(struct Qdisc *sch)
+
+ for (prio = 0; prio < q->bands; prio++) {
+ struct Qdisc *qdisc = q->queues[prio];
+- struct sk_buff *skb = qdisc->dequeue(qdisc);
++ struct sk_buff *skb = qdisc_dequeue_peeked(qdisc);
+ if (skb) {
+ qdisc_bstats_update(sch, skb);
+ sch->q.qlen--;
+diff --git a/net/socket.c b/net/socket.c
+index ed46dbb..1ad42d3 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1965,8 +1965,9 @@ static int __sys_sendmsg(struct socket *sock, struct msghdr __user *msg,
+ * used_address->name_len is initialized to UINT_MAX so that the first
+ * destination address never matches.
+ */
+- if (used_address && used_address->name_len == msg_sys->msg_namelen &&
+- !memcmp(&used_address->name, msg->msg_name,
++ if (used_address && msg_sys->msg_name &&
++ used_address->name_len == msg_sys->msg_namelen &&
++ !memcmp(&used_address->name, msg_sys->msg_name,
+ used_address->name_len)) {
+ err = sock_sendmsg_nosec(sock, msg_sys, total_len);
+ goto out_freectl;
+@@ -1978,8 +1979,9 @@ static int __sys_sendmsg(struct socket *sock, struct msghdr __user *msg,
+ */
+ if (used_address && err >= 0) {
+ used_address->name_len = msg_sys->msg_namelen;
+- memcpy(&used_address->name, msg->msg_name,
+- used_address->name_len);
++ if (msg_sys->msg_name)
++ memcpy(&used_address->name, msg_sys->msg_name,
++ used_address->name_len);
+ }
+
+ out_freectl:
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index cea3381..1ac9443 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -4044,9 +4044,12 @@ static int nl80211_crypto_settings(struct cfg80211_registered_device *rdev,
+ if (len % sizeof(u32))
+ return -EINVAL;
+
++ if (settings->n_akm_suites > NL80211_MAX_NR_AKM_SUITES)
++ return -EINVAL;
++
+ memcpy(settings->akm_suites, data, len);
+
+- for (i = 0; i < settings->n_ciphers_pairwise; i++)
++ for (i = 0; i < settings->n_akm_suites; i++)
+ if (!nl80211_valid_akm_suite(settings->akm_suites[i]))
+ return -EINVAL;
+ }
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 4453eb7..379574c 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -852,6 +852,7 @@ static void handle_channel(struct wiphy *wiphy,
+ return;
+ }
+
++ chan->beacon_found = false;
+ chan->flags = flags | bw_flags | map_regdom_flags(reg_rule->flags);
+ chan->max_antenna_gain = min(chan->orig_mag,
+ (int) MBI_TO_DBI(power_rule->max_antenna_gain));
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index a026b0e..54a0dc2 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -212,6 +212,11 @@ resume:
+ /* only the first xfrm gets the encap type */
+ encap_type = 0;
+
++ if (async && x->repl->check(x, skb, seq)) {
++ XFRM_INC_STATS(net, LINUX_MIB_XFRMINSTATESEQERROR);
++ goto drop_unlock;
++ }
++
+ x->repl->advance(x, seq);
+
+ x->curlft.bytes += skb->len;
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index f134130..3388442 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1758,6 +1758,10 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
+ snd_pcm_uframes_t avail = 0;
+ long wait_time, tout;
+
++ init_waitqueue_entry(&wait, current);
++ set_current_state(TASK_INTERRUPTIBLE);
++ add_wait_queue(&runtime->tsleep, &wait);
++
+ if (runtime->no_period_wakeup)
+ wait_time = MAX_SCHEDULE_TIMEOUT;
+ else {
+@@ -1768,16 +1772,32 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
+ }
+ wait_time = msecs_to_jiffies(wait_time * 1000);
+ }
+- init_waitqueue_entry(&wait, current);
+- add_wait_queue(&runtime->tsleep, &wait);
++
+ for (;;) {
+ if (signal_pending(current)) {
+ err = -ERESTARTSYS;
+ break;
+ }
++
++ /*
++ * We need to check if space became available already
++ * (and thus the wakeup happened already) first to close
++ * the race of space already having become available.
++ * This check must happen after been added to the waitqueue
++ * and having current state be INTERRUPTIBLE.
++ */
++ if (is_playback)
++ avail = snd_pcm_playback_avail(runtime);
++ else
++ avail = snd_pcm_capture_avail(runtime);
++ if (avail >= runtime->twake)
++ break;
+ snd_pcm_stream_unlock_irq(substream);
+- tout = schedule_timeout_interruptible(wait_time);
++
++ tout = schedule_timeout(wait_time);
++
+ snd_pcm_stream_lock_irq(substream);
++ set_current_state(TASK_INTERRUPTIBLE);
+ switch (runtime->status->state) {
+ case SNDRV_PCM_STATE_SUSPENDED:
+ err = -ESTRPIPE;
+@@ -1803,14 +1823,9 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
+ err = -EIO;
+ break;
+ }
+- if (is_playback)
+- avail = snd_pcm_playback_avail(runtime);
+- else
+- avail = snd_pcm_capture_avail(runtime);
+- if (avail >= runtime->twake)
+- break;
+ }
+ _endloop:
++ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&runtime->tsleep, &wait);
+ *availp = avail;
+ return err;
+diff --git a/sound/pci/fm801.c b/sound/pci/fm801.c
+index a7ec703..ecce948 100644
+--- a/sound/pci/fm801.c
++++ b/sound/pci/fm801.c
+@@ -68,6 +68,7 @@ MODULE_PARM_DESC(enable, "Enable FM801 soundcard.");
+ module_param_array(tea575x_tuner, int, NULL, 0444);
+ MODULE_PARM_DESC(tea575x_tuner, "TEA575x tuner access method (0 = auto, 1 = SF256-PCS, 2=SF256-PCP, 3=SF64-PCR, 8=disable, +16=tuner-only).");
+
++#define TUNER_DISABLED (1<<3)
+ #define TUNER_ONLY (1<<4)
+ #define TUNER_TYPE_MASK (~TUNER_ONLY & 0xFFFF)
+
+@@ -1150,7 +1151,8 @@ static int snd_fm801_free(struct fm801 *chip)
+
+ __end_hw:
+ #ifdef CONFIG_SND_FM801_TEA575X_BOOL
+- snd_tea575x_exit(&chip->tea);
++ if (!(chip->tea575x_tuner & TUNER_DISABLED))
++ snd_tea575x_exit(&chip->tea);
+ #endif
+ if (chip->irq >= 0)
+ free_irq(chip->irq, chip);
+@@ -1236,7 +1238,6 @@ static int __devinit snd_fm801_create(struct snd_card *card,
+ (tea575x_tuner & TUNER_TYPE_MASK) < 4) {
+ if (snd_tea575x_init(&chip->tea)) {
+ snd_printk(KERN_ERR "TEA575x radio not found\n");
+- snd_fm801_free(chip);
+ return -ENODEV;
+ }
+ } else if ((tea575x_tuner & TUNER_TYPE_MASK) == 0) {
+@@ -1251,11 +1252,15 @@ static int __devinit snd_fm801_create(struct snd_card *card,
+ }
+ if (tea575x_tuner == 4) {
+ snd_printk(KERN_ERR "TEA575x radio not found\n");
+- snd_fm801_free(chip);
+- return -ENODEV;
++ chip->tea575x_tuner = TUNER_DISABLED;
+ }
+ }
+- strlcpy(chip->tea.card, snd_fm801_tea575x_gpios[(tea575x_tuner & TUNER_TYPE_MASK) - 1].name, sizeof(chip->tea.card));
++ if (!(chip->tea575x_tuner & TUNER_DISABLED)) {
++ strlcpy(chip->tea.card,
++ snd_fm801_tea575x_gpios[(tea575x_tuner &
++ TUNER_TYPE_MASK) - 1].name,
++ sizeof(chip->tea.card));
++ }
+ #endif
+
+ *rchip = chip;
+diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
+index 26a1521..fb6fbe4 100644
+--- a/sound/pci/hda/patch_cirrus.c
++++ b/sound/pci/hda/patch_cirrus.c
+@@ -508,7 +508,7 @@ static int add_volume(struct hda_codec *codec, const char *name,
+ int index, unsigned int pval, int dir,
+ struct snd_kcontrol **kctlp)
+ {
+- char tmp[32];
++ char tmp[44];
+ struct snd_kcontrol_new knew =
+ HDA_CODEC_VOLUME_IDX(tmp, index, 0, 0, HDA_OUTPUT);
+ knew.private_value = pval;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 524ff26..4c7cd6b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -397,7 +397,7 @@ struct alc_spec {
+ unsigned int auto_mic:1;
+ unsigned int automute:1; /* HP automute enabled */
+ unsigned int detect_line:1; /* Line-out detection enabled */
+- unsigned int automute_lines:1; /* automute line-out as well */
++ unsigned int automute_lines:1; /* automute line-out as well; NOP when automute_hp_lo isn't set */
+ unsigned int automute_hp_lo:1; /* both HP and LO available */
+
+ /* other flags */
+@@ -1161,7 +1161,7 @@ static void update_speakers(struct hda_codec *codec)
+ if (spec->autocfg.line_out_pins[0] == spec->autocfg.hp_pins[0] ||
+ spec->autocfg.line_out_pins[0] == spec->autocfg.speaker_pins[0])
+ return;
+- if (!spec->automute_lines || !spec->automute)
++ if (!spec->automute || (spec->automute_hp_lo && !spec->automute_lines))
+ on = 0;
+ else
+ on = spec->jack_present;
+@@ -1494,7 +1494,7 @@ static int alc_automute_mode_get(struct snd_kcontrol *kcontrol,
+ unsigned int val;
+ if (!spec->automute)
+ val = 0;
+- else if (!spec->automute_lines)
++ else if (!spec->automute_hp_lo || !spec->automute_lines)
+ val = 1;
+ else
+ val = 2;
+@@ -1515,7 +1515,8 @@ static int alc_automute_mode_put(struct snd_kcontrol *kcontrol,
+ spec->automute = 0;
+ break;
+ case 1:
+- if (spec->automute && !spec->automute_lines)
++ if (spec->automute &&
++ (!spec->automute_hp_lo || !spec->automute_lines))
+ return 0;
+ spec->automute = 1;
+ spec->automute_lines = 0;
+@@ -1858,7 +1859,9 @@ do_sku:
+ * 15 : 1 --> enable the function "Mute internal speaker
+ * when the external headphone out jack is plugged"
+ */
+- if (!spec->autocfg.hp_pins[0]) {
++ if (!spec->autocfg.hp_pins[0] &&
++ !(spec->autocfg.line_out_pins[0] &&
++ spec->autocfg.line_out_type == AUTO_PIN_HP_OUT)) {
+ hda_nid_t nid;
+ tmp = (ass >> 11) & 0x3; /* HP to chassis */
+ if (tmp == 0)
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index 7f81cc2..5c42f3e 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -5470,6 +5470,7 @@ again:
+ switch (codec->vendor_id) {
+ case 0x111d76d1:
+ case 0x111d76d9:
++ case 0x111d76df:
+ case 0x111d76e5:
+ case 0x111d7666:
+ case 0x111d7667:
+@@ -6399,6 +6400,7 @@ static const struct hda_codec_preset snd_hda_preset_sigmatel[] = {
+ { .id = 0x111d76cc, .name = "92HD89F3", .patch = patch_stac92hd73xx },
+ { .id = 0x111d76cd, .name = "92HD89F2", .patch = patch_stac92hd73xx },
+ { .id = 0x111d76ce, .name = "92HD89F1", .patch = patch_stac92hd73xx },
++ { .id = 0x111d76df, .name = "92HD93BXX", .patch = patch_stac92hd83xxx},
+ { .id = 0x111d76e0, .name = "92HD91BXX", .patch = patch_stac92hd83xxx},
+ { .id = 0x111d76e3, .name = "92HD98BXX", .patch = patch_stac92hd83xxx},
+ { .id = 0x111d76e5, .name = "92HD99BXX", .patch = patch_stac92hd83xxx},
+diff --git a/sound/soc/blackfin/bf5xx-ad193x.c b/sound/soc/blackfin/bf5xx-ad193x.c
+index d6651c0..2f0f836 100644
+--- a/sound/soc/blackfin/bf5xx-ad193x.c
++++ b/sound/soc/blackfin/bf5xx-ad193x.c
+@@ -103,7 +103,7 @@ static struct snd_soc_dai_link bf5xx_ad193x_dai[] = {
+ .cpu_dai_name = "bfin-tdm.0",
+ .codec_dai_name ="ad193x-hifi",
+ .platform_name = "bfin-tdm-pcm-audio",
+- .codec_name = "ad193x.5",
++ .codec_name = "spi0.5",
+ .ops = &bf5xx_ad193x_ops,
+ },
+ {
+@@ -112,7 +112,7 @@ static struct snd_soc_dai_link bf5xx_ad193x_dai[] = {
+ .cpu_dai_name = "bfin-tdm.1",
+ .codec_dai_name ="ad193x-hifi",
+ .platform_name = "bfin-tdm-pcm-audio",
+- .codec_name = "ad193x.5",
++ .codec_name = "spi0.5",
+ .ops = &bf5xx_ad193x_ops,
+ },
+ };
+diff --git a/sound/soc/codecs/ad193x.c b/sound/soc/codecs/ad193x.c
+index 2374ca5..f1a8be5 100644
+--- a/sound/soc/codecs/ad193x.c
++++ b/sound/soc/codecs/ad193x.c
+@@ -307,7 +307,8 @@ static int ad193x_hw_params(struct snd_pcm_substream *substream,
+ snd_soc_write(codec, AD193X_PLL_CLK_CTRL0, reg);
+
+ reg = snd_soc_read(codec, AD193X_DAC_CTRL2);
+- reg = (reg & (~AD193X_DAC_WORD_LEN_MASK)) | word_len;
++ reg = (reg & (~AD193X_DAC_WORD_LEN_MASK))
++ | (word_len << AD193X_DAC_WORD_LEN_SHFT);
+ snd_soc_write(codec, AD193X_DAC_CTRL2, reg);
+
+ reg = snd_soc_read(codec, AD193X_ADC_CTRL1);
+diff --git a/sound/soc/codecs/ad193x.h b/sound/soc/codecs/ad193x.h
+index 9747b54..cccc2e8 100644
+--- a/sound/soc/codecs/ad193x.h
++++ b/sound/soc/codecs/ad193x.h
+@@ -34,7 +34,8 @@
+ #define AD193X_DAC_LEFT_HIGH (1 << 3)
+ #define AD193X_DAC_BCLK_INV (1 << 7)
+ #define AD193X_DAC_CTRL2 0x804
+-#define AD193X_DAC_WORD_LEN_MASK 0xC
++#define AD193X_DAC_WORD_LEN_SHFT 3
++#define AD193X_DAC_WORD_LEN_MASK 0x18
+ #define AD193X_DAC_MASTER_MUTE 1
+ #define AD193X_DAC_CHNL_MUTE 0x805
+ #define AD193X_DACL1_MUTE 0
+@@ -63,7 +64,7 @@
+ #define AD193X_ADC_CTRL1 0x80f
+ #define AD193X_ADC_SERFMT_MASK 0x60
+ #define AD193X_ADC_SERFMT_STEREO (0 << 5)
+-#define AD193X_ADC_SERFMT_TDM (1 << 2)
++#define AD193X_ADC_SERFMT_TDM (1 << 5)
+ #define AD193X_ADC_SERFMT_AUX (2 << 5)
+ #define AD193X_ADC_WORD_LEN_MASK 0x3
+ #define AD193X_ADC_CTRL2 0x810
+diff --git a/sound/soc/codecs/ssm2602.c b/sound/soc/codecs/ssm2602.c
+index 84f4ad5..9801cd7 100644
+--- a/sound/soc/codecs/ssm2602.c
++++ b/sound/soc/codecs/ssm2602.c
+@@ -431,7 +431,8 @@ static int ssm2602_set_dai_fmt(struct snd_soc_dai *codec_dai,
+ static int ssm2602_set_bias_level(struct snd_soc_codec *codec,
+ enum snd_soc_bias_level level)
+ {
+- u16 reg = snd_soc_read(codec, SSM2602_PWR) & 0xff7f;
++ u16 reg = snd_soc_read(codec, SSM2602_PWR);
++ reg &= ~(PWR_POWER_OFF | PWR_OSC_PDN);
+
+ switch (level) {
+ case SND_SOC_BIAS_ON:
+diff --git a/sound/soc/fsl/mpc5200_dma.c b/sound/soc/fsl/mpc5200_dma.c
+index fff695c..cbaf8b7 100644
+--- a/sound/soc/fsl/mpc5200_dma.c
++++ b/sound/soc/fsl/mpc5200_dma.c
+@@ -368,7 +368,7 @@ static struct snd_soc_platform_driver mpc5200_audio_dma_platform = {
+ .pcm_free = &psc_dma_free,
+ };
+
+-static int mpc5200_hpcd_probe(struct of_device *op)
++static int mpc5200_hpcd_probe(struct platform_device *op)
+ {
+ phys_addr_t fifo;
+ struct psc_dma *psc_dma;
+@@ -486,7 +486,7 @@ out_unmap:
+ return ret;
+ }
+
+-static int mpc5200_hpcd_remove(struct of_device *op)
++static int mpc5200_hpcd_remove(struct platform_device *op)
+ {
+ struct psc_dma *psc_dma = dev_get_drvdata(&op->dev);
+
+@@ -518,7 +518,7 @@ MODULE_DEVICE_TABLE(of, mpc5200_hpcd_match);
+ static struct platform_driver mpc5200_hpcd_of_driver = {
+ .probe = mpc5200_hpcd_probe,
+ .remove = mpc5200_hpcd_remove,
+- .dev = {
++ .driver = {
+ .owner = THIS_MODULE,
+ .name = "mpc5200-pcm-audio",
+ .of_match_table = mpc5200_hpcd_match,
+diff --git a/sound/soc/omap/omap-mcbsp.c b/sound/soc/omap/omap-mcbsp.c
+index 07b7723..4b82290 100644
+--- a/sound/soc/omap/omap-mcbsp.c
++++ b/sound/soc/omap/omap-mcbsp.c
+@@ -516,6 +516,12 @@ static int omap_mcbsp_dai_set_dai_sysclk(struct snd_soc_dai *cpu_dai,
+ struct omap_mcbsp_reg_cfg *regs = &mcbsp_data->regs;
+ int err = 0;
+
++ if (mcbsp_data->active)
++ if (freq == mcbsp_data->in_freq)
++ return 0;
++ else
++ return -EBUSY;
++
+ /* The McBSP signal muxing functions are only available on McBSP1 */
+ if (clk_id == OMAP_MCBSP_CLKR_SRC_CLKR ||
+ clk_id == OMAP_MCBSP_CLKR_SRC_CLKX ||
+diff --git a/sound/soc/soc-jack.c b/sound/soc/soc-jack.c
+index 7c17b98..fa31d9c 100644
+--- a/sound/soc/soc-jack.c
++++ b/sound/soc/soc-jack.c
+@@ -105,7 +105,7 @@ void snd_soc_jack_report(struct snd_soc_jack *jack, int status, int mask)
+
+ snd_soc_dapm_sync(dapm);
+
+- snd_jack_report(jack->jack, status);
++ snd_jack_report(jack->jack, jack->status);
+
+ out:
+ mutex_unlock(&codec->mutex);
+@@ -327,7 +327,7 @@ int snd_soc_jack_add_gpios(struct snd_soc_jack *jack, int count,
+ IRQF_TRIGGER_FALLING,
+ gpios[i].name,
+ &gpios[i]);
+- if (ret)
++ if (ret < 0)
+ goto err;
+
+ if (gpios[i].wake) {
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 220c616..57a8e2d 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -529,8 +529,11 @@ static void *snd_usb_audio_probe(struct usb_device *dev,
+ return chip;
+
+ __error:
+- if (chip && !chip->num_interfaces)
+- snd_card_free(chip->card);
++ if (chip) {
++ if (!chip->num_interfaces)
++ snd_card_free(chip->card);
++ chip->probing = 0;
++ }
+ mutex_unlock(&register_mutex);
+ __err_val:
+ return NULL;
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index eec1963..40fd1c7 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -1111,6 +1111,8 @@ static int dso__load_sym(struct dso *dso, struct map *map, const char *name,
+ }
+
+ opdsec = elf_section_by_name(elf, &ehdr, &opdshdr, ".opd", &opdidx);
++ if (opdshdr.sh_type != SHT_PROGBITS)
++ opdsec = NULL;
+ if (opdsec)
+ opddata = elf_rawdata(opdsec, NULL);
+
Added: dists/sid/linux-2.6/debian/patches/bugfix/all/stable/3.0.6.patch
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ dists/sid/linux-2.6/debian/patches/bugfix/all/stable/3.0.6.patch Tue Oct 4 05:41:51 2011 (r18146)
@@ -0,0 +1,17 @@
+diff --git a/Makefile b/Makefile
+index eeff5df..7767a64 100644
+diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
+index 830e1f1..7fcdbbb 100644
+--- a/drivers/gpu/drm/radeon/r100.c
++++ b/drivers/gpu/drm/radeon/r100.c
+@@ -773,8 +773,8 @@ int r100_copy_blit(struct radeon_device *rdev,
+ radeon_ring_write(rdev, (0x1fff) | (0x1fff << 16));
+ radeon_ring_write(rdev, 0);
+ radeon_ring_write(rdev, (0x1fff) | (0x1fff << 16));
+- radeon_ring_write(rdev, num_pages);
+- radeon_ring_write(rdev, num_pages);
++ radeon_ring_write(rdev, num_gpu_pages);
++ radeon_ring_write(rdev, num_gpu_pages);
+ radeon_ring_write(rdev, cur_pages | (stride_pixels << 16));
+ }
+ radeon_ring_write(rdev, PACKET0(RADEON_DSTCACHE_CTLSTAT, 0));
Modified: dists/sid/linux-2.6/debian/patches/series/5
==============================================================================
--- dists/sid/linux-2.6/debian/patches/series/5 Sun Oct 2 22:54:51 2011 (r18145)
+++ dists/sid/linux-2.6/debian/patches/series/5 Tue Oct 4 05:41:51 2011 (r18146)
@@ -1,4 +1,9 @@
-+ bugfix/all/fm801-Fix-double-free-in-case-of-error-in-tuner-dete.patch
-+ bugfix/all/fm801-Gracefully-handle-failure-of-tuner-auto-detect.patch
-+ bugfix/all/block-Free-queue-resources-at-blk_release_queue.patch
+ bugfix/all/kobj_uevent-Ignore-if-some-listeners-cannot-handle-m.patch
+
+- bugfix/sparc/sparc64-only-panther-cheetah-chips-have-popc.patch
+- bugfix/all/rt2x00-fix-crash-in-rt2800usb_get_txwi.patch
+- bugfix/all/rt2x00-fix-crash-in-rt2800usb_write_tx_desc.patch
+- bugfix/all/sendmmsg-sendmsg-fix-unsafe-user-pointer-access.patch
+- bugfix/all/netfilter-TCP-and-raw-fix-for-ip_route_me_harder.patch
++ bugfix/all/stable/3.0.5.patch
++ bugfix/all/stable/3.0.6.patch