[kernel] r10684 - in dists/trunk/linux-2.6/debian/patches: bugfix/all series

Maximilian Attems maks at alioth.debian.org
Mon Mar 3 13:24:35 UTC 2008


Author: maks
Date: Mon Mar  3 13:24:32 2008
New Revision: 10684

Log:
update to 2.6.25-rc3-git4

no further conflicts; a bunch of important x86 and firewire fixes.
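
For reference, an incremental -git snapshot like this is normally sanity-checked and
applied with plain patch(1) on top of a tree already at the matching -rc level; a
minimal sketch, assuming a 2.6.25-rc3 base and a placeholder path to the patch file:

    cd linux-2.6.25-rc3
    patch -p1 --dry-run < /path/to/patch-2.6.25-rc3-git4   # verify it applies without conflicts
    patch -p1 < /path/to/patch-2.6.25-rc3-git4              # then apply for real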


Added:
   dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.25-rc3-git4
Modified:
   dists/trunk/linux-2.6/debian/patches/series/1~experimental.1

Added: dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.25-rc3-git4
==============================================================================
--- (empty file)
+++ dists/trunk/linux-2.6/debian/patches/bugfix/all/patch-2.6.25-rc3-git4	Mon Mar  3 13:24:32 2008
@@ -0,0 +1,18263 @@
+diff --git a/Documentation/debugging-via-ohci1394.txt b/Documentation/debugging-via-ohci1394.txt
+index de4804e..c360d4e 100644
+--- a/Documentation/debugging-via-ohci1394.txt
++++ b/Documentation/debugging-via-ohci1394.txt
+@@ -36,14 +36,15 @@ available (notebooks) or too slow for extensive debug information (like ACPI).
+ Drivers
+ -------
+ 
+-The OHCI-1394 drivers in drivers/firewire and drivers/ieee1394 initialize
+-the OHCI-1394 controllers to a working state and can be used to enable
+-physical DMA. By default you only have to load the driver, and physical
+-DMA access will be granted to all remote nodes, but it can be turned off
+-when using the ohci1394 driver.
+-
+-Because these drivers depend on the PCI enumeration to be completed, an
+-initialization routine which can runs pretty early (long before console_init(),
++The ohci1394 driver in drivers/ieee1394 initializes the OHCI-1394 controllers
++to a working state and enables physical DMA by default for all remote nodes.
++This can be turned off by ohci1394's module parameter phys_dma=0.
++
++The alternative firewire-ohci driver in drivers/firewire uses filtered physical
++DMA, hence is not yet suitable for remote debugging.
++
++Because ohci1394 depends on the PCI enumeration to be completed, an
++initialization routine which runs pretty early (long before console_init()
+ which makes the printk buffer appear on the console can be called) was written.
+ 
+ To activate it, enable CONFIG_PROVIDE_OHCI1394_DMA_INIT (Kernel hacking menu:
+diff --git a/Documentation/feature-removal-schedule.txt b/Documentation/feature-removal-schedule.txt
+index 4d3aa51..c1d1fd0 100644
+--- a/Documentation/feature-removal-schedule.txt
++++ b/Documentation/feature-removal-schedule.txt
+@@ -172,6 +172,16 @@ Who:	Len Brown <len.brown at intel.com>
+ 
+ ---------------------------
+ 
++What:	ide-tape driver
++When:	July 2008
++Files:	drivers/ide/ide-tape.c
++Why:	This driver might not have any users anymore and maintaining it for no
++	reason is an effort no one wants to make.
++Who:	Bartlomiej Zolnierkiewicz <bzolnier at gmail.com>, Borislav Petkov
++	<petkovbb at googlemail.com>
++
++---------------------------
++
+ What: libata spindown skipping and warning
+ When: Dec 2008
+ Why:  Some halt(8) implementations synchronize caches for and spin
+@@ -306,3 +316,15 @@ Why:	Largely unmaintained and almost entirely unused.  File system
+ 	is largely pointless as without a lot of work only the most
+ 	trivial of Solaris binaries can work with the emulation code.
+ Who:	David S. Miller <davem at davemloft.net>
++
++---------------------------
++
++What:	init_mm export
++When:	2.6.26
++Why:	Not used in-tree. The current out-of-tree users used it to
++	work around problems in the CPA code which should be resolved
++	by now. One usecase was described to provide verification code
++	of the CPA operation. That's a good idea in general, but such
++	code / infrastructure should be in the kernel and not in some
++	out-of-tree driver.
++Who:	Thomas Gleixner <tglx at linutronix.de>
+diff --git a/Documentation/ide.txt b/Documentation/ide.txt
+index 94e2e3b..bcd7cd1 100644
+--- a/Documentation/ide.txt
++++ b/Documentation/ide.txt
+@@ -258,8 +258,6 @@ Summary of ide driver parameters for kernel command line
+ 			  As for VLB, it is safest to not specify it.
+ 			  Bigger values are safer than smaller ones.
+ 
+- "idex=noprobe"		: do not attempt to access/use this interface
+- 
+  "idex=base"		: probe for an interface at the addr specified,
+ 			  where "base" is usually 0x1f0 or 0x170
+ 			  and "ctl" is assumed to be "base"+0x206
+@@ -309,53 +307,6 @@ are detected automatically).
+ 
+ ================================================================================
+ 
+-IDE ATAPI streaming tape driver
+--------------------------------
+-
+-This driver is a part of the Linux ide driver and works in co-operation
+-with linux/drivers/block/ide.c.
+-
+-The driver, in co-operation with ide.c, basically traverses the
+-request-list for the block device interface. The character device
+-interface, on the other hand, creates new requests, adds them
+-to the request-list of the block device, and waits for their completion.
+-
+-Pipelined operation mode is now supported on both reads and writes.
+-
+-The block device major and minor numbers are determined from the
+-tape's relative position in the ide interfaces, as explained in ide.c.
+-
+-The character device interface consists of the following devices:
+-
+- ht0		major 37, minor 0	first  IDE tape, rewind on close.
+- ht1		major 37, minor 1	second IDE tape, rewind on close.
+- ...
+- nht0		major 37, minor 128	first  IDE tape, no rewind on close.
+- nht1		major 37, minor 129	second IDE tape, no rewind on close.
+- ...
+-
+-Run /dev/MAKEDEV to create the above entries.
+-
+-The general magnetic tape commands compatible interface, as defined by
+-include/linux/mtio.h, is accessible through the character device.
+-
+-General ide driver configuration options, such as the interrupt-unmask
+-flag, can be configured by issuing an ioctl to the block device interface,
+-as any other ide device.
+-
+-Our own ide-tape ioctl's can be issued to either the block device or
+-the character device interface.
+-
+-Maximal throughput with minimal bus load will usually be achieved in the
+-following scenario:
+-
+-	1.	ide-tape is operating in the pipelined operation mode.
+-	2.	No buffering is performed by the user backup program.
+-
+-
+-
+-================================================================================
+-
+ Some Terminology
+ ----------------
+ IDE = Integrated Drive Electronics, meaning that each drive has a built-in
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 36c7bc6..fed09b5 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -767,14 +767,14 @@ S:	Maintained
+ 
+ BLACKFIN ARCHITECTURE
+ P:	Bryan Wu
+-M:	bryan.wu at analog.com
++M:	cooloney at kernel.org
+ L:	uclinux-dist-devel at blackfin.uclinux.org (subscribers-only)
+ W:	http://blackfin.uclinux.org
+ S:	Supported
+ 
+ BLACKFIN EMAC DRIVER
+ P:	Bryan Wu
+-M:	bryan.wu at analog.com
++M:	cooloney at kernel.org
+ L:	uclinux-dist-devel at blackfin.uclinux.org (subscribers-only)
+ W:	http://blackfin.uclinux.org
+ S:	Supported
+@@ -982,6 +982,12 @@ M:	mchan at broadcom.com
+ L:	netdev at vger.kernel.org
+ S:	Supported
+ 
++BROADCOM BNX2X 10 GIGABIT ETHERNET DRIVER
++P:	Eliezer Tamir
++M:	eliezert at broadcom.com
++L:	netdev at vger.kernel.org
++S:	Supported
++
+ BROADCOM TG3 GIGABIT ETHERNET DRIVER
+ P:	Michael Chan
+ M:	mchan at broadcom.com
+@@ -2744,6 +2750,8 @@ S:	Maintained
+ NETEFFECT IWARP RNIC DRIVER (IW_NES)
+ P:	Faisal Latif
+ M:	flatif at neteffect.com
++P:	Nishi Gupta
++M:	ngupta at neteffect.com
+ P:	Glenn Streiff
+ M:	gstreiff at neteffect.com
+ L:	general at lists.openfabrics.org
+@@ -3884,10 +3892,13 @@ M:	trivial at kernel.org
+ L:	linux-kernel at vger.kernel.org
+ S:	Maintained
+ 
+-TULIP NETWORK DRIVER
+-L:	tulip-users at lists.sourceforge.net
+-W:	http://sourceforge.net/projects/tulip/
+-S:	Orphan
++TULIP NETWORK DRIVERS
++P:	Grant Grundler
++M:	grundler at parisc-linux.org
++P:	Kyle McMartin
++M:	kyle at parisc-linux.org
++L:	netdev at vger.kernel.org
++S:	Maintained
+ 
+ TUN/TAP driver
+ P:	Maxim Krasnyansky
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 9619c43..16b82e1 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -939,7 +939,8 @@ config KEXEC
+ 
+ config ATAGS_PROC
+ 	bool "Export atags in procfs"
+-	default n
++	depends on KEXEC
++	default y
+ 	help
+ 	  Should the atags used to boot the kernel be exported in an "atags"
+ 	  file in procfs. Useful with kexec.
+diff --git a/arch/arm/mach-pxa/cpu-pxa.c b/arch/arm/mach-pxa/cpu-pxa.c
+index 939a386..4b21479 100644
+--- a/arch/arm/mach-pxa/cpu-pxa.c
++++ b/arch/arm/mach-pxa/cpu-pxa.c
+@@ -43,7 +43,7 @@
+ 
+ #ifdef DEBUG
+ static unsigned int freq_debug;
+-MODULE_PARM(freq_debug, "i");
++module_param(freq_debug, uint, 0);
+ MODULE_PARM_DESC(freq_debug, "Set the debug messages to on=1/off=0");
+ #else
+ #define freq_debug  0
+diff --git a/arch/arm/mach-pxa/pxa3xx.c b/arch/arm/mach-pxa/pxa3xx.c
+index 7cd9ef8..35f25fd 100644
+--- a/arch/arm/mach-pxa/pxa3xx.c
++++ b/arch/arm/mach-pxa/pxa3xx.c
+@@ -129,28 +129,20 @@ static void clk_pxa3xx_cken_enable(struct clk *clk)
+ {
+ 	unsigned long mask = 1ul << (clk->cken & 0x1f);
+ 
+-	local_irq_disable();
+-
+ 	if (clk->cken < 32)
+ 		CKENA |= mask;
+ 	else
+ 		CKENB |= mask;
+-
+-	local_irq_enable();
+ }
+ 
+ static void clk_pxa3xx_cken_disable(struct clk *clk)
+ {
+ 	unsigned long mask = 1ul << (clk->cken & 0x1f);
+ 
+-	local_irq_disable();
+-
+ 	if (clk->cken < 32)
+ 		CKENA &= ~mask;
+ 	else
+ 		CKENB &= ~mask;
+-
+-	local_irq_enable();
+ }
+ 
+ static const struct clkops clk_pxa3xx_cken_ops = {
+diff --git a/arch/arm/mach-pxa/zylonite.c b/arch/arm/mach-pxa/zylonite.c
+index 7731d50..afd2cbf 100644
+--- a/arch/arm/mach-pxa/zylonite.c
++++ b/arch/arm/mach-pxa/zylonite.c
+@@ -58,7 +58,7 @@ static struct platform_device smc91x_device = {
+ 	.resource	= smc91x_resources,
+ };
+ 
+-#if defined(CONFIG_FB_PXA) || (CONFIG_FB_PXA_MODULES)
++#if defined(CONFIG_FB_PXA) || defined(CONFIG_FB_PXA_MODULE)
+ static void zylonite_backlight_power(int on)
+ {
+ 	gpio_set_value(gpio_backlight, on);
+diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
+index 2728b0e..3f6dc40 100644
+--- a/arch/arm/mm/mmap.c
++++ b/arch/arm/mm/mmap.c
+@@ -120,6 +120,8 @@ full_search:
+  */
+ int valid_phys_addr_range(unsigned long addr, size_t size)
+ {
++	if (addr < PHYS_OFFSET)
++		return 0;
+ 	if (addr + size > __pa(high_memory))
+ 		return 0;
+ 
+diff --git a/arch/avr32/boards/atstk1000/atstk1004.c b/arch/avr32/boards/atstk1000/atstk1004.c
+index 5a77030..e765a86 100644
+--- a/arch/avr32/boards/atstk1000/atstk1004.c
++++ b/arch/avr32/boards/atstk1000/atstk1004.c
+@@ -129,7 +129,7 @@ static int __init atstk1004_init(void)
+ #ifdef CONFIG_BOARD_ATSTK100X_SPI1
+ 	at32_add_device_spi(1, spi1_board_info, ARRAY_SIZE(spi1_board_info));
+ #endif
+-#ifndef CONFIG_BOARD_ATSTK1002_SW2_CUSTOM
++#ifndef CONFIG_BOARD_ATSTK100X_SW2_CUSTOM
+ 	at32_add_device_mci(0);
+ #endif
+ 	at32_add_device_lcdc(0, &atstk1000_lcdc_data,
+diff --git a/arch/avr32/kernel/process.c b/arch/avr32/kernel/process.c
+index eaaa69b..7f4af0b 100644
+--- a/arch/avr32/kernel/process.c
++++ b/arch/avr32/kernel/process.c
+@@ -11,6 +11,7 @@
+ #include <linux/fs.h>
+ #include <linux/ptrace.h>
+ #include <linux/reboot.h>
++#include <linux/tick.h>
+ #include <linux/uaccess.h>
+ #include <linux/unistd.h>
+ 
+@@ -30,8 +31,10 @@ void cpu_idle(void)
+ {
+ 	/* endless idle loop with no priority at all */
+ 	while (1) {
++		tick_nohz_stop_sched_tick();
+ 		while (!need_resched())
+ 			cpu_idle_sleep();
++		tick_nohz_restart_sched_tick();
+ 		preempt_enable_no_resched();
+ 		schedule();
+ 		preempt_disable();
+@@ -345,6 +348,7 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long usp,
+ 	p->thread.cpu_context.ksp = (unsigned long)childregs;
+ 	p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
+ 
++	clear_tsk_thread_flag(p, TIF_DEBUG);
+ 	if ((clone_flags & CLONE_PTRACE) && test_thread_flag(TIF_DEBUG))
+ 		ocd_enable(p);
+ 
+diff --git a/arch/avr32/mm/fault.c b/arch/avr32/mm/fault.c
+index 6560cb1..ce4e429 100644
+--- a/arch/avr32/mm/fault.c
++++ b/arch/avr32/mm/fault.c
+@@ -189,6 +189,8 @@ no_context:
+ 
+ 	page = sysreg_read(PTBR);
+ 	printk(KERN_ALERT "ptbr = %08lx", page);
++	if (address >= TASK_SIZE)
++		page = (unsigned long)swapper_pg_dir;
+ 	if (page) {
+ 		page = ((unsigned long *)page)[address >> 22];
+ 		printk(" pgd = %08lx", page);
+diff --git a/arch/blackfin/Makefile b/arch/blackfin/Makefile
+index fe254f8..75eba2c 100644
+--- a/arch/blackfin/Makefile
++++ b/arch/blackfin/Makefile
+@@ -98,8 +98,11 @@ drivers-$(CONFIG_OPROFILE) += arch/$(ARCH)/oprofile/
+ #	them changed.  We use .mach to indicate when they were updated
+ #	last, otherwise make uses the target directory mtime.
+ 
++       show_mach_symlink = :
++ quiet_show_mach_symlink = echo '  SYMLINK include/asm-$(ARCH)/mach-$(MACHINE) -> include/asm-$(ARCH)/mach'
++silent_show_mach_symlink = :
+ include/asm-blackfin/.mach: $(wildcard include/config/arch/*.h) include/config/auto.conf
+-	@echo '  SYMLINK include/asm-$(ARCH)/mach-$(MACHINE) -> include/asm-$(ARCH)/mach'
++	@$($(quiet)show_mach_symlink)
+ ifneq ($(KBUILD_SRC),)
+ 	$(Q)mkdir -p include/asm-$(ARCH)
+ 	$(Q)ln -fsn $(srctree)/include/asm-$(ARCH)/mach-$(MACHINE) include/asm-$(ARCH)/mach
+diff --git a/arch/blackfin/configs/BF527-EZKIT_defconfig b/arch/blackfin/configs/BF527-EZKIT_defconfig
+index d59ee15..ae320dc 100644
+--- a/arch/blackfin/configs/BF527-EZKIT_defconfig
++++ b/arch/blackfin/configs/BF527-EZKIT_defconfig
+@@ -1,7 +1,6 @@
+ #
+ # Automatically generated make config: don't edit
+-# Linux kernel version: 2.6.22.14
+-# Thu Nov 29 17:32:47 2007
++# Linux kernel version: 2.6.22.16
+ #
+ # CONFIG_MMU is not set
+ # CONFIG_FPU is not set
+@@ -116,7 +115,10 @@ CONFIG_PREEMPT_VOLUNTARY=y
+ # Processor and Board Settings
+ #
+ # CONFIG_BF522 is not set
++# CONFIG_BF523 is not set
++# CONFIG_BF524 is not set
+ # CONFIG_BF525 is not set
++# CONFIG_BF526 is not set
+ CONFIG_BF527=y
+ # CONFIG_BF531 is not set
+ # CONFIG_BF532 is not set
+@@ -306,6 +308,7 @@ CONFIG_BFIN_DCACHE=y
+ # CONFIG_BFIN_WB is not set
+ CONFIG_BFIN_WT=y
+ CONFIG_L1_MAX_PIECE=16
++# CONFIG_MPU is not set
+ 
+ #
+ # Asynchonous Memory Configuration
+@@ -354,6 +357,7 @@ CONFIG_BINFMT_ZFLAT=y
+ # Power management options
+ #
+ # CONFIG_PM is not set
++# CONFIG_PM_WAKEUP_BY_GPIO is not set
+ 
+ #
+ # Networking
+@@ -496,7 +500,6 @@ CONFIG_MTD_CFI_I2=y
+ # CONFIG_MTD_CFI_INTELEXT is not set
+ # CONFIG_MTD_CFI_AMDSTD is not set
+ # CONFIG_MTD_CFI_STAA is not set
+-CONFIG_MTD_MW320D=m
+ CONFIG_MTD_RAM=y
+ CONFIG_MTD_ROM=m
+ # CONFIG_MTD_ABSENT is not set
+@@ -506,9 +509,6 @@ CONFIG_MTD_ROM=m
+ #
+ CONFIG_MTD_COMPLEX_MAPPINGS=y
+ # CONFIG_MTD_PHYSMAP is not set
+-CONFIG_MTD_BF5xx=m
+-CONFIG_BFIN_FLASH_SIZE=0x400000
+-CONFIG_EBIU_FLASH_BASE=0x20000000
+ # CONFIG_MTD_UCLINUX is not set
+ # CONFIG_MTD_PLATRAM is not set
+ 
+@@ -684,7 +684,6 @@ CONFIG_INPUT_MISC=y
+ # CONFIG_INPUT_POWERMATE is not set
+ # CONFIG_INPUT_YEALINK is not set
+ # CONFIG_INPUT_UINPUT is not set
+-# CONFIG_BF53X_PFBUTTONS is not set
+ # CONFIG_TWI_KEYPAD is not set
+ 
+ #
+@@ -702,12 +701,12 @@ CONFIG_INPUT_MISC=y
+ # CONFIG_BF5xx_PPIFCD is not set
+ # CONFIG_BFIN_SIMPLE_TIMER is not set
+ # CONFIG_BF5xx_PPI is not set
++CONFIG_BFIN_OTP=y
++# CONFIG_BFIN_OTP_WRITE_ENABLE is not set
+ # CONFIG_BFIN_SPORT is not set
+ # CONFIG_BFIN_TIMER_LATENCY is not set
+ # CONFIG_TWI_LCD is not set
+ # CONFIG_AD5304 is not set
+-# CONFIG_BF5xx_TEA5764 is not set
+-# CONFIG_BF5xx_FBDMA is not set
+ # CONFIG_VT is not set
+ # CONFIG_SERIAL_NONSTANDARD is not set
+ 
+@@ -772,7 +771,6 @@ CONFIG_I2C_CHARDEV=m
+ #
+ # I2C Hardware Bus support
+ #
+-# CONFIG_I2C_BLACKFIN_GPIO is not set
+ CONFIG_I2C_BLACKFIN_TWI=m
+ CONFIG_I2C_BLACKFIN_TWI_CLK_KHZ=50
+ # CONFIG_I2C_GPIO is not set
+diff --git a/arch/blackfin/configs/BF533-EZKIT_defconfig b/arch/blackfin/configs/BF533-EZKIT_defconfig
+index 811711f..9621caa 100644
+--- a/arch/blackfin/configs/BF533-EZKIT_defconfig
++++ b/arch/blackfin/configs/BF533-EZKIT_defconfig
+@@ -322,10 +322,9 @@ CONFIG_PM=y
+ # CONFIG_PM_LEGACY is not set
+ # CONFIG_PM_DEBUG is not set
+ # CONFIG_PM_SYSFS_DEPRECATED is not set
+-CONFIG_PM_WAKEUP_GPIO_BY_SIC_IWR=y
++CONFIG_PM_BFIN_SLEEP_DEEPER=y
++# CONFIG_PM_BFIN_SLEEP is not set
+ # CONFIG_PM_WAKEUP_BY_GPIO is not set
+-# CONFIG_PM_WAKEUP_GPIO_API is not set
+-CONFIG_PM_WAKEUP_SIC_IWR=0x80
+ 
+ #
+ # CPU Frequency scaling
+@@ -697,7 +696,6 @@ CONFIG_SERIAL_BFIN_DMA=y
+ # CONFIG_SERIAL_BFIN_PIO is not set
+ CONFIG_SERIAL_BFIN_UART0=y
+ # CONFIG_BFIN_UART0_CTSRTS is not set
+-# CONFIG_SERIAL_BFIN_UART1 is not set
+ CONFIG_SERIAL_CORE=y
+ CONFIG_SERIAL_CORE_CONSOLE=y
+ # CONFIG_SERIAL_BFIN_SPORT is not set
+diff --git a/arch/blackfin/configs/BF533-STAMP_defconfig b/arch/blackfin/configs/BF533-STAMP_defconfig
+index 198f412..b51e76c 100644
+--- a/arch/blackfin/configs/BF533-STAMP_defconfig
++++ b/arch/blackfin/configs/BF533-STAMP_defconfig
+@@ -323,10 +323,9 @@ CONFIG_PM=y
+ # CONFIG_PM_LEGACY is not set
+ # CONFIG_PM_DEBUG is not set
+ # CONFIG_PM_SYSFS_DEPRECATED is not set
+-CONFIG_PM_WAKEUP_GPIO_BY_SIC_IWR=y
++CONFIG_PM_BFIN_SLEEP_DEEPER=y
++# CONFIG_PM_BFIN_SLEEP is not set
+ # CONFIG_PM_WAKEUP_BY_GPIO is not set
+-# CONFIG_PM_WAKEUP_GPIO_API is not set
+-CONFIG_PM_WAKEUP_SIC_IWR=0x80
+ 
+ #
+ # CPU Frequency scaling
+@@ -714,7 +713,6 @@ CONFIG_SERIAL_BFIN_DMA=y
+ # CONFIG_SERIAL_BFIN_PIO is not set
+ CONFIG_SERIAL_BFIN_UART0=y
+ # CONFIG_BFIN_UART0_CTSRTS is not set
+-# CONFIG_SERIAL_BFIN_UART1 is not set
+ CONFIG_SERIAL_CORE=y
+ CONFIG_SERIAL_CORE_CONSOLE=y
+ # CONFIG_SERIAL_BFIN_SPORT is not set
+diff --git a/arch/blackfin/configs/BF537-STAMP_defconfig b/arch/blackfin/configs/BF537-STAMP_defconfig
+index b37ccc6..d45fa53 100644
+--- a/arch/blackfin/configs/BF537-STAMP_defconfig
++++ b/arch/blackfin/configs/BF537-STAMP_defconfig
+@@ -330,10 +330,9 @@ CONFIG_PM=y
+ # CONFIG_PM_LEGACY is not set
+ # CONFIG_PM_DEBUG is not set
+ # CONFIG_PM_SYSFS_DEPRECATED is not set
+-CONFIG_PM_WAKEUP_GPIO_BY_SIC_IWR=y
++CONFIG_PM_BFIN_SLEEP_DEEPER=y
++# CONFIG_PM_BFIN_SLEEP is not set
+ # CONFIG_PM_WAKEUP_BY_GPIO is not set
+-# CONFIG_PM_WAKEUP_GPIO_API is not set
+-CONFIG_PM_WAKEUP_SIC_IWR=0x8
+ 
+ #
+ # CPU Frequency scaling
+@@ -1013,6 +1012,7 @@ CONFIG_SND_BFIN_AD73311_SE=4
+ CONFIG_SND_SOC_AC97_BUS=y
+ CONFIG_SND_SOC=m
+ CONFIG_SND_BF5XX_SOC=m
++CONFIG_SND_MMAP_SUPPORT=y
+ CONFIG_SND_BF5XX_SOC_AC97=m
+ # CONFIG_SND_BF5XX_SOC_WM8750 is not set
+ # CONFIG_SND_BF5XX_SOC_WM8731 is not set
+diff --git a/arch/blackfin/configs/BF548-EZKIT_defconfig b/arch/blackfin/configs/BF548-EZKIT_defconfig
+index fd70216..c9707f7 100644
+--- a/arch/blackfin/configs/BF548-EZKIT_defconfig
++++ b/arch/blackfin/configs/BF548-EZKIT_defconfig
+@@ -396,6 +396,7 @@ CONFIG_BINFMT_ZFLAT=y
+ # Power management options
+ #
+ # CONFIG_PM is not set
++# CONFIG_PM_WAKEUP_BY_GPIO is not set
+ 
+ #
+ # CPU Frequency scaling
+@@ -1075,6 +1076,7 @@ CONFIG_SND_VERBOSE_PROCFS=y
+ CONFIG_SND_SOC_AC97_BUS=y
+ CONFIG_SND_SOC=y
+ CONFIG_SND_BF5XX_SOC=y
++CONFIG_SND_MMAP_SUPPORT=y
+ CONFIG_SND_BF5XX_SOC_AC97=y
+ CONFIG_SND_BF5XX_SOC_BF548_EZKIT=y
+ # CONFIG_SND_BF5XX_SOC_WM8750 is not set
+diff --git a/arch/blackfin/configs/BF561-EZKIT_defconfig b/arch/blackfin/configs/BF561-EZKIT_defconfig
+index 8546994..4d8a633 100644
+--- a/arch/blackfin/configs/BF561-EZKIT_defconfig
++++ b/arch/blackfin/configs/BF561-EZKIT_defconfig
+@@ -367,6 +367,7 @@ CONFIG_BINFMT_ZFLAT=y
+ # Power management options
+ #
+ # CONFIG_PM is not set
++# CONFIG_PM_WAKEUP_BY_GPIO is not set
+ 
+ #
+ # Networking
+diff --git a/arch/blackfin/kernel/bfin_dma_5xx.c b/arch/blackfin/kernel/bfin_dma_5xx.c
+index 5453bc3..8fd5d22 100644
+--- a/arch/blackfin/kernel/bfin_dma_5xx.c
++++ b/arch/blackfin/kernel/bfin_dma_5xx.c
+@@ -105,13 +105,14 @@ int request_dma(unsigned int channel, char *device_id)
+ 	mutex_unlock(&(dma_ch[channel].dmalock));
+ 
+ #ifdef CONFIG_BF54x
+-	if (channel >= CH_UART2_RX && channel <= CH_UART3_TX &&
+-		strncmp(device_id, "BFIN_UART", 9) == 0)
+-		dma_ch[channel].regs->peripheral_map |=
+-			(channel - CH_UART2_RX + 0xC);
+-	else
+-		dma_ch[channel].regs->peripheral_map |=
+-			(channel - CH_UART2_RX + 0x6);
++	if (channel >= CH_UART2_RX && channel <= CH_UART3_TX) {
++		if (strncmp(device_id, "BFIN_UART", 9) == 0)
++			dma_ch[channel].regs->peripheral_map |=
++				(channel - CH_UART2_RX + 0xC);
++		else
++			dma_ch[channel].regs->peripheral_map |=
++				(channel - CH_UART2_RX + 0x6);
++	}
+ #endif
+ 
+ 	dma_ch[channel].device_id = device_id;
+diff --git a/arch/blackfin/kernel/gptimers.c b/arch/blackfin/kernel/gptimers.c
+index 5cf4bdb..1904d8b 100644
+--- a/arch/blackfin/kernel/gptimers.c
++++ b/arch/blackfin/kernel/gptimers.c
+@@ -1,9 +1,9 @@
+ /*
+- * bfin_gptimers.c - derived from bf53x_timers.c
+- *  Driver for General Purpose Timer functions on the Blackfin processor
++ * gptimers.c - Blackfin General Purpose Timer core API
+  *
+- *  Copyright (C) 2005 John DeHority
+- *  Copyright (C) 2006 Hella Aglaia GmbH (awe at aglaia-gmbh.de)
++ * Copyright (c) 2005-2008 Analog Devices Inc.
++ * Copyright (C) 2005 John DeHority
++ * Copyright (C) 2006 Hella Aglaia GmbH (awe at aglaia-gmbh.de)
+  *
+  * Licensed under the GPLv2.
+  */
+diff --git a/arch/blackfin/kernel/setup.c b/arch/blackfin/kernel/setup.c
+index 8229b10..2255c28 100644
+--- a/arch/blackfin/kernel/setup.c
++++ b/arch/blackfin/kernel/setup.c
+@@ -32,6 +32,7 @@
+ static DEFINE_PER_CPU(struct cpu, cpu_devices);
+ 
+ u16 _bfin_swrst;
++EXPORT_SYMBOL(_bfin_swrst);
+ 
+ unsigned long memory_start, memory_end, physical_mem_end;
+ unsigned long reserved_mem_dcache_on;
+@@ -514,6 +515,7 @@ static __init void  memory_setup(void)
+ 	printk(KERN_INFO "Kernel Managed Memory: %ldMB\n", _ramend >> 20);
+ 
+ 	printk(KERN_INFO "Memory map:\n"
++		KERN_INFO "  fixedcode = 0x%p-0x%p\n"
+ 		KERN_INFO "  text      = 0x%p-0x%p\n"
+ 		KERN_INFO "  rodata    = 0x%p-0x%p\n"
+ 		KERN_INFO "  bss       = 0x%p-0x%p\n"
+@@ -527,7 +529,8 @@ static __init void  memory_setup(void)
+ #if DMA_UNCACHED_REGION > 0
+ 		KERN_INFO "  DMA Zone  = 0x%p-0x%p\n"
+ #endif
+-		, _stext, _etext,
++		, (void *)FIXED_CODE_START, (void *)FIXED_CODE_END,
++		_stext, _etext,
+ 		__start_rodata, __end_rodata,
+ 		__bss_start, __bss_stop,
+ 		_sdata, _edata,
+diff --git a/arch/blackfin/kernel/vmlinux.lds.S b/arch/blackfin/kernel/vmlinux.lds.S
+index aed8325..cb01a9d 100644
+--- a/arch/blackfin/kernel/vmlinux.lds.S
++++ b/arch/blackfin/kernel/vmlinux.lds.S
+@@ -147,44 +147,64 @@ SECTIONS
+ 
+ 	__l1_lma_start = .;
+ 
++#if L1_CODE_LENGTH
++# define LDS_L1_CODE *(.l1.text)
++#else
++# define LDS_L1_CODE
++#endif
+ 	.text_l1 L1_CODE_START : AT(LOADADDR(.init.ramfs) + SIZEOF(.init.ramfs))
+ 	{
+ 		. = ALIGN(4);
+ 		__stext_l1 = .;
+-		*(.l1.text)
+-
++		LDS_L1_CODE
+ 		. = ALIGN(4);
+ 		__etext_l1 = .;
+ 	}
+ 
++#if L1_DATA_A_LENGTH
++# define LDS_L1_A_DATA  *(.l1.data)
++# define LDS_L1_A_BSS   *(.l1.bss)
++# define LDS_L1_A_CACHE *(.data_l1.cacheline_aligned)
++#else
++# define LDS_L1_A_DATA
++# define LDS_L1_A_BSS
++# define LDS_L1_A_CACHE
++#endif
+ 	.data_l1 L1_DATA_A_START : AT(LOADADDR(.text_l1) + SIZEOF(.text_l1))
+ 	{
+ 		. = ALIGN(4);
+ 		__sdata_l1 = .;
+-		*(.l1.data)
++		LDS_L1_A_DATA
+ 		__edata_l1 = .;
+ 
+ 		. = ALIGN(4);
+ 		__sbss_l1 = .;
+-		*(.l1.bss)
++		LDS_L1_A_BSS
+ 
+ 		. = ALIGN(32);
+-		*(.data_l1.cacheline_aligned)
++		LDS_L1_A_CACHE
+ 
+ 		. = ALIGN(4);
+ 		__ebss_l1 = .;
+ 	}
+ 
++#if L1_DATA_B_LENGTH
++# define LDS_L1_B_DATA  *(.l1.data.B)
++# define LDS_L1_B_BSS   *(.l1.bss.B)
++#else
++# define LDS_L1_B_DATA
++# define LDS_L1_B_BSS
++#endif
+ 	.data_b_l1 L1_DATA_B_START : AT(LOADADDR(.data_l1) + SIZEOF(.data_l1))
+ 	{
+ 		. = ALIGN(4);
+ 		__sdata_b_l1 = .;
+-		*(.l1.data.B)
++		LDS_L1_B_DATA
+ 		__edata_b_l1 = .;
+ 
+ 		. = ALIGN(4);
+ 		__sbss_b_l1 = .;
+-		*(.l1.bss.B)
++		LDS_L1_B_BSS
+ 
+ 		. = ALIGN(4);
+ 		__ebss_b_l1 = .;
+diff --git a/arch/blackfin/mach-bf527/boards/ezkit.c b/arch/blackfin/mach-bf527/boards/ezkit.c
+index 337515f..cf4bc0d 100644
+--- a/arch/blackfin/mach-bf527/boards/ezkit.c
++++ b/arch/blackfin/mach-bf527/boards/ezkit.c
+@@ -180,8 +180,8 @@ static struct mtd_partition partition_info[] = {
+ 	},
+ 	{
+ 		.name = "File System",
+-		.offset = 4 * SIZE_1M,
+-		.size = (256 - 4) * SIZE_1M,
++		.offset = MTDPART_OFS_APPEND,
++		.size = MTDPART_SIZ_FULL,
+ 	},
+ };
+ 
+@@ -422,11 +422,11 @@ static struct mtd_partition bfin_spi_flash_partitions[] = {
+ 	}, {
+ 		.name = "kernel",
+ 		.size = 0xe0000,
+-		.offset = 0x20000
++		.offset = MTDPART_OFS_APPEND,
+ 	}, {
+ 		.name = "file system",
+-		.size = 0x700000,
+-		.offset = 0x00100000,
++		.size = MTDPART_SIZ_FULL,
++		.offset = MTDPART_OFS_APPEND,
+ 	}
+ };
+ 
+@@ -484,13 +484,6 @@ static struct bfin5xx_spi_chip spi_si3xxx_chip_info = {
+ };
+ #endif
+ 
+-#if defined(CONFIG_AD5304) || defined(CONFIG_AD5304_MODULE)
+-static struct bfin5xx_spi_chip ad5304_chip_info = {
+-	.enable_dma = 0,
+-	.bits_per_word = 16,
+-};
+-#endif
+-
+ #if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
+ static struct bfin5xx_spi_chip spi_ad7877_chip_info = {
+ 	.enable_dma = 0,
+@@ -611,17 +604,6 @@ static struct spi_board_info bfin_spi_board_info[] __initdata = {
+ 		.mode = SPI_MODE_3,
+ 	},
+ #endif
+-#if defined(CONFIG_AD5304) || defined(CONFIG_AD5304_MODULE)
+-	{
+-		.modalias = "ad5304_spi",
+-		.max_speed_hz = 1250000,     /* max spi clock (SCK) speed in HZ */
+-		.bus_num = 0,
+-		.chip_select = 2,
+-		.platform_data = NULL,
+-		.controller_data = &ad5304_chip_info,
+-		.mode = SPI_MODE_2,
+-	},
+-#endif
+ #if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
+ 	{
+ 		.modalias		= "ad7877",
+@@ -818,6 +800,19 @@ static struct platform_device bfin_device_gpiokeys = {
+ };
+ #endif
+ 
++static struct resource bfin_gpios_resources = {
++	.start = 0,
++	.end   = MAX_BLACKFIN_GPIOS - 1,
++	.flags = IORESOURCE_IRQ,
++};
++
++static struct platform_device bfin_gpios_device = {
++	.name = "simple-gpio",
++	.id = -1,
++	.num_resources = 1,
++	.resource = &bfin_gpios_resources,
++};
++
+ static struct platform_device *stamp_devices[] __initdata = {
+ #if defined(CONFIG_MTD_NAND_BF5XX) || defined(CONFIG_MTD_NAND_BF5XX_MODULE)
+ 	&bf5xx_nand_device,
+@@ -895,6 +890,8 @@ static struct platform_device *stamp_devices[] __initdata = {
+ #if defined(CONFIG_KEYBOARD_GPIO) || defined(CONFIG_KEYBOARD_GPIO_MODULE)
+ 	&bfin_device_gpiokeys,
+ #endif
++
++	&bfin_gpios_device,
+ };
+ 
+ static int __init stamp_init(void)
+@@ -921,13 +918,18 @@ void native_machine_restart(char *cmd)
+ 		bfin_gpio_reset_spi0_ssel1();
+ }
+ 
+-/*
+- * Currently the MAC address is saved in Flash by U-Boot
+- */
+-#define FLASH_MAC	0x203f0000
+ void bfin_get_ether_addr(char *addr)
+ {
+-	*(u32 *)(&(addr[0])) = bfin_read32(FLASH_MAC);
+-	*(u16 *)(&(addr[4])) = bfin_read16(FLASH_MAC + 4);
++	/* the MAC is stored in OTP memory page 0xDF */
++	u32 ret;
++	u64 otp_mac;
++	u32 (*otp_read)(u32 page, u32 flags, u64 *page_content) = (void *)0xEF00001A;
++
++	ret = otp_read(0xDF, 0x00, &otp_mac);
++	if (!(ret & 0x1)) {
++		char *otp_mac_p = (char *)&otp_mac;
++		for (ret = 0; ret < 6; ++ret)
++			addr[ret] = otp_mac_p[5 - ret];
++	}
+ }
+ EXPORT_SYMBOL(bfin_get_ether_addr);
+diff --git a/arch/blackfin/mach-bf533/boards/ezkit.c b/arch/blackfin/mach-bf533/boards/ezkit.c
+index 2b09aa3..241b5a2 100644
+--- a/arch/blackfin/mach-bf533/boards/ezkit.c
++++ b/arch/blackfin/mach-bf533/boards/ezkit.c
+@@ -99,11 +99,11 @@ static struct mtd_partition bfin_spi_flash_partitions[] = {
+ 	}, {
+ 		.name = "kernel",
+ 		.size = 0xe0000,
+-		.offset = 0x20000
++		.offset = MTDPART_OFS_APPEND,
+ 	}, {
+ 		.name = "file system",
+-		.size = 0x700000,
+-		.offset = 0x00100000,
++		.size = MTDPART_SIZ_FULL,
++		.offset = MTDPART_OFS_APPEND,
+ 	}
+ };
+ 
+@@ -298,6 +298,19 @@ static struct platform_device bfin_device_gpiokeys = {
+ };
+ #endif
+ 
++static struct resource bfin_gpios_resources = {
++	.start = 0,
++	.end   = MAX_BLACKFIN_GPIOS - 1,
++	.flags = IORESOURCE_IRQ,
++};
++
++static struct platform_device bfin_gpios_device = {
++	.name = "simple-gpio",
++	.id = -1,
++	.num_resources = 1,
++	.resource = &bfin_gpios_resources,
++};
++
+ #if defined(CONFIG_I2C_GPIO) || defined(CONFIG_I2C_GPIO_MODULE)
+ #include <linux/i2c-gpio.h>
+ 
+@@ -350,6 +363,8 @@ static struct platform_device *ezkit_devices[] __initdata = {
+ #if defined(CONFIG_I2C_GPIO) || defined(CONFIG_I2C_GPIO_MODULE)
+ 	&i2c_gpio_device,
+ #endif
++
++	&bfin_gpios_device,
+ };
+ 
+ static int __init ezkit_init(void)
+diff --git a/arch/blackfin/mach-bf533/boards/stamp.c b/arch/blackfin/mach-bf533/boards/stamp.c
+index a645f6f..b2ac481 100644
+--- a/arch/blackfin/mach-bf533/boards/stamp.c
++++ b/arch/blackfin/mach-bf533/boards/stamp.c
+@@ -112,7 +112,7 @@ static struct platform_device net2272_bfin_device = {
+ static struct mtd_partition stamp_partitions[] = {
+ 	{
+ 		.name   = "Bootloader",
+-		.size   = 0x20000,
++		.size   = 0x40000,
+ 		.offset = 0,
+ 	}, {
+ 		.name   = "Kernel",
+@@ -160,17 +160,17 @@ static struct platform_device stamp_flash_device = {
+ static struct mtd_partition bfin_spi_flash_partitions[] = {
+ 	{
+ 		.name = "bootloader",
+-		.size = 0x00020000,
++		.size = 0x00040000,
+ 		.offset = 0,
+ 		.mask_flags = MTD_CAP_ROM
+ 	}, {
+ 		.name = "kernel",
+ 		.size = 0xe0000,
+-		.offset = 0x20000
++		.offset = MTDPART_OFS_APPEND,
+ 	}, {
+ 		.name = "file system",
+-		.size = 0x700000,
+-		.offset = 0x00100000,
++		.size = MTDPART_SIZ_FULL,
++		.offset = MTDPART_OFS_APPEND,
+ 	}
+ };
+ 
+@@ -212,13 +212,6 @@ static struct bfin5xx_spi_chip spi_si3xxx_chip_info = {
+ };
+ #endif
+ 
+-#if defined(CONFIG_AD5304) || defined(CONFIG_AD5304_MODULE)
+-static struct bfin5xx_spi_chip ad5304_chip_info = {
+-	.enable_dma = 0,
+-	.bits_per_word = 16,
+-};
+-#endif
+-
+ #if defined(CONFIG_SPI_MMC) || defined(CONFIG_SPI_MMC_MODULE)
+ static struct bfin5xx_spi_chip spi_mmc_chip_info = {
+ 	.enable_dma = 1,
+@@ -308,17 +301,6 @@ static struct spi_board_info bfin_spi_board_info[] __initdata = {
+ 	},
+ #endif
+ 
+-#if defined(CONFIG_AD5304) || defined(CONFIG_AD5304_MODULE)
+-	{
+-		.modalias = "ad5304_spi",
+-		.max_speed_hz = 1000000,     /* max spi clock (SCK) speed in HZ */
+-		.bus_num = 0,
+-		.chip_select = 2,
+-		.platform_data = NULL,
+-		.controller_data = &ad5304_chip_info,
+-		.mode = SPI_MODE_2,
+-	},
+-#endif
+ #if defined(CONFIG_SPI_SPIDEV) || defined(CONFIG_SPI_SPIDEV_MODULE)
+ 	{
+ 		.modalias = "spidev",
+@@ -457,6 +439,19 @@ static struct platform_device bfin_device_gpiokeys = {
+ };
+ #endif
+ 
++static struct resource bfin_gpios_resources = {
++	.start = 0,
++	.end   = MAX_BLACKFIN_GPIOS - 1,
++	.flags = IORESOURCE_IRQ,
++};
++
++static struct platform_device bfin_gpios_device = {
++	.name = "simple-gpio",
++	.id = -1,
++	.num_resources = 1,
++	.resource = &bfin_gpios_resources,
++};
++
+ #if defined(CONFIG_I2C_GPIO) || defined(CONFIG_I2C_GPIO_MODULE)
+ #include <linux/i2c-gpio.h>
+ 
+@@ -518,6 +513,8 @@ static struct platform_device *stamp_devices[] __initdata = {
+ #if defined(CONFIG_I2C_GPIO) || defined(CONFIG_I2C_GPIO_MODULE)
+ 	&i2c_gpio_device,
+ #endif
++
++	&bfin_gpios_device,
+ 	&stamp_flash_device,
+ };
+ 
+diff --git a/arch/blackfin/mach-bf537/boards/generic_board.c b/arch/blackfin/mach-bf537/boards/generic_board.c
+index 8a3397d..c95395b 100644
+--- a/arch/blackfin/mach-bf537/boards/generic_board.c
++++ b/arch/blackfin/mach-bf537/boards/generic_board.c
+@@ -371,13 +371,6 @@ static struct bfin5xx_spi_chip spi_si3xxx_chip_info = {
+ };
+ #endif
+ 
+-#if defined(CONFIG_AD5304) || defined(CONFIG_AD5304_MODULE)
+-static struct bfin5xx_spi_chip ad5304_chip_info = {
+-	.enable_dma = 0,
+-	.bits_per_word = 16,
+-};
+-#endif
+-
+ #if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
+ static struct bfin5xx_spi_chip spi_ad7877_chip_info = {
+ 	.enable_dma = 0,
+@@ -483,17 +476,6 @@ static struct spi_board_info bfin_spi_board_info[] __initdata = {
+ 		.mode = SPI_MODE_3,
+ 	},
+ #endif
+-#if defined(CONFIG_AD5304) || defined(CONFIG_AD5304_MODULE)
+-	{
+-		.modalias = "ad5304_spi",
+-		.max_speed_hz = 1250000,     /* max spi clock (SCK) speed in HZ */
+-		.bus_num = 0,
+-		.chip_select = 2,
+-		.platform_data = NULL,
+-		.controller_data = &ad5304_chip_info,
+-		.mode = SPI_MODE_2,
+-	},
+-#endif
+ #if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
+ 	{
+ 		.modalias		= "ad7877",
+diff --git a/arch/blackfin/mach-bf537/boards/stamp.c b/arch/blackfin/mach-bf537/boards/stamp.c
+index 9e2277e..ea83148 100644
+--- a/arch/blackfin/mach-bf537/boards/stamp.c
++++ b/arch/blackfin/mach-bf537/boards/stamp.c
+@@ -128,6 +128,19 @@ static struct platform_device bfin_device_gpiokeys = {
+ };
+ #endif
+ 
++static struct resource bfin_gpios_resources = {
++	.start = 0,
++	.end   = MAX_BLACKFIN_GPIOS - 1,
++	.flags = IORESOURCE_IRQ,
++};
++
++static struct platform_device bfin_gpios_device = {
++	.name = "simple-gpio",
++	.id = -1,
++	.num_resources = 1,
++	.resource = &bfin_gpios_resources,
++};
++
+ #if defined(CONFIG_BFIN_CFPCMCIA) || defined(CONFIG_BFIN_CFPCMCIA_MODULE)
+ static struct resource bfin_pcmcia_cf_resources[] = {
+ 	{
+@@ -343,7 +356,7 @@ static struct platform_device net2272_bfin_device = {
+ static struct mtd_partition stamp_partitions[] = {
+ 	{
+ 		.name       = "Bootloader",
+-		.size       = 0x20000,
++		.size       = 0x40000,
+ 		.offset     = 0,
+ 	}, {
+ 		.name       = "Kernel",
+@@ -351,7 +364,7 @@ static struct mtd_partition stamp_partitions[] = {
+ 		.offset     = MTDPART_OFS_APPEND,
+ 	}, {
+ 		.name       = "RootFS",
+-		.size       = 0x400000 - 0x20000 - 0xE0000 - 0x10000,
++		.size       = 0x400000 - 0x40000 - 0xE0000 - 0x10000,
+ 		.offset     = MTDPART_OFS_APPEND,
+ 	}, {
+ 		.name       = "MAC Address",
+@@ -391,17 +404,17 @@ static struct platform_device stamp_flash_device = {
+ static struct mtd_partition bfin_spi_flash_partitions[] = {
+ 	{
+ 		.name = "bootloader",
+-		.size = 0x00020000,
++		.size = 0x00040000,
+ 		.offset = 0,
+ 		.mask_flags = MTD_CAP_ROM
+ 	}, {
+ 		.name = "kernel",
+ 		.size = 0xe0000,
+-		.offset = 0x20000
++		.offset = MTDPART_OFS_APPEND,
+ 	}, {
+ 		.name = "file system",
+-		.size = 0x700000,
+-		.offset = 0x00100000,
++		.size = MTDPART_SIZ_FULL,
++		.offset = MTDPART_OFS_APPEND,
+ 	}
+ };
+ 
+@@ -459,13 +472,6 @@ static struct bfin5xx_spi_chip spi_si3xxx_chip_info = {
+ };
+ #endif
+ 
+-#if defined(CONFIG_AD5304) || defined(CONFIG_AD5304_MODULE)
+-static struct bfin5xx_spi_chip ad5304_chip_info = {
+-	.enable_dma = 0,
+-	.bits_per_word = 16,
+-};
+-#endif
+-
+ #if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
+ static struct bfin5xx_spi_chip spi_ad7877_chip_info = {
+ 	.enable_dma = 0,
+@@ -578,17 +584,6 @@ static struct spi_board_info bfin_spi_board_info[] __initdata = {
+ 		.mode = SPI_MODE_3,
+ 	},
+ #endif
+-#if defined(CONFIG_AD5304) || defined(CONFIG_AD5304_MODULE)
+-	{
+-		.modalias = "ad5304_spi",
+-		.max_speed_hz = 1250000,     /* max spi clock (SCK) speed in HZ */
+-		.bus_num = 0,
+-		.chip_select = 2,
+-		.platform_data = NULL,
+-		.controller_data = &ad5304_chip_info,
+-		.mode = SPI_MODE_2,
+-	},
+-#endif
+ #if defined(CONFIG_TOUCHSCREEN_AD7877) || defined(CONFIG_TOUCHSCREEN_AD7877_MODULE)
+ 	{
+ 		.modalias		= "ad7877",
+@@ -821,6 +816,8 @@ static struct platform_device *stamp_devices[] __initdata = {
+ #if defined(CONFIG_KEYBOARD_GPIO) || defined(CONFIG_KEYBOARD_GPIO_MODULE)
+ 	&bfin_device_gpiokeys,
+ #endif
++
++	&bfin_gpios_device,
+ 	&stamp_flash_device,
+ };
+ 
+diff --git a/arch/blackfin/mach-bf548/boards/ezkit.c b/arch/blackfin/mach-bf548/boards/ezkit.c
+index 916e963..a0950c1 100644
+--- a/arch/blackfin/mach-bf548/boards/ezkit.c
++++ b/arch/blackfin/mach-bf548/boards/ezkit.c
+@@ -285,8 +285,8 @@ static struct mtd_partition partition_info[] = {
+ 	},
+ 	{
+ 		.name = "File System",
+-		.offset = 4 * SIZE_1M,
+-		.size = (256 - 4) * SIZE_1M,
++		.offset = MTDPART_OFS_APPEND,
++		.size = MTDPART_SIZ_FULL,
+ 	},
+ };
+ 
+@@ -333,7 +333,7 @@ static struct platform_device bf54x_sdh_device = {
+ static struct mtd_partition ezkit_partitions[] = {
+ 	{
+ 		.name       = "Bootloader",
+-		.size       = 0x20000,
++		.size       = 0x40000,
+ 		.offset     = 0,
+ 	}, {
+ 		.name       = "Kernel",
+@@ -381,8 +381,8 @@ static struct mtd_partition bfin_spi_flash_partitions[] = {
+ 		.mask_flags = MTD_CAP_ROM
+ 	}, {
+ 		.name = "linux kernel",
+-		.size = 0x1c0000,
+-		.offset = 0x40000
++		.size = MTDPART_SIZ_FULL,
++		.offset = MTDPART_OFS_APPEND,
+ 	}
+ };
+ 
+@@ -594,6 +594,19 @@ static struct platform_device bfin_device_gpiokeys = {
+ };
+ #endif
+ 
++static struct resource bfin_gpios_resources = {
++	.start = 0,
++	.end   = MAX_BLACKFIN_GPIOS - 1,
++	.flags = IORESOURCE_IRQ,
++};
++
++static struct platform_device bfin_gpios_device = {
++	.name = "simple-gpio",
++	.id = -1,
++	.num_resources = 1,
++	.resource = &bfin_gpios_resources,
++};
++
+ static struct platform_device *ezkit_devices[] __initdata = {
+ #if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
+ 	&rtc_device,
+@@ -646,6 +659,8 @@ static struct platform_device *ezkit_devices[] __initdata = {
+ #if defined(CONFIG_KEYBOARD_GPIO) || defined(CONFIG_KEYBOARD_GPIO_MODULE)
+ 	&bfin_device_gpiokeys,
+ #endif
++
++	&bfin_gpios_device,
+ 	&ezkit_flash_device,
+ };
+ 
+diff --git a/arch/blackfin/mach-bf548/dma.c b/arch/blackfin/mach-bf548/dma.c
+index 374803a..f547929 100644
+--- a/arch/blackfin/mach-bf548/dma.c
++++ b/arch/blackfin/mach-bf548/dma.c
+@@ -27,6 +27,8 @@
+  * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+  */
+ 
++#include <linux/module.h>
++
+ #include <asm/blackfin.h>
+ #include <asm/dma.h>
+ 
+diff --git a/arch/blackfin/mach-bf548/head.S b/arch/blackfin/mach-bf548/head.S
+index 74fe258..46222a7 100644
+--- a/arch/blackfin/mach-bf548/head.S
++++ b/arch/blackfin/mach-bf548/head.S
+@@ -28,6 +28,7 @@
+  */
+ 
+ #include <linux/linkage.h>
++#include <linux/init.h>
+ #include <asm/blackfin.h>
+ #include <asm/trace.h>
+ #if CONFIG_BFIN_KERNEL_CLOCK
+@@ -44,10 +45,9 @@
+ 
+ #define INITIAL_STACK   0xFFB01000
+ 
+-.text
++__INIT
+ 
+ ENTRY(__start)
+-ENTRY(__stext)
+ 	/* R0: argument of command line string, passed from uboot, save it */
+ 	R7 = R0;
+ 	/* Enable Cycle Counter and Nesting Of Interrupts */
+@@ -213,6 +213,7 @@ ENTRY(__stext)
+ 
+ .LWAIT_HERE:
+ 	jump .LWAIT_HERE;
++ENDPROC(__start)
+ 
+ ENTRY(_real_start)
+ 	[ -- sp ] = reti;
+@@ -285,6 +286,9 @@ ENTRY(_real_start)
+ 	call _start_kernel;
+ .L_exit:
+ 	jump.s	.L_exit;
++ENDPROC(_real_start)
++
++__FINIT
+ 
+ .section .l1.text
+ #if CONFIG_BFIN_KERNEL_CLOCK
+@@ -450,6 +454,7 @@ ENTRY(_start_dma_code)
+ 	SSYNC;
+ 
+ 	RTS;
++ENDPROC(_start_dma_code)
+ #endif /* CONFIG_BFIN_KERNEL_CLOCK */
+ 
+ .data
+diff --git a/arch/blackfin/mach-bf561/boards/ezkit.c b/arch/blackfin/mach-bf561/boards/ezkit.c
+index 43c1b09..d357f64 100644
+--- a/arch/blackfin/mach-bf561/boards/ezkit.c
++++ b/arch/blackfin/mach-bf561/boards/ezkit.c
+@@ -223,7 +223,7 @@ static struct platform_device bfin_uart_device = {
+ static struct mtd_partition ezkit_partitions[] = {
+ 	{
+ 		.name       = "Bootloader",
+-		.size       = 0x20000,
++		.size       = 0x40000,
+ 		.offset     = 0,
+ 	}, {
+ 		.name       = "Kernel",
+@@ -389,6 +389,19 @@ static struct platform_device bfin_device_gpiokeys = {
+ };
+ #endif
+ 
++static struct resource bfin_gpios_resources = {
++	.start = 0,
++	.end   = MAX_BLACKFIN_GPIOS - 1,
++	.flags = IORESOURCE_IRQ,
++};
++
++static struct platform_device bfin_gpios_device = {
++	.name = "simple-gpio",
++	.id = -1,
++	.num_resources = 1,
++	.resource = &bfin_gpios_resources,
++};
++
+ #if defined(CONFIG_I2C_GPIO) || defined(CONFIG_I2C_GPIO_MODULE)
+ #include <linux/i2c-gpio.h>
+ 
+@@ -446,6 +459,7 @@ static struct platform_device *ezkit_devices[] __initdata = {
+ 	&isp1362_hcd_device,
+ #endif
+ 
++	&bfin_gpios_device,
+ 	&ezkit_flash_device,
+ };
+ 
+diff --git a/arch/blackfin/mach-common/dpmc.S b/arch/blackfin/mach-common/dpmc.S
+index b80ddd8..9d45aa3 100644
+--- a/arch/blackfin/mach-common/dpmc.S
++++ b/arch/blackfin/mach-common/dpmc.S
+@@ -31,140 +31,6 @@
+ #include <asm/blackfin.h>
+ #include <asm/mach/irq.h>
+ 
+-.text
+-
+-ENTRY(_unmask_wdog_wakeup_evt)
+-	[--SP] = ( R7:0, P5:0 );
+-#if defined(CONFIG_BF561)
+-	P0.H = hi(SICA_IWR1);
+-	P0.L = lo(SICA_IWR1);
+-#elif defined(CONFIG_BF54x) || defined(CONFIG_BF52x)
+-	P0.h = HI(SIC_IWR0);
+-	P0.l = LO(SIC_IWR0);
+-#else
+-	P0.h = HI(SIC_IWR);
+-	P0.l = LO(SIC_IWR);
+-#endif
+-	R7 = [P0];
+-#if defined(CONFIG_BF561)
+-	BITSET(R7, 27);
+-#else
+-	BITSET(R7,(IRQ_WATCH - IVG7));
+-#endif
+-	[P0] = R7;
+-	SSYNC;
+-
+-	( R7:0, P5:0 ) = [SP++];
+-	RTS;
+-
+-.LWRITE_TO_STAT:
+-	/* When watch dog timer is enabled, a write to STAT will load the
+-	 * contents of CNT to STAT
+-	 */
+-	R7 = 0x0000(z);
+-#if defined(CONFIG_BF561)
+-	P0.h = HI(WDOGA_STAT);
+-	P0.l = LO(WDOGA_STAT);
+-#else
+-	P0.h = HI(WDOG_STAT);
+-	P0.l = LO(WDOG_STAT);
+-#endif
+-	[P0] = R7;
+-	SSYNC;
+-	JUMP .LSKIP_WRITE_TO_STAT;
+-
+-ENTRY(_program_wdog_timer)
+-	[--SP] = ( R7:0, P5:0 );
+-#if defined(CONFIG_BF561)
+-	P0.h = HI(WDOGA_CNT);
+-	P0.l = LO(WDOGA_CNT);
+-#else
+-	P0.h = HI(WDOG_CNT);
+-	P0.l = LO(WDOG_CNT);
+-#endif
+-	[P0] = R0;
+-	SSYNC;
+-
+-#if defined(CONFIG_BF561)
+-	P0.h = HI(WDOGA_CTL);
+-	P0.l = LO(WDOGA_CTL);
+-#else
+-	P0.h = HI(WDOG_CTL);
+-	P0.l = LO(WDOG_CTL);
+-#endif
+-	R7 = W[P0](Z);
+-	CC = BITTST(R7,1);
+-	if !CC JUMP .LWRITE_TO_STAT;
+-	CC = BITTST(R7,2);
+-	if !CC JUMP .LWRITE_TO_STAT;
+-
+-.LSKIP_WRITE_TO_STAT:
+-#if defined(CONFIG_BF561)
+-	P0.h = HI(WDOGA_CTL);
+-	P0.l = LO(WDOGA_CTL);
+-#else
+-	P0.h = HI(WDOG_CTL);
+-	P0.l = LO(WDOG_CTL);
+-#endif
+-	R7 = W[P0](Z);
+-	BITCLR(R7,1);   /* Enable GP event */
+-	BITSET(R7,2);
+-	W[P0] = R7.L;
+-	SSYNC;
+-	NOP;
+-
+-	R7 = W[P0](Z);
+-	BITCLR(R7,4);   /* Enable the wdog counter */
+-	W[P0] = R7.L;
+-	SSYNC;
+-
+-	( R7:0, P5:0 ) = [SP++];
+-	RTS;
+-
+-ENTRY(_clear_wdog_wakeup_evt)
+-	[--SP] = ( R7:0, P5:0 );
+-
+-#if defined(CONFIG_BF561)
+-	P0.h = HI(WDOGA_CTL);
+-	P0.l = LO(WDOGA_CTL);
+-#else
+-	P0.h = HI(WDOG_CTL);
+-	P0.l = LO(WDOG_CTL);
+-#endif
+-	R7 = 0x0AD6(Z);
+-	W[P0] = R7.L;
+-	SSYNC;
+-
+-	R7 = W[P0](Z);
+-	BITSET(R7,15);
+-	W[P0] = R7.L;
+-	SSYNC;
+-
+-	R7 = W[P0](Z);
+-	BITSET(R7,1);
+-	BITSET(R7,2);
+-	W[P0] = R7.L;
+-	SSYNC;
+-
+-	( R7:0, P5:0 ) = [SP++];
+-	RTS;
+-
+-ENTRY(_disable_wdog_timer)
+-	[--SP] = ( R7:0, P5:0 );
+-#if defined(CONFIG_BF561)
+-	P0.h = HI(WDOGA_CTL);
+-	P0.l = LO(WDOGA_CTL);
+-#else
+-	P0.h = HI(WDOG_CTL);
+-	P0.l = LO(WDOG_CTL);
+-#endif
+-	R7 = 0xAD6(Z);
+-	W[P0] = R7.L;
+-	SSYNC;
+-	( R7:0, P5:0 ) = [SP++];
+-	RTS;
+-
+-#if !defined(CONFIG_BF561)
+ 
+ .section .l1.text
+ 
+@@ -459,10 +325,12 @@ ENTRY(_set_sic_iwr)
+ 	RTS;
+ 
+ ENTRY(_set_rtc_istat)
++#ifndef CONFIG_BF561
+ 	P0.H = hi(RTC_ISTAT);
+ 	P0.L = lo(RTC_ISTAT);
+ 	w[P0] = R0.L;
+ 	SSYNC;
++#endif
+ 	RTS;
+ 
+ ENTRY(_test_pll_locked)
+@@ -473,4 +341,3 @@ ENTRY(_test_pll_locked)
+ 	CC = BITTST(R0,5);
+ 	IF !CC JUMP 1b;
+ 	RTS;
+-#endif
+diff --git a/arch/blackfin/mach-common/ints-priority.c b/arch/blackfin/mach-common/ints-priority.c
+index 880595a..225ef14 100644
+--- a/arch/blackfin/mach-common/ints-priority.c
++++ b/arch/blackfin/mach-common/ints-priority.c
+@@ -74,7 +74,7 @@ unsigned long bfin_sic_iwr[3];	/* Up to 3 SIC_IWRx registers */
+ #endif
+ 
+ struct ivgx {
+-	/* irq number for request_irq, available in mach-bf533/irq.h */
++	/* irq number for request_irq, available in mach-bf5xx/irq.h */
+ 	unsigned int irqno;
+ 	/* corresponding bit in the SIC_ISR register */
+ 	unsigned int isrflag;
+@@ -86,7 +86,6 @@ struct ivg_slice {
+ 	struct ivgx *istop;
+ } ivg7_13[IVG13 - IVG7 + 1];
+ 
+-static void search_IAR(void);
+ 
+ /*
+  * Search SIC_IAR and fill tables with the irqvalues
+@@ -120,10 +119,10 @@ static void __init search_IAR(void)
+ }
+ 
+ /*
+- * This is for BF533 internal IRQs
++ * This is for core internal IRQs
+  */
+ 
+-static void ack_noop(unsigned int irq)
++static void bfin_ack_noop(unsigned int irq)
+ {
+ 	/* Dummy function.  */
+ }
+@@ -156,11 +155,11 @@ static void bfin_internal_mask_irq(unsigned int irq)
+ {
+ #ifdef CONFIG_BF53x
+ 	bfin_write_SIC_IMASK(bfin_read_SIC_IMASK() &
+-			     ~(1 << (irq - (IRQ_CORETMR + 1))));
++			     ~(1 << SIC_SYSIRQ(irq)));
+ #else
+ 	unsigned mask_bank, mask_bit;
+-	mask_bank = (irq - (IRQ_CORETMR + 1)) / 32;
+-	mask_bit = (irq - (IRQ_CORETMR + 1)) % 32;
++	mask_bank = SIC_SYSIRQ(irq) / 32;
++	mask_bit = SIC_SYSIRQ(irq) % 32;
+ 	bfin_write_SIC_IMASK(mask_bank, bfin_read_SIC_IMASK(mask_bank) &
+ 			     ~(1 << mask_bit));
+ #endif
+@@ -171,11 +170,11 @@ static void bfin_internal_unmask_irq(unsigned int irq)
+ {
+ #ifdef CONFIG_BF53x
+ 	bfin_write_SIC_IMASK(bfin_read_SIC_IMASK() |
+-			     (1 << (irq - (IRQ_CORETMR + 1))));
++			     (1 << SIC_SYSIRQ(irq)));
+ #else
+ 	unsigned mask_bank, mask_bit;
+-	mask_bank = (irq - (IRQ_CORETMR + 1)) / 32;
+-	mask_bit = (irq - (IRQ_CORETMR + 1)) % 32;
++	mask_bank = SIC_SYSIRQ(irq) / 32;
++	mask_bit = SIC_SYSIRQ(irq) % 32;
+ 	bfin_write_SIC_IMASK(mask_bank, bfin_read_SIC_IMASK(mask_bank) |
+ 			     (1 << mask_bit));
+ #endif
+@@ -187,8 +186,8 @@ int bfin_internal_set_wake(unsigned int irq, unsigned int state)
+ {
+ 	unsigned bank, bit;
+ 	unsigned long flags;
+-	bank = (irq - (IRQ_CORETMR + 1)) / 32;
+-	bit = (irq - (IRQ_CORETMR + 1)) % 32;
++	bank = SIC_SYSIRQ(irq) / 32;
++	bit = SIC_SYSIRQ(irq) % 32;
+ 
+ 	local_irq_save(flags);
+ 
+@@ -204,15 +203,18 @@ int bfin_internal_set_wake(unsigned int irq, unsigned int state)
+ #endif
+ 
+ static struct irq_chip bfin_core_irqchip = {
+-	.ack = ack_noop,
++	.ack = bfin_ack_noop,
+ 	.mask = bfin_core_mask_irq,
+ 	.unmask = bfin_core_unmask_irq,
+ };
+ 
+ static struct irq_chip bfin_internal_irqchip = {
+-	.ack = ack_noop,
++	.ack = bfin_ack_noop,
+ 	.mask = bfin_internal_mask_irq,
+ 	.unmask = bfin_internal_unmask_irq,
++	.mask_ack = bfin_internal_mask_irq,
++	.disable = bfin_internal_mask_irq,
++	.enable = bfin_internal_unmask_irq,
+ #ifdef CONFIG_PM
+ 	.set_wake = bfin_internal_set_wake,
+ #endif
+@@ -221,38 +223,23 @@ static struct irq_chip bfin_internal_irqchip = {
+ #ifdef BF537_GENERIC_ERROR_INT_DEMUX
+ static int error_int_mask;
+ 
+-static void bfin_generic_error_ack_irq(unsigned int irq)
+-{
+-
+-}
+-
+ static void bfin_generic_error_mask_irq(unsigned int irq)
+ {
+ 	error_int_mask &= ~(1L << (irq - IRQ_PPI_ERROR));
+ 
+-	if (!error_int_mask) {
+-		local_irq_disable();
+-		bfin_write_SIC_IMASK(bfin_read_SIC_IMASK() &
+-				     ~(1 << (IRQ_GENERIC_ERROR -
+-					(IRQ_CORETMR + 1))));
+-		SSYNC();
+-		local_irq_enable();
+-	}
++	if (!error_int_mask)
++		bfin_internal_mask_irq(IRQ_GENERIC_ERROR);
+ }
+ 
+ static void bfin_generic_error_unmask_irq(unsigned int irq)
+ {
+-	local_irq_disable();
+-	bfin_write_SIC_IMASK(bfin_read_SIC_IMASK() | 1 <<
+-			     (IRQ_GENERIC_ERROR - (IRQ_CORETMR + 1)));
+-	SSYNC();
+-	local_irq_enable();
+-
++	bfin_internal_unmask_irq(IRQ_GENERIC_ERROR);
+ 	error_int_mask |= 1L << (irq - IRQ_PPI_ERROR);
+ }
+ 
+ static struct irq_chip bfin_generic_error_irqchip = {
+-	.ack = bfin_generic_error_ack_irq,
++	.ack = bfin_ack_noop,
++	.mask_ack = bfin_generic_error_mask_irq,
+ 	.mask = bfin_generic_error_mask_irq,
+ 	.unmask = bfin_generic_error_unmask_irq,
+ };
+@@ -608,7 +595,7 @@ static struct pin_int_t *pint[NR_PINT_SYS_IRQS] = {
+ 	(struct pin_int_t *)PINT3_MASK_SET,
+ };
+ 
+-unsigned short get_irq_base(u8 bank, u8 bmap)
++inline unsigned short get_irq_base(u8 bank, u8 bmap)
+ {
+ 
+ 	u16 irq_base;
+@@ -969,17 +956,12 @@ int __init init_arch_irq(void)
+ #if defined(CONFIG_BF54x) || defined(CONFIG_BF52x) || defined(CONFIG_BF561)
+ 	bfin_write_SIC_IMASK0(SIC_UNMASK_ALL);
+ 	bfin_write_SIC_IMASK1(SIC_UNMASK_ALL);
+-	bfin_write_SIC_IWR0(IWR_ENABLE_ALL);
+-	bfin_write_SIC_IWR1(IWR_ENABLE_ALL);
+ # ifdef CONFIG_BF54x
+ 	bfin_write_SIC_IMASK2(SIC_UNMASK_ALL);
+-	bfin_write_SIC_IWR2(IWR_ENABLE_ALL);
+ # endif
+ #else
+ 	bfin_write_SIC_IMASK(SIC_UNMASK_ALL);
+-	bfin_write_SIC_IWR(IWR_ENABLE_ALL);
+ #endif
+-	SSYNC();
+ 
+ 	local_irq_disable();
+ 
+@@ -1001,90 +983,53 @@ int __init init_arch_irq(void)
+ 			set_irq_chip(irq, &bfin_core_irqchip);
+ 		else
+ 			set_irq_chip(irq, &bfin_internal_irqchip);
+-#ifdef BF537_GENERIC_ERROR_INT_DEMUX
+-		if (irq != IRQ_GENERIC_ERROR) {
+-#endif
+ 
+-			switch (irq) {
++		switch (irq) {
+ #if defined(CONFIG_BF53x)
+-			case IRQ_PROG_INTA:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
++		case IRQ_PROG_INTA:
+ # if defined(BF537_FAMILY) && !(defined(CONFIG_BFIN_MAC) || defined(CONFIG_BFIN_MAC_MODULE))
+-			case IRQ_MAC_RX:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
++		case IRQ_MAC_RX:
+ # endif
+ #elif defined(CONFIG_BF54x)
+-			case IRQ_PINT0:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
+-			case IRQ_PINT1:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
+-			case IRQ_PINT2:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
+-			case IRQ_PINT3:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
++		case IRQ_PINT0:
++		case IRQ_PINT1:
++		case IRQ_PINT2:
++		case IRQ_PINT3:
+ #elif defined(CONFIG_BF52x)
+-			case IRQ_PORTF_INTA:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
+-			case IRQ_PORTG_INTA:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
+-			case IRQ_PORTH_INTA:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
++		case IRQ_PORTF_INTA:
++		case IRQ_PORTG_INTA:
++		case IRQ_PORTH_INTA:
+ #elif defined(CONFIG_BF561)
+-			case IRQ_PROG0_INTA:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
+-			case IRQ_PROG1_INTA:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
+-			case IRQ_PROG2_INTA:
+-				set_irq_chained_handler(irq,
+-							bfin_demux_gpio_irq);
+-				break;
++		case IRQ_PROG0_INTA:
++		case IRQ_PROG1_INTA:
++		case IRQ_PROG2_INTA:
+ #endif
+-			default:
+-				set_irq_handler(irq, handle_simple_irq);
+-				break;
+-			}
+-
++			set_irq_chained_handler(irq,
++						bfin_demux_gpio_irq);
++			break;
+ #ifdef BF537_GENERIC_ERROR_INT_DEMUX
+-		} else {
++		case IRQ_GENERIC_ERROR:
+ 			set_irq_handler(irq, bfin_demux_error_irq);
+-		}
++
++			break;
+ #endif
++		default:
++			set_irq_handler(irq, handle_simple_irq);
++			break;
++		}
+ 	}
++
+ #ifdef BF537_GENERIC_ERROR_INT_DEMUX
+-	for (irq = IRQ_PPI_ERROR; irq <= IRQ_UART1_ERROR; irq++) {
+-		set_irq_chip(irq, &bfin_generic_error_irqchip);
+-		set_irq_handler(irq, handle_level_irq);
+-	}
++	for (irq = IRQ_PPI_ERROR; irq <= IRQ_UART1_ERROR; irq++)
++		set_irq_chip_and_handler(irq, &bfin_generic_error_irqchip,
++					 handle_level_irq);
+ #endif
+ 
+-	for (irq = GPIO_IRQ_BASE; irq < NR_IRQS; irq++) {
++	/* if configured as edge, then will be changed to do_edge_IRQ */
++	for (irq = GPIO_IRQ_BASE; irq < NR_IRQS; irq++)
++		set_irq_chip_and_handler(irq, &bfin_gpio_irqchip,
++					 handle_level_irq);
+ 
+-		set_irq_chip(irq, &bfin_gpio_irqchip);
+-		/* if configured as edge, then will be changed to do_edge_IRQ */
+-		set_irq_handler(irq, handle_level_irq);
+-	}
+ 
+ 	bfin_write_IMASK(0);
+ 	CSYNC();
+@@ -1106,6 +1051,16 @@ int __init init_arch_irq(void)
+ 	    IMASK_IVG14 | IMASK_IVG13 | IMASK_IVG12 | IMASK_IVG11 |
+ 	    IMASK_IVG10 | IMASK_IVG9 | IMASK_IVG8 | IMASK_IVG7 | IMASK_IVGHW;
+ 
++#if defined(CONFIG_BF54x) || defined(CONFIG_BF52x) || defined(CONFIG_BF561)
++	bfin_write_SIC_IWR0(IWR_ENABLE_ALL);
++	bfin_write_SIC_IWR1(IWR_ENABLE_ALL);
++# ifdef CONFIG_BF54x
++	bfin_write_SIC_IWR2(IWR_ENABLE_ALL);
++# endif
++#else
++	bfin_write_SIC_IWR(IWR_ENABLE_ALL);
++#endif
++
+ 	return 0;
+ }
+ 
+@@ -1122,7 +1077,6 @@ void do_irq(int vec, struct pt_regs *fp)
+ #if defined(CONFIG_BF54x) || defined(CONFIG_BF52x) || defined(CONFIG_BF561)
+ 		unsigned long sic_status[3];
+ 
+-		SSYNC();
+ 		sic_status[0] = bfin_read_SIC_ISR0() & bfin_read_SIC_IMASK0();
+ 		sic_status[1] = bfin_read_SIC_ISR1() & bfin_read_SIC_IMASK1();
+ #ifdef CONFIG_BF54x
+@@ -1138,7 +1092,7 @@ void do_irq(int vec, struct pt_regs *fp)
+ 		}
+ #else
+ 		unsigned long sic_status;
+-		SSYNC();
++
+ 		sic_status = bfin_read_SIC_IMASK() & bfin_read_SIC_ISR();
+ 
+ 		for (;; ivg++) {
+diff --git a/arch/blackfin/mm/init.c b/arch/blackfin/mm/init.c
+index 1f516c5..ec3141f 100644
+--- a/arch/blackfin/mm/init.c
++++ b/arch/blackfin/mm/init.c
+@@ -181,7 +181,7 @@ void __init mem_init(void)
+ 	}
+ }
+ 
+-static __init void free_init_pages(const char *what, unsigned long begin, unsigned long end)
++static void __init free_init_pages(const char *what, unsigned long begin, unsigned long end)
+ {
+ 	unsigned long addr;
+ 	/* next to check that the page we free is not a partial page */
+@@ -203,7 +203,7 @@ void __init free_initrd_mem(unsigned long start, unsigned long end)
+ }
+ #endif
+ 
+-void __init free_initmem(void)
++void __init_refok free_initmem(void)
+ {
+ #if defined CONFIG_RAMKERNEL && !defined CONFIG_MPU
+ 	free_init_pages("unused kernel memory",
+diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
+index b3400b5..783cfbb 100644
+--- a/arch/sh/Kconfig
++++ b/arch/sh/Kconfig
+@@ -330,6 +330,7 @@ config CPU_SUBTYPE_SH5_101
+ 
+ config CPU_SUBTYPE_SH5_103
+ 	bool "Support SH5-103 processor"
++	select CPU_SH5
+ 
+ endchoice
+ 
+diff --git a/arch/sh/drivers/dma/dma-sh.c b/arch/sh/drivers/dma/dma-sh.c
+index 5c33597..71ff3d6 100644
+--- a/arch/sh/drivers/dma/dma-sh.c
++++ b/arch/sh/drivers/dma/dma-sh.c
+@@ -90,7 +90,7 @@ static irqreturn_t dma_tei(int irq, void *dev_id)
+ 
+ static int sh_dmac_request_dma(struct dma_channel *chan)
+ {
+-	if (unlikely(!chan->flags & DMA_TEI_CAPABLE))
++	if (unlikely(!(chan->flags & DMA_TEI_CAPABLE)))
+ 		return 0;
+ 
+ 	return request_irq(get_dmte_irq(chan->chan), dma_tei,
+diff --git a/arch/sh/drivers/heartbeat.c b/arch/sh/drivers/heartbeat.c
+index b76a14f..ab77b0e 100644
+--- a/arch/sh/drivers/heartbeat.c
++++ b/arch/sh/drivers/heartbeat.c
+@@ -93,7 +93,7 @@ static int heartbeat_drv_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	hd->base = ioremap_nocache(res->start, res->end - res->start + 1);
+-	if (!unlikely(hd->base)) {
++	if (unlikely(!hd->base)) {
+ 		dev_err(&pdev->dev, "ioremap failed\n");
+ 
+ 		if (!pdev->dev.platform_data)
+diff --git a/arch/sh/drivers/pci/ops-dreamcast.c b/arch/sh/drivers/pci/ops-dreamcast.c
+index 0dac87b..e1284fc 100644
+--- a/arch/sh/drivers/pci/ops-dreamcast.c
++++ b/arch/sh/drivers/pci/ops-dreamcast.c
+@@ -83,9 +83,9 @@ static int gapspci_read(struct pci_bus *bus, unsigned int devfn, int where, int
+ 		return PCIBIOS_DEVICE_NOT_FOUND;
+ 
+ 	switch (size) {
+-		case 1: *val = ctrl_inb(GAPSPCI_BBA_CONFIG+where); break;
+-		case 2: *val = ctrl_inw(GAPSPCI_BBA_CONFIG+where); break;
+-		case 4: *val = ctrl_inl(GAPSPCI_BBA_CONFIG+where); break;
++		case 1: *val = inb(GAPSPCI_BBA_CONFIG+where); break;
++		case 2: *val = inw(GAPSPCI_BBA_CONFIG+where); break;
++		case 4: *val = inl(GAPSPCI_BBA_CONFIG+where); break;
+ 	}	
+ 
+         return PCIBIOS_SUCCESSFUL;
+@@ -97,9 +97,9 @@ static int gapspci_write(struct pci_bus *bus, unsigned int devfn, int where, int
+ 		return PCIBIOS_DEVICE_NOT_FOUND;
+ 
+ 	switch (size) {
+-		case 1: ctrl_outb(( u8)val, GAPSPCI_BBA_CONFIG+where); break;
+-		case 2: ctrl_outw((u16)val, GAPSPCI_BBA_CONFIG+where); break;
+-		case 4: ctrl_outl((u32)val, GAPSPCI_BBA_CONFIG+where); break;
++		case 1: outb(( u8)val, GAPSPCI_BBA_CONFIG+where); break;
++		case 2: outw((u16)val, GAPSPCI_BBA_CONFIG+where); break;
++		case 4: outl((u32)val, GAPSPCI_BBA_CONFIG+where); break;
+ 	}
+ 
+         return PCIBIOS_SUCCESSFUL;
+@@ -127,36 +127,36 @@ int __init gapspci_init(void)
+ 	 */
+ 
+ 	for (i=0; i<16; i++)
+-		idbuf[i] = ctrl_inb(GAPSPCI_REGS+i);
++		idbuf[i] = inb(GAPSPCI_REGS+i);
+ 
+ 	if (strncmp(idbuf, "GAPSPCI_BRIDGE_2", 16))
+ 		return -ENODEV;
+ 
+-	ctrl_outl(0x5a14a501, GAPSPCI_REGS+0x18);
++	outl(0x5a14a501, GAPSPCI_REGS+0x18);
+ 
+ 	for (i=0; i<1000000; i++)
+ 		;
+ 
+-	if (ctrl_inl(GAPSPCI_REGS+0x18) != 1)
++	if (inl(GAPSPCI_REGS+0x18) != 1)
+ 		return -EINVAL;
+ 
+-	ctrl_outl(0x01000000, GAPSPCI_REGS+0x20);
+-	ctrl_outl(0x01000000, GAPSPCI_REGS+0x24);
++	outl(0x01000000, GAPSPCI_REGS+0x20);
++	outl(0x01000000, GAPSPCI_REGS+0x24);
+ 
+-	ctrl_outl(GAPSPCI_DMA_BASE, GAPSPCI_REGS+0x28);
+-	ctrl_outl(GAPSPCI_DMA_BASE+GAPSPCI_DMA_SIZE, GAPSPCI_REGS+0x2c);
++	outl(GAPSPCI_DMA_BASE, GAPSPCI_REGS+0x28);
++	outl(GAPSPCI_DMA_BASE+GAPSPCI_DMA_SIZE, GAPSPCI_REGS+0x2c);
+ 
+-	ctrl_outl(1, GAPSPCI_REGS+0x14);
+-	ctrl_outl(1, GAPSPCI_REGS+0x34);
++	outl(1, GAPSPCI_REGS+0x14);
++	outl(1, GAPSPCI_REGS+0x34);
+ 
+ 	/* Setting Broadband Adapter */
+-	ctrl_outw(0xf900, GAPSPCI_BBA_CONFIG+0x06);
+-	ctrl_outl(0x00000000, GAPSPCI_BBA_CONFIG+0x30);
+-	ctrl_outb(0x00, GAPSPCI_BBA_CONFIG+0x3c);
+-	ctrl_outb(0xf0, GAPSPCI_BBA_CONFIG+0x0d);
+-	ctrl_outw(0x0006, GAPSPCI_BBA_CONFIG+0x04);
+-	ctrl_outl(0x00002001, GAPSPCI_BBA_CONFIG+0x10);
+-	ctrl_outl(0x01000000, GAPSPCI_BBA_CONFIG+0x14);
++	outw(0xf900, GAPSPCI_BBA_CONFIG+0x06);
++	outl(0x00000000, GAPSPCI_BBA_CONFIG+0x30);
++	outb(0x00, GAPSPCI_BBA_CONFIG+0x3c);
++	outb(0xf0, GAPSPCI_BBA_CONFIG+0x0d);
++	outw(0x0006, GAPSPCI_BBA_CONFIG+0x04);
++	outl(0x00002001, GAPSPCI_BBA_CONFIG+0x10);
++	outl(0x01000000, GAPSPCI_BBA_CONFIG+0x14);
+ 
+ 	return 0;
+ }
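
The Dreamcast GAPS PCI hunks above replace the SuperH-specific ctrl_inb()/ctrl_outl() family with the generic port accessors inb()/inw()/inl() and outb()/outw()/outl(); the register offsets and values are unchanged, only the accessor family differs. A hedged sketch of one such size-dispatched read, with a hypothetical wrapper name standing in for gapspci_read():

	#include <linux/io.h>

	/* Hypothetical helper mirroring the conversion above: config-space
	 * reads routed through the generic port I/O API instead of ctrl_*(). */
	static u32 gaps_cfg_read(unsigned long cfg_base, int where, int size)
	{
		switch (size) {
		case 1:
			return inb(cfg_base + where);	/* was ctrl_inb() */
		case 2:
			return inw(cfg_base + where);	/* was ctrl_inw() */
		default:
			return inl(cfg_base + where);	/* was ctrl_inl() */
		}
	}
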
+diff --git a/arch/sh/kernel/cpu/sh2/setup-sh7619.c b/arch/sh/kernel/cpu/sh2/setup-sh7619.c
+index b230eb2..cc530f4 100644
+--- a/arch/sh/kernel/cpu/sh2/setup-sh7619.c
++++ b/arch/sh/kernel/cpu/sh2/setup-sh7619.c
+@@ -10,7 +10,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ enum {
+ 	UNUSED = 0,
+diff --git a/arch/sh/kernel/cpu/sh2a/clock-sh7203.c b/arch/sh/kernel/cpu/sh2a/clock-sh7203.c
+index 3feb95a..fb78132 100644
+--- a/arch/sh/kernel/cpu/sh2a/clock-sh7203.c
++++ b/arch/sh/kernel/cpu/sh2a/clock-sh7203.c
+@@ -21,8 +21,8 @@
+ #include <asm/freq.h>
+ #include <asm/io.h>
+ 
+-const static int pll1rate[]={8,12,16,0};
+-const static int pfc_divisors[]={1,2,3,4,6,8,12};
++static const int pll1rate[]={8,12,16,0};
++static const int pfc_divisors[]={1,2,3,4,6,8,12};
+ #define ifc_divisors pfc_divisors
+ 
+ #if (CONFIG_SH_CLK_MD == 0)
+diff --git a/arch/sh/kernel/cpu/sh2a/setup-sh7203.c b/arch/sh/kernel/cpu/sh2a/setup-sh7203.c
+index db6ef5c..e98dc44 100644
+--- a/arch/sh/kernel/cpu/sh2a/setup-sh7203.c
++++ b/arch/sh/kernel/cpu/sh2a/setup-sh7203.c
+@@ -10,7 +10,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ enum {
+ 	UNUSED = 0,
+diff --git a/arch/sh/kernel/cpu/sh2a/setup-sh7206.c b/arch/sh/kernel/cpu/sh2a/setup-sh7206.c
+index a564425..e6d4ec4 100644
+--- a/arch/sh/kernel/cpu/sh2a/setup-sh7206.c
++++ b/arch/sh/kernel/cpu/sh2a/setup-sh7206.c
+@@ -10,7 +10,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ enum {
+ 	UNUSED = 0,
+diff --git a/arch/sh/kernel/cpu/sh3/probe.c b/arch/sh/kernel/cpu/sh3/probe.c
+index fcc80bb..10f2a76 100644
+--- a/arch/sh/kernel/cpu/sh3/probe.c
++++ b/arch/sh/kernel/cpu/sh3/probe.c
+@@ -94,9 +94,9 @@ int __uses_jump_to_uncached detect_cpu_and_cache_system(void)
+ 		boot_cpu_data.dcache.way_incr	= (1 << 13);
+ 		boot_cpu_data.dcache.entry_mask	= 0x1ff0;
+ 		boot_cpu_data.dcache.sets	= 512;
+-		ctrl_outl(CCR_CACHE_32KB, CCR3);
++		ctrl_outl(CCR_CACHE_32KB, CCR3_REG);
+ #else
+-		ctrl_outl(CCR_CACHE_16KB, CCR3);
++		ctrl_outl(CCR_CACHE_16KB, CCR3_REG);
+ #endif
+ #endif
+ 	}
+diff --git a/arch/sh/kernel/cpu/sh3/setup-sh7705.c b/arch/sh/kernel/cpu/sh3/setup-sh7705.c
+index dd0a20a..f581534 100644
+--- a/arch/sh/kernel/cpu/sh3/setup-sh7705.c
++++ b/arch/sh/kernel/cpu/sh3/setup-sh7705.c
+@@ -12,7 +12,7 @@
+ #include <linux/init.h>
+ #include <linux/irq.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ #include <asm/rtc.h>
+ 
+ enum {
+diff --git a/arch/sh/kernel/cpu/sh3/setup-sh770x.c b/arch/sh/kernel/cpu/sh3/setup-sh770x.c
+index 969804b..d3733b1 100644
+--- a/arch/sh/kernel/cpu/sh3/setup-sh770x.c
++++ b/arch/sh/kernel/cpu/sh3/setup-sh770x.c
+@@ -16,7 +16,7 @@
+ #include <linux/irq.h>
+ #include <linux/platform_device.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ enum {
+ 	UNUSED = 0,
+@@ -123,15 +123,15 @@ static struct resource rtc_resources[] = {
+ 		.flags  = IORESOURCE_IO,
+ 	},
+ 	[1] =	{
+-		.start  = 20,
++		.start  = 21,
+ 		.flags	= IORESOURCE_IRQ,
+ 	},
+ 	[2] =	{
+-		.start	= 21,
++		.start	= 22,
+ 		.flags	= IORESOURCE_IRQ,
+ 	},
+ 	[3] =	{
+-		.start	= 22,
++		.start	= 20,
+ 		.flags  = IORESOURCE_IRQ,
+ 	},
+ };
+diff --git a/arch/sh/kernel/cpu/sh3/setup-sh7710.c b/arch/sh/kernel/cpu/sh3/setup-sh7710.c
+index 0cc0e2b..7406c9a 100644
+--- a/arch/sh/kernel/cpu/sh3/setup-sh7710.c
++++ b/arch/sh/kernel/cpu/sh3/setup-sh7710.c
+@@ -12,7 +12,7 @@
+ #include <linux/init.h>
+ #include <linux/irq.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ #include <asm/rtc.h>
+ 
+ enum {
+diff --git a/arch/sh/kernel/cpu/sh3/setup-sh7720.c b/arch/sh/kernel/cpu/sh3/setup-sh7720.c
+index 3855ea4..8028082 100644
+--- a/arch/sh/kernel/cpu/sh3/setup-sh7720.c
++++ b/arch/sh/kernel/cpu/sh3/setup-sh7720.c
+@@ -16,7 +16,7 @@
+ #include <linux/init.h>
+ #include <linux/serial.h>
+ #include <linux/io.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ #include <asm/rtc.h>
+ 
+ #define INTC_ICR1	0xA4140010UL
+diff --git a/arch/sh/kernel/cpu/sh4/setup-sh4-202.c b/arch/sh/kernel/cpu/sh4/setup-sh4-202.c
+index dab1932..7371abf 100644
+--- a/arch/sh/kernel/cpu/sh4/setup-sh4-202.c
++++ b/arch/sh/kernel/cpu/sh4/setup-sh4-202.c
+@@ -10,7 +10,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ static struct plat_sci_port sci_platform_data[] = {
+ 	{
+diff --git a/arch/sh/kernel/cpu/sh4/setup-sh7750.c b/arch/sh/kernel/cpu/sh4/setup-sh7750.c
+index ae3603a..ec88403 100644
+--- a/arch/sh/kernel/cpu/sh4/setup-sh7750.c
++++ b/arch/sh/kernel/cpu/sh4/setup-sh7750.c
+@@ -12,7 +12,7 @@
+ #include <linux/init.h>
+ #include <linux/serial.h>
+ #include <linux/io.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ static struct resource rtc_resources[] = {
+ 	[0] = {
+diff --git a/arch/sh/kernel/cpu/sh4/setup-sh7760.c b/arch/sh/kernel/cpu/sh4/setup-sh7760.c
+index 85f8157..254c5c5 100644
+--- a/arch/sh/kernel/cpu/sh4/setup-sh7760.c
++++ b/arch/sh/kernel/cpu/sh4/setup-sh7760.c
+@@ -10,7 +10,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ enum {
+ 	UNUSED = 0,
+diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7343.c b/arch/sh/kernel/cpu/sh4a/setup-sh7343.c
+index c0a3f07..6d4f50c 100644
+--- a/arch/sh/kernel/cpu/sh4a/setup-sh7343.c
++++ b/arch/sh/kernel/cpu/sh4a/setup-sh7343.c
+@@ -10,7 +10,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ static struct plat_sci_port sci_platform_data[] = {
+ 	{
+diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7366.c b/arch/sh/kernel/cpu/sh4a/setup-sh7366.c
+index 967e8b6..f26b5cd 100644
+--- a/arch/sh/kernel/cpu/sh4a/setup-sh7366.c
++++ b/arch/sh/kernel/cpu/sh4a/setup-sh7366.c
+@@ -12,7 +12,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ static struct plat_sci_port sci_platform_data[] = {
+ 	{
+diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7722.c b/arch/sh/kernel/cpu/sh4a/setup-sh7722.c
+index 73c778d..b98b4bc 100644
+--- a/arch/sh/kernel/cpu/sh4a/setup-sh7722.c
++++ b/arch/sh/kernel/cpu/sh4a/setup-sh7722.c
+@@ -10,9 +10,9 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
++#include <linux/serial_sci.h>
+ #include <linux/mm.h>
+ #include <asm/mmzone.h>
+-#include <asm/sci.h>
+ 
+ static struct resource usbf_resources[] = {
+ 	[0] = {
+diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7763.c b/arch/sh/kernel/cpu/sh4a/setup-sh7763.c
+index eabd538..07c988d 100644
+--- a/arch/sh/kernel/cpu/sh4a/setup-sh7763.c
++++ b/arch/sh/kernel/cpu/sh4a/setup-sh7763.c
+@@ -12,7 +12,7 @@
+ #include <linux/init.h>
+ #include <linux/serial.h>
+ #include <linux/io.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ static struct resource rtc_resources[] = {
+ 	[0] = {
+diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7770.c b/arch/sh/kernel/cpu/sh4a/setup-sh7770.c
+index 32f4f59..b9cec48 100644
+--- a/arch/sh/kernel/cpu/sh4a/setup-sh7770.c
++++ b/arch/sh/kernel/cpu/sh4a/setup-sh7770.c
+@@ -10,7 +10,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ static struct plat_sci_port sci_platform_data[] = {
+ 	{
+diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7780.c b/arch/sh/kernel/cpu/sh4a/setup-sh7780.c
+index 293004b..18dbbe2 100644
+--- a/arch/sh/kernel/cpu/sh4a/setup-sh7780.c
++++ b/arch/sh/kernel/cpu/sh4a/setup-sh7780.c
+@@ -11,7 +11,7 @@
+ #include <linux/init.h>
+ #include <linux/serial.h>
+ #include <linux/io.h>
+-#include <asm/sci.h>
++#include <linux/serial_sci.h>
+ 
+ static struct resource rtc_resources[] = {
+ 	[0] = {
+diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7785.c b/arch/sh/kernel/cpu/sh4a/setup-sh7785.c
+index 74b60e9..621e732 100644
+--- a/arch/sh/kernel/cpu/sh4a/setup-sh7785.c
++++ b/arch/sh/kernel/cpu/sh4a/setup-sh7785.c
+@@ -10,10 +10,10 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
++#include <linux/serial_sci.h>
+ #include <linux/io.h>
+ #include <linux/mm.h>
+ #include <asm/mmzone.h>
+-#include <asm/sci.h>
+ 
+ static struct plat_sci_port sci_platform_data[] = {
+ 	{
+diff --git a/arch/sh/kernel/cpu/sh4a/setup-shx3.c b/arch/sh/kernel/cpu/sh4a/setup-shx3.c
+index 4dc958b..bd35f32 100644
+--- a/arch/sh/kernel/cpu/sh4a/setup-shx3.c
++++ b/arch/sh/kernel/cpu/sh4a/setup-shx3.c
+@@ -10,9 +10,9 @@
+ #include <linux/platform_device.h>
+ #include <linux/init.h>
+ #include <linux/serial.h>
++#include <linux/serial_sci.h>
+ #include <linux/io.h>
+ #include <asm/mmzone.h>
+-#include <asm/sci.h>
+ 
+ static struct plat_sci_port sci_platform_data[] = {
+ 	{
+diff --git a/arch/sparc/kernel/led.c b/arch/sparc/kernel/led.c
+index 313d162..59e9344 100644
+--- a/arch/sparc/kernel/led.c
++++ b/arch/sparc/kernel/led.c
+@@ -3,6 +3,9 @@
+ #include <linux/init.h>
+ #include <linux/proc_fs.h>
+ #include <linux/string.h>
++#include <linux/jiffies.h>
++#include <linux/timer.h>
++#include <linux/uaccess.h>
+ 
+ #include <asm/auxio.h>
+ 
+diff --git a/arch/sparc64/kernel/ds.c b/arch/sparc64/kernel/ds.c
+index eeb5a2f..bd76482 100644
+--- a/arch/sparc64/kernel/ds.c
++++ b/arch/sparc64/kernel/ds.c
+@@ -525,10 +525,10 @@ static void dr_cpu_mark(struct ds_data *resp, int cpu, int ncpus,
+ 	}
+ }
+ 
+-static int dr_cpu_configure(struct ds_info *dp,
+-			    struct ds_cap_state *cp,
+-			    u64 req_num,
+-			    cpumask_t *mask)
++static int __cpuinit dr_cpu_configure(struct ds_info *dp,
++				      struct ds_cap_state *cp,
++				      u64 req_num,
++				      cpumask_t *mask)
+ {
+ 	struct ds_data *resp;
+ 	int resp_len, ncpus, cpu;
+@@ -623,9 +623,9 @@ static int dr_cpu_unconfigure(struct ds_info *dp,
+ 	return 0;
+ }
+ 
+-static void dr_cpu_data(struct ds_info *dp,
+-			struct ds_cap_state *cp,
+-			void *buf, int len)
++static void __cpuinit dr_cpu_data(struct ds_info *dp,
++				  struct ds_cap_state *cp,
++				  void *buf, int len)
+ {
+ 	struct ds_data *data = buf;
+ 	struct dr_cpu_tag *tag = (struct dr_cpu_tag *) (data + 1);
+diff --git a/arch/sparc64/kernel/mdesc.c b/arch/sparc64/kernel/mdesc.c
+index 856659b..9100835 100644
+--- a/arch/sparc64/kernel/mdesc.c
++++ b/arch/sparc64/kernel/mdesc.c
+@@ -758,7 +758,7 @@ static void __devinit get_mondo_data(struct mdesc_handle *hp, u64 mp,
+ 	get_one_mondo_bits(val, &tb->nonresum_qmask, 2);
+ }
+ 
+-void __devinit mdesc_fill_in_cpu_data(cpumask_t mask)
++void __cpuinit mdesc_fill_in_cpu_data(cpumask_t mask)
+ {
+ 	struct mdesc_handle *hp = mdesc_grab();
+ 	u64 mp;
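
The sparc64 hunks annotate the CPU-hotplug configure path in ds.c with __cpuinit and move mdesc_fill_in_cpu_data() from __devinit to __cpuinit. These markers place the code in init sections that may be discarded after boot (depending on CONFIG_HOTPLUG_CPU), and the whole call chain has to agree on the annotation so modpost does not warn about references from regular .text into discardable .cpuinit.text. An illustrative sketch with hypothetical helpers, not the sparc64 code itself:

	#include <linux/init.h>

	/* Hypothetical helpers: caller and callee share the __cpuinit
	 * annotation, so the reference stays within .cpuinit.text. */
	static int __cpuinit bring_up_cpu(int cpu)
	{
		return 0;
	}

	static void __cpuinit handle_cpu_configure(int cpu)
	{
		bring_up_cpu(cpu);
	}
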
+diff --git a/arch/sparc64/mm/fault.c b/arch/sparc64/mm/fault.c
+index e2027f2..2650d0d 100644
+--- a/arch/sparc64/mm/fault.c
++++ b/arch/sparc64/mm/fault.c
+@@ -244,16 +244,8 @@ static void do_kernel_fault(struct pt_regs *regs, int si_code, int fault_code,
+ 	if (regs->tstate & TSTATE_PRIV) {
+ 		const struct exception_table_entry *entry;
+ 
+-		if (asi == ASI_P && (insn & 0xc0800000) == 0xc0800000) {
+-			if (insn & 0x2000)
+-				asi = (regs->tstate >> 24);
+-			else
+-				asi = (insn >> 5);
+-		}
+-	
+-		/* Look in asi.h: All _S asis have LS bit set */
+-		if ((asi & 0x1) &&
+-		    (entry = search_exception_tables(regs->tpc))) {
++		entry = search_exception_tables(regs->tpc);
++		if (entry) {
+ 			regs->tpc = entry->fixup;
+ 			regs->tnpc = regs->tpc + 4;
+ 			return;
+@@ -294,7 +286,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
+ 		unsigned long tpc = regs->tpc;
+ 
+ 		/* Sanity check the PC. */
+-		if ((tpc >= KERNBASE && tpc < (unsigned long) _etext) ||
++		if ((tpc >= KERNBASE && tpc < (unsigned long) __init_end) ||
+ 		    (tpc >= MODULES_VADDR && tpc < MODULES_END)) {
+ 			/* Valid, no problems... */
+ 		} else {
+diff --git a/arch/sparc64/mm/init.c b/arch/sparc64/mm/init.c
+index 9e6bca2..b5c3041 100644
+--- a/arch/sparc64/mm/init.c
++++ b/arch/sparc64/mm/init.c
+@@ -1010,7 +1010,8 @@ static struct linux_prom64_registers pall[MAX_BANKS] __initdata;
+ static int pall_ents __initdata;
+ 
+ #ifdef CONFIG_DEBUG_PAGEALLOC
+-static unsigned long kernel_map_range(unsigned long pstart, unsigned long pend, pgprot_t prot)
++static unsigned long __ref kernel_map_range(unsigned long pstart,
++					    unsigned long pend, pgprot_t prot)
+ {
+ 	unsigned long vstart = PAGE_OFFSET + pstart;
+ 	unsigned long vend = PAGE_OFFSET + pend;
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index fc50d2f..e8cb9ff 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -128,8 +128,6 @@ void *get_current(void)
+ 	return current;
+ }
+ 
+-extern void schedule_tail(struct task_struct *prev);
+-
+ /*
+  * This is called magically, by its address being stuffed in a jmp_buf
+  * and being longjmp-d to.
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index e09a6b7..6d50064 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -377,6 +377,19 @@ config X86_OOSTORE
+ 	def_bool y
+ 	depends on (MWINCHIP3D || MWINCHIP2 || MWINCHIPC6) && MTRR
+ 
++#
++# P6_NOPs are a relatively minor optimization that require a family >=
++# 6 processor, except that it is broken on certain VIA chips.
++# Furthermore, AMD chips prefer a totally different sequence of NOPs
++# (which work on all CPUs).  As a result, disallow these if we're
++# compiling X86_GENERIC but not X86_64 (these NOPs do work on all
++# x86-64 capable chips); the list of processors in the right-hand clause
++# are the cores that benefit from this optimization.
++#
++config X86_P6_NOP
++	def_bool y
++	depends on (X86_64 || !X86_GENERIC) && (M686 || MPENTIUMII || MPENTIUMIII || MPENTIUMM || MCORE2 || PENTIUM4)
++
+ config X86_TSC
+ 	def_bool y
+ 	depends on ((MWINCHIP3D || MWINCHIP2 || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2) && !X86_NUMAQ) || X86_64
+@@ -390,6 +403,7 @@ config X86_CMOV
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+ 	default "64" if X86_64
++	default "6" if X86_32 && X86_P6_NOP
+ 	default "4" if X86_32 && (X86_XADD || X86_CMPXCHG || X86_BSWAP || X86_WP_WORKS_OK)
+ 	default "3"
+ 
+diff --git a/arch/x86/boot/memory.c b/arch/x86/boot/memory.c
+index 3783539..e77d89f 100644
+--- a/arch/x86/boot/memory.c
++++ b/arch/x86/boot/memory.c
+@@ -37,6 +37,12 @@ static int detect_memory_e820(void)
+ 		      "=m" (*desc)
+ 		    : "D" (desc), "d" (SMAP), "a" (0xe820));
+ 
++		/* BIOSes which terminate the chain with CF = 1 as opposed
++		   to %ebx = 0 don't always report the SMAP signature on
++		   the final, failing, probe. */
++		if (err)
++			break;
++
+ 		/* Some BIOSes stop returning SMAP in the middle of
+ 		   the search loop.  We don't know exactly how the BIOS
+ 		   screwed up the map at that point, we might have a
+@@ -47,9 +53,6 @@ static int detect_memory_e820(void)
+ 			break;
+ 		}
+ 
+-		if (err)
+-			break;
+-
+ 		count++;
+ 		desc++;
+ 	} while (next && count < E820MAX);
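
The boot/memory.c hunk reorders the checks inside the E820 probe loop so the carry-flag error is tested before the SMAP signature. Some BIOSes end the chain by setting CF=1 on the final call without echoing "SMAP"; with the old order that last, failing probe could look like a corrupted map and zero out the count collected so far. A hedged sketch of the control flow, with a hypothetical probe_once() standing in for the INT 0x15/AX=0xE820 call and a stub entry type:

	#define SMAP	0x534d4150	/* "SMAP" echoed in %eax */
	#define E820MAX	128

	struct e820entry_stub { unsigned long long addr, size; unsigned int type; };

	/* Hypothetical probe: returns nonzero when the BIOS sets CF, and
	 * fills *sig (from %eax) and *next (continuation value from %ebx). */
	extern int probe_once(struct e820entry_stub *desc,
			      unsigned int *sig, unsigned int *next);

	static int detect_e820_sketch(struct e820entry_stub *map)
	{
		unsigned int sig, next = 0;
		int count = 0;

		do {
			if (probe_once(&map[count], &sig, &next))
				break;		/* CF=1: clean end of the chain */
			if (sig != SMAP) {	/* BIOS stopped echoing "SMAP" */
				count = 0;	/* map is unusable, discard it */
				break;
			}
			count++;
		} while (next && count < E820MAX);

		return count;
	}
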
+diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c
+index a33d530..8ea0401 100644
+--- a/arch/x86/kernel/asm-offsets_32.c
++++ b/arch/x86/kernel/asm-offsets_32.c
+@@ -128,13 +128,11 @@ void foo(void)
+ 	OFFSET(XEN_vcpu_info_pending, vcpu_info, evtchn_upcall_pending);
+ #endif
+ 
+-#ifdef CONFIG_LGUEST_GUEST
++#if defined(CONFIG_LGUEST) || defined(CONFIG_LGUEST_GUEST) || defined(CONFIG_LGUEST_MODULE)
+ 	BLANK();
+ 	OFFSET(LGUEST_DATA_irq_enabled, lguest_data, irq_enabled);
+ 	OFFSET(LGUEST_DATA_pgdir, lguest_data, pgdir);
+-#endif
+ 
+-#ifdef CONFIG_LGUEST
+ 	BLANK();
+ 	OFFSET(LGUEST_PAGES_host_gdt_desc, lguest_pages, state.host_gdt_desc);
+ 	OFFSET(LGUEST_PAGES_host_idt_desc, lguest_pages, state.host_idt_desc);
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index f86a3c4..a38aafa 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -504,7 +504,7 @@ void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+ 
+ 	/* Clear all flags overriden by options */
+ 	for (i = 0; i < NCAPINTS; i++)
+-		c->x86_capability[i] ^= cleared_cpu_caps[i];
++		c->x86_capability[i] &= ~cleared_cpu_caps[i];
+ 
+ 	/* Init Machine Check Exception if available. */
+ 	mcheck_init(c);
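
This identify_cpu() hunk (and its twin in setup_64.c further down) changes how cleared_cpu_caps is applied: XOR only clears a capability bit that happens to be set and re-sets one that was already clear, whereas AND-NOT unconditionally clears exactly the requested bits and is safe to apply repeatedly. A two-line illustration:

	#include <stdio.h>

	int main(void)
	{
		unsigned int caps = 0x5, cleared = 0x3;	/* want bits 0 and 1 off */

		printf("%#x\n", caps ^ cleared);	/* 0x6: bit 1 turned ON   */
		printf("%#x\n", caps & ~cleared);	/* 0x4: bits 0, 1 cleared */
		return 0;
	}
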
+diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
+index b6e136f..be83336 100644
+--- a/arch/x86/kernel/cpu/mtrr/main.c
++++ b/arch/x86/kernel/cpu/mtrr/main.c
+@@ -43,6 +43,7 @@
+ #include <asm/uaccess.h>
+ #include <asm/processor.h>
+ #include <asm/msr.h>
++#include <asm/kvm_para.h>
+ #include "mtrr.h"
+ 
+ u32 num_var_ranges = 0;
+@@ -649,6 +650,7 @@ static __init int amd_special_default_mtrr(void)
+ 
+ /**
+  * mtrr_trim_uncached_memory - trim RAM not covered by MTRRs
++ * @end_pfn: ending page frame number
+  *
+  * Some buggy BIOSes don't setup the MTRRs properly for systems with certain
+  * memory configurations.  This routine checks that the highest MTRR matches
+@@ -688,8 +690,11 @@ int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
+ 
+ 	/* kvm/qemu doesn't have mtrr set right, don't trim them all */
+ 	if (!highest_pfn) {
+-		printk(KERN_WARNING "WARNING: strange, CPU MTRRs all blank?\n");
+-		WARN_ON(1);
++		if (!kvm_para_available()) {
++			printk(KERN_WARNING
++				"WARNING: strange, CPU MTRRs all blank?\n");
++			WARN_ON(1);
++		}
+ 		return 0;
+ 	}
+ 
+diff --git a/arch/x86/kernel/cpu/transmeta.c b/arch/x86/kernel/cpu/transmeta.c
+index 200fb3f..e8b422c 100644
+--- a/arch/x86/kernel/cpu/transmeta.c
++++ b/arch/x86/kernel/cpu/transmeta.c
+@@ -76,13 +76,6 @@ static void __cpuinit init_transmeta(struct cpuinfo_x86 *c)
+ 	/* All Transmeta CPUs have a constant TSC */
+ 	set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
+ 	
+-	/* If we can run i686 user-space code, call us an i686 */
+-#define USER686 ((1 << X86_FEATURE_TSC)|\
+-		 (1 << X86_FEATURE_CX8)|\
+-		 (1 << X86_FEATURE_CMOV))
+-        if (c->x86 == 5 && (c->x86_capability[0] & USER686) == USER686)
+-		c->x86 = 6;
+-
+ #ifdef CONFIG_SYSCTL
+ 	/* randomize_va_space slows us down enormously;
+ 	   it probably triggers retranslation of x86->native bytecode */
+diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
+index 2ad9a1b..c20c9e7 100644
+--- a/arch/x86/kernel/entry_64.S
++++ b/arch/x86/kernel/entry_64.S
+@@ -453,6 +453,7 @@ ENTRY(stub_execve)
+ 	CFI_REGISTER rip, r11
+ 	SAVE_REST
+ 	FIXUP_TOP_OF_STACK %r11
++	movq %rsp, %rcx
+ 	call sys_execve
+ 	RESTORE_TOP_OF_STACK %r11
+ 	movq %rax,RAX(%rsp)
+@@ -1036,15 +1037,16 @@ ENDPROC(child_rip)
+  *	rdi: name, rsi: argv, rdx: envp
+  *
+  * We want to fallback into:
+- *	extern long sys_execve(char *name, char **argv,char **envp, struct pt_regs regs)
++ *	extern long sys_execve(char *name, char **argv,char **envp, struct pt_regs *regs)
+  *
+  * do_sys_execve asm fallback arguments:
+- *	rdi: name, rsi: argv, rdx: envp, fake frame on the stack
++ *	rdi: name, rsi: argv, rdx: envp, rcx: fake frame on the stack
+  */
+ ENTRY(kernel_execve)
+ 	CFI_STARTPROC
+ 	FAKE_STACK_FRAME $0
+ 	SAVE_ALL	
++	movq %rsp,%rcx
+ 	call sys_execve
+ 	movq %rax, RAX(%rsp)	
+ 	RESTORE_REST
+diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
+index 25eb985..fd8ca53 100644
+--- a/arch/x86/kernel/head_32.S
++++ b/arch/x86/kernel/head_32.S
+@@ -606,7 +606,7 @@ ENTRY(_stext)
+ .section ".bss.page_aligned","wa"
+ 	.align PAGE_SIZE_asm
+ #ifdef CONFIG_X86_PAE
+-ENTRY(swapper_pg_pmd)
++swapper_pg_pmd:
+ 	.fill 1024*KPMDS,4,0
+ #else
+ ENTRY(swapper_pg_dir)
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index eb41504..a007454 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -379,18 +379,24 @@ NEXT_PAGE(level2_ident_pgt)
+ 	/* Since I easily can, map the first 1G.
+ 	 * Don't set NX because code runs from these pages.
+ 	 */
+-	PMDS(0x0000000000000000, __PAGE_KERNEL_LARGE_EXEC, PTRS_PER_PMD)
++	PMDS(0, __PAGE_KERNEL_LARGE_EXEC, PTRS_PER_PMD)
+ 
+ NEXT_PAGE(level2_kernel_pgt)
+-	/* 40MB kernel mapping. The kernel code cannot be bigger than that.
+-	   When you change this change KERNEL_TEXT_SIZE in page.h too. */
+-	/* (2^48-(2*1024*1024*1024)-((2^39)*511)-((2^30)*510)) = 0 */
+-	PMDS(0x0000000000000000, __PAGE_KERNEL_LARGE_EXEC|_PAGE_GLOBAL, KERNEL_TEXT_SIZE/PMD_SIZE)
+-	/* Module mapping starts here */
+-	.fill	(PTRS_PER_PMD - (KERNEL_TEXT_SIZE/PMD_SIZE)),8,0
++	/*
++	 * 128 MB kernel mapping. We spend a full page on this pagetable
++	 * anyway.
++	 *
++	 * The kernel code+data+bss must not be bigger than that.
++	 *
++	 * (NOTE: at +128MB starts the module area, see MODULES_VADDR.
++	 *  If you want to increase this then increase MODULES_VADDR
++	 *  too.)
++	 */
++	PMDS(0, __PAGE_KERNEL_LARGE_EXEC|_PAGE_GLOBAL,
++		KERNEL_IMAGE_SIZE/PMD_SIZE)
+ 
+ NEXT_PAGE(level2_spare_pgt)
+-	.fill   512,8,0
++	.fill   512, 8, 0
+ 
+ #undef PMDS
+ #undef NEXT_PAGE
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 429d084..235fd6c 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -368,8 +368,8 @@ static int hpet_clocksource_register(void)
+ 	return 0;
+ }
+ 
+-/*
+- * Try to setup the HPET timer
++/**
++ * hpet_enable - Try to setup the HPET timer. Returns 1 on success.
+  */
+ int __init hpet_enable(void)
+ {
+diff --git a/arch/x86/kernel/init_task.c b/arch/x86/kernel/init_task.c
+index 5b3ce79..3d01e47 100644
+--- a/arch/x86/kernel/init_task.c
++++ b/arch/x86/kernel/init_task.c
+@@ -15,6 +15,7 @@ static struct files_struct init_files = INIT_FILES;
+ static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
+ static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
+ struct mm_struct init_mm = INIT_MM(init_mm);
++EXPORT_UNUSED_SYMBOL(init_mm); /* will be removed in 2.6.26 */
+ 
+ /*
+  * Initial thread structure.
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index a7d50a5..be3c7a2 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -603,11 +603,13 @@ __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
+ 	}
+ #endif
+ 
++#ifdef X86_BTS
+ 	if (test_tsk_thread_flag(prev_p, TIF_BTS_TRACE_TS))
+ 		ptrace_bts_take_timestamp(prev_p, BTS_TASK_DEPARTS);
+ 
+ 	if (test_tsk_thread_flag(next_p, TIF_BTS_TRACE_TS))
+ 		ptrace_bts_take_timestamp(next_p, BTS_TASK_ARRIVES);
++#endif
+ 
+ 
+ 	if (!test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index b0cc8f0..3baf9b9 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -604,11 +604,13 @@ static inline void __switch_to_xtra(struct task_struct *prev_p,
+ 		memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
+ 	}
+ 
++#ifdef X86_BTS
+ 	if (test_tsk_thread_flag(prev_p, TIF_BTS_TRACE_TS))
+ 		ptrace_bts_take_timestamp(prev_p, BTS_TASK_DEPARTS);
+ 
+ 	if (test_tsk_thread_flag(next_p, TIF_BTS_TRACE_TS))
+ 		ptrace_bts_take_timestamp(next_p, BTS_TASK_ARRIVES);
++#endif
+ }
+ 
+ /*
+@@ -730,16 +732,16 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+  */
+ asmlinkage
+ long sys_execve(char __user *name, char __user * __user *argv,
+-		char __user * __user *envp, struct pt_regs regs)
++		char __user * __user *envp, struct pt_regs *regs)
+ {
+ 	long error;
+ 	char * filename;
+ 
+ 	filename = getname(name);
+ 	error = PTR_ERR(filename);
+-	if (IS_ERR(filename)) 
++	if (IS_ERR(filename))
+ 		return error;
+-	error = do_execve(filename, argv, envp, &regs); 
++	error = do_execve(filename, argv, envp, regs);
+ 	putname(filename);
+ 	return error;
+ }
+diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
+index d862e39..f41fdc9 100644
+--- a/arch/x86/kernel/ptrace.c
++++ b/arch/x86/kernel/ptrace.c
+@@ -544,6 +544,8 @@ static int ptrace_set_debugreg(struct task_struct *child,
+ 	return 0;
+ }
+ 
++#ifdef X86_BTS
++
+ static int ptrace_bts_get_size(struct task_struct *child)
+ {
+ 	if (!child->thread.ds_area_msr)
+@@ -826,6 +828,7 @@ void ptrace_bts_take_timestamp(struct task_struct *tsk,
+ 
+ 	ptrace_bts_write_record(tsk, &rec);
+ }
++#endif /* X86_BTS */
+ 
+ /*
+  * Called by kernel/ptrace.c when detaching..
+@@ -839,7 +842,9 @@ void ptrace_disable(struct task_struct *child)
+ 	clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
+ #endif
+ 	if (child->thread.ds_area_msr) {
++#ifdef X86_BTS
+ 		ptrace_bts_realloc(child, 0, 0);
++#endif
+ 		child->thread.debugctlmsr &= ~ds_debugctl_mask();
+ 		if (!child->thread.debugctlmsr)
+ 			clear_tsk_thread_flag(child, TIF_DEBUGCTLMSR);
+@@ -961,6 +966,10 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
+ 		break;
+ #endif
+ 
++	/*
++	 * These bits need more cooking - not enabled yet:
++	 */
++#ifdef X86_BTS
+ 	case PTRACE_BTS_CONFIG:
+ 		ret = ptrace_bts_config
+ 			(child, data, (struct ptrace_bts_config __user *)addr);
+@@ -988,6 +997,7 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
+ 		ret = ptrace_bts_drain
+ 			(child, data, (struct bts_struct __user *) addr);
+ 		break;
++#endif
+ 
+ 	default:
+ 		ret = ptrace_request(child, request, addr, data);
+@@ -1226,12 +1236,14 @@ asmlinkage long sys32_ptrace(long request, u32 pid, u32 addr, u32 data)
+ 	case PTRACE_SETOPTIONS:
+ 	case PTRACE_SET_THREAD_AREA:
+ 	case PTRACE_GET_THREAD_AREA:
++#ifdef X86_BTS
+ 	case PTRACE_BTS_CONFIG:
+ 	case PTRACE_BTS_STATUS:
+ 	case PTRACE_BTS_SIZE:
+ 	case PTRACE_BTS_GET:
+ 	case PTRACE_BTS_CLEAR:
+ 	case PTRACE_BTS_DRAIN:
++#endif
+ 		return sys_ptrace(request, pid, addr, data);
+ 
+ 	default:
+diff --git a/arch/x86/kernel/setup_64.c b/arch/x86/kernel/setup_64.c
+index 6fd804f..7637dc9 100644
+--- a/arch/x86/kernel/setup_64.c
++++ b/arch/x86/kernel/setup_64.c
+@@ -1021,7 +1021,7 @@ void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
+ 
+ 	/* Clear all flags overriden by options */
+ 	for (i = 0; i < NCAPINTS; i++)
+-		c->x86_capability[i] ^= cleared_cpu_caps[i];
++		c->x86_capability[i] &= ~cleared_cpu_caps[i];
+ 
+ #ifdef CONFIG_X86_MCE
+ 	mcheck_init(c);
+diff --git a/arch/x86/kernel/smpboot_64.c b/arch/x86/kernel/smpboot_64.c
+index d53bd6f..0880f2c 100644
+--- a/arch/x86/kernel/smpboot_64.c
++++ b/arch/x86/kernel/smpboot_64.c
+@@ -554,10 +554,10 @@ static int __cpuinit do_boot_cpu(int cpu, int apicid)
+ 	int timeout;
+ 	unsigned long start_rip;
+ 	struct create_idle c_idle = {
+-		.work = __WORK_INITIALIZER(c_idle.work, do_fork_idle),
+ 		.cpu = cpu,
+ 		.done = COMPLETION_INITIALIZER_ONSTACK(c_idle.done),
+ 	};
++	INIT_WORK(&c_idle.work, do_fork_idle);
+ 
+ 	/* allocate memory for gdts of secondary cpus. Hotplug is considered */
+ 	if (!cpu_gdt_descr[cpu].address &&
+diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
+index 02f0f61..c28c342 100644
+--- a/arch/x86/kernel/stacktrace.c
++++ b/arch/x86/kernel/stacktrace.c
+@@ -25,6 +25,8 @@ static int save_stack_stack(void *data, char *name)
+ static void save_stack_address(void *data, unsigned long addr, int reliable)
+ {
+ 	struct stack_trace *trace = data;
++	if (!reliable)
++		return;
+ 	if (trace->skip > 0) {
+ 		trace->skip--;
+ 		return;
+@@ -37,6 +39,8 @@ static void
+ save_stack_address_nosched(void *data, unsigned long addr, int reliable)
+ {
+ 	struct stack_trace *trace = (struct stack_trace *)data;
++	if (!reliable)
++		return;
+ 	if (in_sched_functions(addr))
+ 		return;
+ 	if (trace->skip > 0) {
+diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
+index 6dfd4e7..022bcaa 100644
+--- a/arch/x86/kernel/tls.c
++++ b/arch/x86/kernel/tls.c
+@@ -91,7 +91,9 @@ int do_set_thread_area(struct task_struct *p, int idx,
+ 
+ asmlinkage int sys_set_thread_area(struct user_desc __user *u_info)
+ {
+-	return do_set_thread_area(current, -1, u_info, 1);
++	int ret = do_set_thread_area(current, -1, u_info, 1);
++	prevent_tail_call(ret);
++	return ret;
+ }
+ 
+ 
+@@ -139,7 +141,9 @@ int do_get_thread_area(struct task_struct *p, int idx,
+ 
+ asmlinkage int sys_get_thread_area(struct user_desc __user *u_info)
+ {
+-	return do_get_thread_area(current, -1, u_info);
++	int ret = do_get_thread_area(current, -1, u_info);
++	prevent_tail_call(ret);
++	return ret;
+ }
+ 
+ int regset_tls_active(struct task_struct *target,
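
The tls.c hunks stop returning the helper's result directly and instead route it through prevent_tail_call() first. The usual rationale (stated here with some hedging, since the macro's definition is not part of this patch) is that these asmlinkage syscalls receive their arguments on the stack frame set up by the syscall entry code; if gcc turns the helper call into a sibling/tail call, the helper can reuse and clobber that frame, so keeping `ret` live after the call defeats the optimization. Sketch of the pattern, with a hypothetical helper name:

	/* Assumes a prevent_tail_call(ret) macro acting as a compiler
	 * barrier that keeps "ret", and therefore a real call frame,
	 * alive after the helper returns. */
	asmlinkage int sys_example(struct user_desc __user *u_info)
	{
		int ret = do_example_thread_area(current, -1, u_info, 1);

		prevent_tail_call(ret);
		return ret;
	}
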
+diff --git a/arch/x86/kernel/tsc_32.c b/arch/x86/kernel/tsc_32.c
+index 43517e3..f14cfd9 100644
+--- a/arch/x86/kernel/tsc_32.c
++++ b/arch/x86/kernel/tsc_32.c
+@@ -28,7 +28,8 @@ EXPORT_SYMBOL_GPL(tsc_khz);
+ static int __init tsc_setup(char *str)
+ {
+ 	printk(KERN_WARNING "notsc: Kernel compiled with CONFIG_X86_TSC, "
+-				"cannot disable TSC.\n");
++				"cannot disable TSC completely.\n");
++	mark_tsc_unstable("user disabled TSC");
+ 	return 1;
+ }
+ #else
+diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
+index 3f82427..edff4c9 100644
+--- a/arch/x86/kernel/vsyscall_64.c
++++ b/arch/x86/kernel/vsyscall_64.c
+@@ -44,11 +44,6 @@
+ 
+ #define __vsyscall(nr) __attribute__ ((unused,__section__(".vsyscall_" #nr)))
+ #define __syscall_clobber "r11","cx","memory"
+-#define __pa_vsymbol(x)			\
+-	({unsigned long v;  		\
+-	extern char __vsyscall_0; 	\
+-	  asm("" : "=r" (v) : "0" (x)); \
+-	  ((v - VSYSCALL_START) + __pa_symbol(&__vsyscall_0)); })
+ 
+ /*
+  * vsyscall_gtod_data contains data that is :
+@@ -102,7 +97,7 @@ static __always_inline void do_get_tz(struct timezone * tz)
+ static __always_inline int gettimeofday(struct timeval *tv, struct timezone *tz)
+ {
+ 	int ret;
+-	asm volatile("vsysc2: syscall"
++	asm volatile("syscall"
+ 		: "=a" (ret)
+ 		: "0" (__NR_gettimeofday),"D" (tv),"S" (tz)
+ 		: __syscall_clobber );
+@@ -112,7 +107,7 @@ static __always_inline int gettimeofday(struct timeval *tv, struct timezone *tz)
+ static __always_inline long time_syscall(long *t)
+ {
+ 	long secs;
+-	asm volatile("vsysc1: syscall"
++	asm volatile("syscall"
+ 		: "=a" (secs)
+ 		: "0" (__NR_time),"D" (t) : __syscall_clobber);
+ 	return secs;
+@@ -228,42 +223,11 @@ long __vsyscall(3) venosys_1(void)
+ 
+ #ifdef CONFIG_SYSCTL
+ 
+-#define SYSCALL 0x050f
+-#define NOP2    0x9090
+-
+-/*
+- * NOP out syscall in vsyscall page when not needed.
+- */
+-static int vsyscall_sysctl_change(ctl_table *ctl, int write, struct file * filp,
+-                        void __user *buffer, size_t *lenp, loff_t *ppos)
++static int
++vsyscall_sysctl_change(ctl_table *ctl, int write, struct file * filp,
++		       void __user *buffer, size_t *lenp, loff_t *ppos)
+ {
+-	extern u16 vsysc1, vsysc2;
+-	u16 __iomem *map1;
+-	u16 __iomem *map2;
+-	int ret = proc_dointvec(ctl, write, filp, buffer, lenp, ppos);
+-	if (!write)
+-		return ret;
+-	/* gcc has some trouble with __va(__pa()), so just do it this
+-	   way. */
+-	map1 = ioremap(__pa_vsymbol(&vsysc1), 2);
+-	if (!map1)
+-		return -ENOMEM;
+-	map2 = ioremap(__pa_vsymbol(&vsysc2), 2);
+-	if (!map2) {
+-		ret = -ENOMEM;
+-		goto out;
+-	}
+-	if (!vsyscall_gtod_data.sysctl_enabled) {
+-		writew(SYSCALL, map1);
+-		writew(SYSCALL, map2);
+-	} else {
+-		writew(NOP2, map1);
+-		writew(NOP2, map2);
+-	}
+-	iounmap(map2);
+-out:
+-	iounmap(map1);
+-	return ret;
++	return proc_dointvec(ctl, write, filp, buffer, lenp, ppos);
+ }
+ 
+ static ctl_table kernel_table2[] = {
+@@ -279,7 +243,6 @@ static ctl_table kernel_root_table2[] = {
+ 	  .child = kernel_table2 },
+ 	{}
+ };
+-
+ #endif
+ 
+ /* Assume __initcall executes before all user space. Hopefully kmod
+diff --git a/arch/x86/lguest/boot.c b/arch/x86/lguest/boot.c
+index 5afdde4..cccb38a 100644
+--- a/arch/x86/lguest/boot.c
++++ b/arch/x86/lguest/boot.c
+@@ -57,6 +57,7 @@
+ #include <linux/lguest_launcher.h>
+ #include <linux/virtio_console.h>
+ #include <linux/pm.h>
++#include <asm/lguest.h>
+ #include <asm/paravirt.h>
+ #include <asm/param.h>
+ #include <asm/page.h>
+@@ -75,15 +76,6 @@
+  * behaving in simplified but equivalent ways.  In particular, the Guest is the
+  * same kernel as the Host (or at least, built from the same source code). :*/
+ 
+-/* Declarations for definitions in lguest_guest.S */
+-extern char lguest_noirq_start[], lguest_noirq_end[];
+-extern const char lgstart_cli[], lgend_cli[];
+-extern const char lgstart_sti[], lgend_sti[];
+-extern const char lgstart_popf[], lgend_popf[];
+-extern const char lgstart_pushf[], lgend_pushf[];
+-extern const char lgstart_iret[], lgend_iret[];
+-extern void lguest_iret(void);
+-
+ struct lguest_data lguest_data = {
+ 	.hcall_status = { [0 ... LHCALL_RING_SIZE-1] = 0xFF },
+ 	.noirq_start = (u32)lguest_noirq_start,
+@@ -489,7 +481,7 @@ static void lguest_set_pmd(pmd_t *pmdp, pmd_t pmdval)
+ {
+ 	*pmdp = pmdval;
+ 	lazy_hcall(LHCALL_SET_PMD, __pa(pmdp)&PAGE_MASK,
+-		   (__pa(pmdp)&(PAGE_SIZE-1))/4, 0);
++		   (__pa(pmdp)&(PAGE_SIZE-1)), 0);
+ }
+ 
+ /* There are a couple of legacy places where the kernel sets a PTE, but we
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index bb652f5..a02a14f 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -172,8 +172,9 @@ set_pte_phys(unsigned long vaddr, unsigned long phys, pgprot_t prot)
+ }
+ 
+ /*
+- * The head.S code sets up the kernel high mapping from:
+- * __START_KERNEL_map to __START_KERNEL_map + KERNEL_TEXT_SIZE
++ * The head.S code sets up the kernel high mapping:
++ *
++ *   from __START_KERNEL_map to __START_KERNEL_map + size (== _end-_text)
+  *
+  * phys_addr holds the negative offset to the kernel, which is added
+  * to the compile time generated pmds. This results in invalid pmds up
+@@ -515,14 +516,6 @@ void __init mem_init(void)
+ 
+ 	/* clear_bss() already clear the empty_zero_page */
+ 
+-	/* temporary debugging - double check it's true: */
+-	{
+-		int i;
+-
+-		for (i = 0; i < 1024; i++)
+-			WARN_ON_ONCE(empty_zero_page[i]);
+-	}
+-
+ 	reservedpages = 0;
+ 
+ 	/* this will put all low memory onto the freelists */
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+index 882328e..ac3c959 100644
+--- a/arch/x86/mm/ioremap.c
++++ b/arch/x86/mm/ioremap.c
+@@ -162,7 +162,7 @@ static void __iomem *__ioremap(unsigned long phys_addr, unsigned long size,
+ 	area->phys_addr = phys_addr;
+ 	vaddr = (unsigned long) area->addr;
+ 	if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot)) {
+-		remove_vm_area((void *)(vaddr & PAGE_MASK));
++		free_vm_area(area);
+ 		return NULL;
+ 	}
+ 
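
The ioremap.c hunk plugs a small error-path leak: remove_vm_area() only tears down the region and hands back the struct vm_struct, which the old code then dropped, while free_vm_area() removes the region and kfree()s its descriptor in one call. A hedged sketch of the corrected error path, assuming the 2.6.25-era vmalloc API:

	#include <linux/vmalloc.h>
	#include <linux/io.h>

	static void __iomem *map_region(unsigned long phys, unsigned long size,
					pgprot_t prot)
	{
		struct vm_struct *area = get_vm_area(size, VM_IOREMAP);
		unsigned long vaddr;

		if (!area)
			return NULL;
		vaddr = (unsigned long)area->addr;
		if (ioremap_page_range(vaddr, vaddr + size, phys, prot)) {
			free_vm_area(area);	/* unmap and free the descriptor */
			return NULL;
		}
		return (void __iomem *)vaddr;
	}
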
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 464d8fc..7049294 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -26,6 +26,7 @@ struct cpa_data {
+ 	pgprot_t	mask_set;
+ 	pgprot_t	mask_clr;
+ 	int		numpages;
++	int		processed;
+ 	int		flushtlb;
+ 	unsigned long	pfn;
+ };
+@@ -44,6 +45,12 @@ static inline unsigned long highmap_end_pfn(void)
+ 
+ #endif
+ 
++#ifdef CONFIG_DEBUG_PAGEALLOC
++# define debug_pagealloc 1
++#else
++# define debug_pagealloc 0
++#endif
++
+ static inline int
+ within(unsigned long addr, unsigned long start, unsigned long end)
+ {
+@@ -284,8 +291,8 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
+ 	 */
+ 	nextpage_addr = (address + psize) & pmask;
+ 	numpages = (nextpage_addr - address) >> PAGE_SHIFT;
+-	if (numpages < cpa->numpages)
+-		cpa->numpages = numpages;
++	if (numpages < cpa->processed)
++		cpa->processed = numpages;
+ 
+ 	/*
+ 	 * We are safe now. Check whether the new pgprot is the same:
+@@ -312,7 +319,7 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
+ 	 */
+ 	addr = address + PAGE_SIZE;
+ 	pfn++;
+-	for (i = 1; i < cpa->numpages; i++, addr += PAGE_SIZE, pfn++) {
++	for (i = 1; i < cpa->processed; i++, addr += PAGE_SIZE, pfn++) {
+ 		pgprot_t chk_prot = static_protections(new_prot, addr, pfn);
+ 
+ 		if (pgprot_val(chk_prot) != pgprot_val(new_prot))
+@@ -336,7 +343,7 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
+ 	 * that we limited the number of possible pages already to
+ 	 * the number of pages in the large page.
+ 	 */
+-	if (address == (nextpage_addr - psize) && cpa->numpages == numpages) {
++	if (address == (nextpage_addr - psize) && cpa->processed == numpages) {
+ 		/*
+ 		 * The address is aligned and the number of pages
+ 		 * covers the full page.
+@@ -355,45 +362,48 @@ out_unlock:
+ 
+ static LIST_HEAD(page_pool);
+ static unsigned long pool_size, pool_pages, pool_low;
+-static unsigned long pool_used, pool_failed, pool_refill;
++static unsigned long pool_used, pool_failed;
+ 
+-static void cpa_fill_pool(void)
++static void cpa_fill_pool(struct page **ret)
+ {
+-	struct page *p;
+ 	gfp_t gfp = GFP_KERNEL;
++	unsigned long flags;
++	struct page *p;
+ 
+-	/* Do not allocate from interrupt context */
+-	if (in_irq() || irqs_disabled())
+-		return;
+ 	/*
+-	 * Check unlocked. I does not matter when we have one more
+-	 * page in the pool. The bit lock avoids recursive pool
+-	 * allocations:
++	 * Avoid recursion (on debug-pagealloc) and also signal
++	 * our priority to get to these pagetables:
+ 	 */
+-	if (pool_pages >= pool_size || test_and_set_bit_lock(0, &pool_refill))
++	if (current->flags & PF_MEMALLOC)
+ 		return;
++	current->flags |= PF_MEMALLOC;
+ 
+-#ifdef CONFIG_DEBUG_PAGEALLOC
+ 	/*
+-	 * We could do:
+-	 * gfp = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+-	 * but this fails on !PREEMPT kernels
++	 * Allocate atomically from atomic contexts:
+ 	 */
+-	gfp =  GFP_ATOMIC | __GFP_NORETRY | __GFP_NOWARN;
+-#endif
++	if (in_atomic() || irqs_disabled() || debug_pagealloc)
++		gfp =  GFP_ATOMIC | __GFP_NORETRY | __GFP_NOWARN;
+ 
+-	while (pool_pages < pool_size) {
++	while (pool_pages < pool_size || (ret && !*ret)) {
+ 		p = alloc_pages(gfp, 0);
+ 		if (!p) {
+ 			pool_failed++;
+ 			break;
+ 		}
+-		spin_lock_irq(&pgd_lock);
++		/*
++		 * If the call site needs a page right now, provide it:
++		 */
++		if (ret && !*ret) {
++			*ret = p;
++			continue;
++		}
++		spin_lock_irqsave(&pgd_lock, flags);
+ 		list_add(&p->lru, &page_pool);
+ 		pool_pages++;
+-		spin_unlock_irq(&pgd_lock);
++		spin_unlock_irqrestore(&pgd_lock, flags);
+ 	}
+-	clear_bit_unlock(0, &pool_refill);
++
++	current->flags &= ~PF_MEMALLOC;
+ }
+ 
+ #define SHIFT_MB		(20 - PAGE_SHIFT)
+@@ -414,11 +424,15 @@ void __init cpa_init(void)
+ 	 * GiB. Shift MiB to Gib and multiply the result by
+ 	 * POOL_PAGES_PER_GB:
+ 	 */
+-	gb = ((si.totalram >> SHIFT_MB) + ROUND_MB_GB) >> SHIFT_MB_GB;
+-	pool_size = POOL_PAGES_PER_GB * gb;
++	if (debug_pagealloc) {
++		gb = ((si.totalram >> SHIFT_MB) + ROUND_MB_GB) >> SHIFT_MB_GB;
++		pool_size = POOL_PAGES_PER_GB * gb;
++	} else {
++		pool_size = 1;
++	}
+ 	pool_low = pool_size;
+ 
+-	cpa_fill_pool();
++	cpa_fill_pool(NULL);
+ 	printk(KERN_DEBUG
+ 	       "CPA: page pool initialized %lu of %lu pages preallocated\n",
+ 	       pool_pages, pool_size);
+@@ -440,16 +454,20 @@ static int split_large_page(pte_t *kpte, unsigned long address)
+ 	spin_lock_irqsave(&pgd_lock, flags);
+ 	if (list_empty(&page_pool)) {
+ 		spin_unlock_irqrestore(&pgd_lock, flags);
+-		return -ENOMEM;
++		base = NULL;
++		cpa_fill_pool(&base);
++		if (!base)
++			return -ENOMEM;
++		spin_lock_irqsave(&pgd_lock, flags);
++	} else {
++		base = list_first_entry(&page_pool, struct page, lru);
++		list_del(&base->lru);
++		pool_pages--;
++
++		if (pool_pages < pool_low)
++			pool_low = pool_pages;
+ 	}
+ 
+-	base = list_first_entry(&page_pool, struct page, lru);
+-	list_del(&base->lru);
+-	pool_pages--;
+-
+-	if (pool_pages < pool_low)
+-		pool_low = pool_pages;
+-
+ 	/*
+ 	 * Check for races, another CPU might have split this page
+ 	 * up for us already:
+@@ -555,7 +573,7 @@ repeat:
+ 			set_pte_atomic(kpte, new_pte);
+ 			cpa->flushtlb = 1;
+ 		}
+-		cpa->numpages = 1;
++		cpa->processed = 1;
+ 		return 0;
+ 	}
+ 
+@@ -566,7 +584,7 @@ repeat:
+ 	do_split = try_preserve_large_page(kpte, address, cpa);
+ 	/*
+ 	 * When the range fits into the existing large page,
+-	 * return. cp->numpages and cpa->tlbflush have been updated in
++	 * return. cp->processed and cpa->tlbflush have been updated in
+ 	 * try_large_page:
+ 	 */
+ 	if (do_split <= 0)
+@@ -645,7 +663,7 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
+ 		 * Store the remaining nr of pages for the large page
+ 		 * preservation check.
+ 		 */
+-		cpa->numpages = numpages;
++		cpa->numpages = cpa->processed = numpages;
+ 
+ 		ret = __change_page_attr(cpa, checkalias);
+ 		if (ret)
+@@ -662,9 +680,9 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
+ 		 * CPA operation. Either a large page has been
+ 		 * preserved or a single page update happened.
+ 		 */
+-		BUG_ON(cpa->numpages > numpages);
+-		numpages -= cpa->numpages;
+-		cpa->vaddr += cpa->numpages * PAGE_SIZE;
++		BUG_ON(cpa->processed > numpages);
++		numpages -= cpa->processed;
++		cpa->vaddr += cpa->processed * PAGE_SIZE;
+ 	}
+ 	return 0;
+ }
+@@ -734,7 +752,8 @@ static int change_page_attr_set_clr(unsigned long addr, int numpages,
+ 		cpa_flush_all(cache);
+ 
+ out:
+-	cpa_fill_pool();
++	cpa_fill_pool(NULL);
++
+ 	return ret;
+ }
+ 
+@@ -897,7 +916,7 @@ void kernel_map_pages(struct page *page, int numpages, int enable)
+ 	 * Try to refill the page pool here. We can do this only after
+ 	 * the tlb flush.
+ 	 */
+-	cpa_fill_pool();
++	cpa_fill_pool(NULL);
+ }
+ 
+ #ifdef CONFIG_HIBERNATION
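
Two things happen in the pageattr.c hunks: the number of pages actually handled in one pass moves into a new cpa->processed field, so cpa->numpages keeps the count the caller asked for in that pass, and cpa_fill_pool() gains a PF_MEMALLOC-based recursion guard: if the current task is already refilling on our behalf it bails out, and otherwise the flag also signals allocation priority, as the added comment says. A minimal sketch of the guard pattern, with hypothetical pool variables and helper:

	#include <linux/sched.h>
	#include <linux/gfp.h>

	extern unsigned long pool_pages, pool_size;	/* hypothetical pool state */
	extern void add_page_to_pool(struct page *p);	/* hypothetical, takes the lock */

	static void refill_pool(void)
	{
		struct page *p;

		if (current->flags & PF_MEMALLOC)
			return;			/* already refilling: avoid recursion */
		current->flags |= PF_MEMALLOC;

		while (pool_pages < pool_size) {
			p = alloc_pages(GFP_ATOMIC | __GFP_NORETRY | __GFP_NOWARN, 0);
			if (!p)
				break;
			add_page_to_pool(p);
		}

		current->flags &= ~PF_MEMALLOC;
	}
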
+diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
+index f385a4b..0a8f474 100644
+--- a/arch/x86/vdso/Makefile
++++ b/arch/x86/vdso/Makefile
+@@ -50,7 +50,9 @@ obj-$(VDSO64-y)			+= vdso-syms.lds
+ sed-vdsosym := -e 's/^00*/0/' \
+ 	-e 's/^\([0-9a-fA-F]*\) . \(VDSO[a-zA-Z0-9_]*\)$$/\2 = 0x\1;/p'
+ quiet_cmd_vdsosym = VDSOSYM $@
+-      cmd_vdsosym = $(NM) $< | sed -n $(sed-vdsosym) | LC_ALL=C sort > $@
++define cmd_vdsosym
++	$(NM) $< | LC_ALL=C sed -n $(sed-vdsosym) | LC_ALL=C sort > $@
++endef
+ 
+ $(obj)/%-syms.lds: $(obj)/%.so.dbg FORCE
+ 	$(call if_changed,vdsosym)
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 49e5358..8b9ee27 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -153,6 +153,7 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
+ 	if (*ax == 1)
+ 		maskedx = ~((1 << X86_FEATURE_APIC) |  /* disable APIC */
+ 			    (1 << X86_FEATURE_ACPI) |  /* disable ACPI */
++			    (1 << X86_FEATURE_SEP)  |  /* disable SEP */
+ 			    (1 << X86_FEATURE_ACC));   /* thermal monitoring */
+ 
+ 	asm(XEN_EMULATE_PREFIX "cpuid"
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index fbc2435..4fbcce7 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -113,7 +113,7 @@ int atapi_enabled = 1;
+ module_param(atapi_enabled, int, 0444);
+ MODULE_PARM_DESC(atapi_enabled, "Enable discovery of ATAPI devices (0=off, 1=on)");
+ 
+-int atapi_dmadir = 0;
++static int atapi_dmadir = 0;
+ module_param(atapi_dmadir, int, 0444);
+ MODULE_PARM_DESC(atapi_dmadir, "Enable ATAPI DMADIR bridge support (0=off, 1=on)");
+ 
+@@ -6567,6 +6567,8 @@ int ata_host_suspend(struct ata_host *host, pm_message_t mesg)
+ 	ata_lpm_enable(host);
+ 
+ 	rc = ata_host_request_pm(host, mesg, 0, ATA_EHI_QUIET, 1);
++	if (rc == 0)
++		host->dev->power.power_state = mesg;
+ 	return rc;
+ }
+ 
+@@ -6585,6 +6587,7 @@ void ata_host_resume(struct ata_host *host)
+ {
+ 	ata_host_request_pm(host, PMSG_ON, ATA_EH_SOFTRESET,
+ 			    ATA_EHI_NO_AUTOPSY | ATA_EHI_QUIET, 0);
++	host->dev->power.power_state = PMSG_ON;
+ 
+ 	/* reenable link pm */
+ 	ata_lpm_disable(host);
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 0562b0a..7b1f1ee 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -1694,12 +1694,17 @@ void ata_scsi_rbuf_fill(struct ata_scsi_args *args,
+ 	u8 *rbuf;
+ 	unsigned int buflen, rc;
+ 	struct scsi_cmnd *cmd = args->cmd;
++	unsigned long flags;
++
++	local_irq_save(flags);
+ 
+ 	buflen = ata_scsi_rbuf_get(cmd, &rbuf);
+ 	memset(rbuf, 0, buflen);
+ 	rc = actor(args, rbuf, buflen);
+ 	ata_scsi_rbuf_put(cmd, rbuf);
+ 
++	local_irq_restore(flags);
++
+ 	if (rc == 0)
+ 		cmd->result = SAM_STAT_GOOD;
+ 	args->done(cmd);
+@@ -2473,6 +2478,9 @@ static void atapi_qc_complete(struct ata_queued_cmd *qc)
+ 		if ((scsicmd[0] == INQUIRY) && ((scsicmd[1] & 0x03) == 0)) {
+ 			u8 *buf = NULL;
+ 			unsigned int buflen;
++			unsigned long flags;
++
++			local_irq_save(flags);
+ 
+ 			buflen = ata_scsi_rbuf_get(cmd, &buf);
+ 
+@@ -2490,6 +2498,8 @@ static void atapi_qc_complete(struct ata_queued_cmd *qc)
+ 			}
+ 
+ 			ata_scsi_rbuf_put(cmd, buf);
++
++			local_irq_restore(flags);
+ 		}
+ 
+ 		cmd->result = SAM_STAT_GOOD;
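
The libata-scsi.c hunks wrap the ata_scsi_rbuf_get()/ata_scsi_rbuf_put() sections in local_irq_save()/local_irq_restore(). The most plausible reason (hedged, as the patch itself does not spell it out) is that the response buffer may be mapped with an interrupt-class atomic kmap slot (KM_IRQ0 in kernels of this era), and such slots may only be used with local interrupts off, otherwise an interrupt taking the same slot corrupts the mapping. Sketch of the pattern under that assumption:

	#include <linux/highmem.h>
	#include <linux/irqflags.h>
	#include <linux/string.h>

	static void fill_response(struct page *page, unsigned int len)
	{
		unsigned long flags;
		void *buf;

		local_irq_save(flags);		/* KM_IRQ0 mappings need irqs off */
		buf = kmap_atomic(page, KM_IRQ0);
		memset(buf, 0, len);		/* stand-in for the actor/INQUIRY fixups */
		kunmap_atomic(buf, KM_IRQ0);
		local_irq_restore(flags);
	}
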
+diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
+index 6036ded..aa884f7 100644
+--- a/drivers/ata/libata.h
++++ b/drivers/ata/libata.h
+@@ -56,7 +56,6 @@ enum {
+ extern unsigned int ata_print_id;
+ extern struct workqueue_struct *ata_aux_wq;
+ extern int atapi_enabled;
+-extern int atapi_dmadir;
+ extern int atapi_passthru16;
+ extern int libata_fua;
+ extern int libata_noacpi;
+diff --git a/drivers/ata/sata_svw.c b/drivers/ata/sata_svw.c
+index 69f651e..840d1c4 100644
+--- a/drivers/ata/sata_svw.c
++++ b/drivers/ata/sata_svw.c
+@@ -45,6 +45,8 @@
+ #include <linux/interrupt.h>
+ #include <linux/device.h>
+ #include <scsi/scsi_host.h>
++#include <scsi/scsi_cmnd.h>
++#include <scsi/scsi.h>
+ #include <linux/libata.h>
+ 
+ #ifdef CONFIG_PPC_OF
+@@ -59,6 +61,7 @@ enum {
+ 	/* ap->flags bits */
+ 	K2_FLAG_SATA_8_PORTS		= (1 << 24),
+ 	K2_FLAG_NO_ATAPI_DMA		= (1 << 25),
++	K2_FLAG_BAR_POS_3			= (1 << 26),
+ 
+ 	/* Taskfile registers offsets */
+ 	K2_SATA_TF_CMD_OFFSET		= 0x00,
+@@ -88,8 +91,10 @@ enum {
+ 	/* Port stride */
+ 	K2_SATA_PORT_OFFSET		= 0x100,
+ 
+-	board_svw4			= 0,
+-	board_svw8			= 1,
++	chip_svw4			= 0,
++	chip_svw8			= 1,
++	chip_svw42			= 2,	/* bar 3 */
++	chip_svw43			= 3,	/* bar 5 */
+ };
+ 
+ static u8 k2_stat_check_status(struct ata_port *ap);
+@@ -97,10 +102,25 @@ static u8 k2_stat_check_status(struct ata_port *ap);
+ 
+ static int k2_sata_check_atapi_dma(struct ata_queued_cmd *qc)
+ {
++	u8 cmnd = qc->scsicmd->cmnd[0];
++
+ 	if (qc->ap->flags & K2_FLAG_NO_ATAPI_DMA)
+ 		return -1;	/* ATAPI DMA not supported */
++	else {
++		switch (cmnd) {
++		case READ_10:
++		case READ_12:
++		case READ_16:
++		case WRITE_10:
++		case WRITE_12:
++		case WRITE_16:
++			return 0;
++
++		default:
++			return -1;
++		}
+ 
+-	return 0;
++	}
+ }
+ 
+ static int k2_sata_scr_read(struct ata_port *ap, unsigned int sc_reg, u32 *val)
+@@ -354,7 +374,7 @@ static const struct ata_port_operations k2_sata_ops = {
+ };
+ 
+ static const struct ata_port_info k2_port_info[] = {
+-	/* board_svw4 */
++	/* chip_svw4 */
+ 	{
+ 		.flags		= ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
+ 				  ATA_FLAG_MMIO | K2_FLAG_NO_ATAPI_DMA,
+@@ -363,7 +383,7 @@ static const struct ata_port_info k2_port_info[] = {
+ 		.udma_mask	= ATA_UDMA6,
+ 		.port_ops	= &k2_sata_ops,
+ 	},
+-	/* board_svw8 */
++	/* chip_svw8 */
+ 	{
+ 		.flags		= ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
+ 				  ATA_FLAG_MMIO | K2_FLAG_NO_ATAPI_DMA |
+@@ -373,6 +393,24 @@ static const struct ata_port_info k2_port_info[] = {
+ 		.udma_mask	= ATA_UDMA6,
+ 		.port_ops	= &k2_sata_ops,
+ 	},
++	/* chip_svw42 */
++	{
++		.flags		= ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
++				  ATA_FLAG_MMIO | K2_FLAG_BAR_POS_3,
++		.pio_mask	= 0x1f,
++		.mwdma_mask	= 0x07,
++		.udma_mask	= ATA_UDMA6,
++		.port_ops	= &k2_sata_ops,
++	},
++	/* chip_svw43 */
++	{
++		.flags		= ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
++				  ATA_FLAG_MMIO,
++		.pio_mask	= 0x1f,
++		.mwdma_mask	= 0x07,
++		.udma_mask	= ATA_UDMA6,
++		.port_ops	= &k2_sata_ops,
++	},
+ };
+ 
+ static void k2_sata_setup_port(struct ata_ioports *port, void __iomem *base)
+@@ -402,7 +440,7 @@ static int k2_sata_init_one(struct pci_dev *pdev, const struct pci_device_id *en
+ 		{ &k2_port_info[ent->driver_data], NULL };
+ 	struct ata_host *host;
+ 	void __iomem *mmio_base;
+-	int n_ports, i, rc;
++	int n_ports, i, rc, bar_pos;
+ 
+ 	if (!printed_version++)
+ 		dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n");
+@@ -416,6 +454,9 @@ static int k2_sata_init_one(struct pci_dev *pdev, const struct pci_device_id *en
+ 	if (!host)
+ 		return -ENOMEM;
+ 
++	bar_pos = 5;
++	if (ppi[0]->flags & K2_FLAG_BAR_POS_3)
++		bar_pos = 3;
+ 	/*
+ 	 * If this driver happens to only be useful on Apple's K2, then
+ 	 * we should check that here as it has a normal Serverworks ID
+@@ -428,17 +469,23 @@ static int k2_sata_init_one(struct pci_dev *pdev, const struct pci_device_id *en
+ 	 * Check if we have resources mapped at all (second function may
+ 	 * have been disabled by firmware)
+ 	 */
+-	if (pci_resource_len(pdev, 5) == 0)
++	if (pci_resource_len(pdev, bar_pos) == 0) {
++		/* In IDE mode we need to pin the device to ensure that
++			pcim_release does not clear the busmaster bit in config
++			space, clearing causes busmaster DMA to fail on
++			ports 3 & 4 */
++		pcim_pin_device(pdev);
+ 		return -ENODEV;
++	}
+ 
+ 	/* Request and iomap PCI regions */
+-	rc = pcim_iomap_regions(pdev, 1 << 5, DRV_NAME);
++	rc = pcim_iomap_regions(pdev, 1 << bar_pos, DRV_NAME);
+ 	if (rc == -EBUSY)
+ 		pcim_pin_device(pdev);
+ 	if (rc)
+ 		return rc;
+ 	host->iomap = pcim_iomap_table(pdev);
+-	mmio_base = host->iomap[5];
++	mmio_base = host->iomap[bar_pos];
+ 
+ 	/* different controllers have different number of ports - currently 4 or 8 */
+ 	/* All ports are on the same function. Multi-function device is no
+@@ -483,11 +530,13 @@ static int k2_sata_init_one(struct pci_dev *pdev, const struct pci_device_id *en
+  * controller
+  * */
+ static const struct pci_device_id k2_sata_pci_tbl[] = {
+-	{ PCI_VDEVICE(SERVERWORKS, 0x0240), board_svw4 },
+-	{ PCI_VDEVICE(SERVERWORKS, 0x0241), board_svw4 },
+-	{ PCI_VDEVICE(SERVERWORKS, 0x0242), board_svw8 },
+-	{ PCI_VDEVICE(SERVERWORKS, 0x024a), board_svw4 },
+-	{ PCI_VDEVICE(SERVERWORKS, 0x024b), board_svw4 },
++	{ PCI_VDEVICE(SERVERWORKS, 0x0240), chip_svw4 },
++	{ PCI_VDEVICE(SERVERWORKS, 0x0241), chip_svw4 },
++	{ PCI_VDEVICE(SERVERWORKS, 0x0242), chip_svw8 },
++	{ PCI_VDEVICE(SERVERWORKS, 0x024a), chip_svw4 },
++	{ PCI_VDEVICE(SERVERWORKS, 0x024b), chip_svw4 },
++	{ PCI_VDEVICE(SERVERWORKS, 0x0410), chip_svw42 },
++	{ PCI_VDEVICE(SERVERWORKS, 0x0411), chip_svw43 },
+ 
+ 	{ }
+ };
+diff --git a/drivers/char/rtc.c b/drivers/char/rtc.c
+index 78b151c..5c3142b 100644
+--- a/drivers/char/rtc.c
++++ b/drivers/char/rtc.c
+@@ -110,8 +110,8 @@ static int rtc_has_irq = 1;
+ #define hpet_set_rtc_irq_bit(arg)		0
+ #define hpet_rtc_timer_init()			do { } while (0)
+ #define hpet_rtc_dropped_irq()			0
+-#define hpet_register_irq_handler(h)		0
+-#define hpet_unregister_irq_handler(h)		0
++#define hpet_register_irq_handler(h)		({ 0; })
++#define hpet_unregister_irq_handler(h)		({ 0; })
+ #ifdef RTC_IRQ
+ static irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id)
+ {
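
The rtc.c hunk turns the no-HPET stubs from a bare `0` into GCC statement expressions `({ 0; })`. Both evaluate to 0; the usual motivation for the statement-expression form in kernel stub macros is that a caller which ignores the "return value" then compiles without unused-value / statement-with-no-effect warnings, while callers that do use the value still get 0. A small userspace illustration of the difference:

	#define stub_plain(h)	0
	#define stub_stmt(h)	({ 0; })

	int main(void)
	{
		stub_plain(0);		/* with -Wall: "statement with no effect" */
		stub_stmt(0);		/* accepted silently                      */
		return stub_stmt(0);	/* still usable as a value                */
	}
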
+diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
+index fea2d3e..85e2ba7 100644
+--- a/drivers/connector/connector.c
++++ b/drivers/connector/connector.c
+@@ -47,7 +47,7 @@ static LIST_HEAD(notify_list);
+ 
+ static struct cn_dev cdev;
+ 
+-int cn_already_initialized = 0;
++static int cn_already_initialized;
+ 
+ /*
+  * msg->seq and msg->ack are used to determine message genealogy.
+diff --git a/drivers/firewire/fw-card.c b/drivers/firewire/fw-card.c
+index 3e97199..a034627 100644
+--- a/drivers/firewire/fw-card.c
++++ b/drivers/firewire/fw-card.c
+@@ -18,6 +18,7 @@
+ 
+ #include <linux/module.h>
+ #include <linux/errno.h>
++#include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/mutex.h>
+ #include <linux/crc-itu-t.h>
+@@ -214,17 +215,29 @@ static void
+ fw_card_bm_work(struct work_struct *work)
+ {
+ 	struct fw_card *card = container_of(work, struct fw_card, work.work);
+-	struct fw_device *root;
++	struct fw_device *root_device;
++	struct fw_node *root_node, *local_node;
+ 	struct bm_data bmd;
+ 	unsigned long flags;
+ 	int root_id, new_root_id, irm_id, gap_count, generation, grace;
+ 	int do_reset = 0;
+ 
+ 	spin_lock_irqsave(&card->lock, flags);
++	local_node = card->local_node;
++	root_node  = card->root_node;
++
++	if (local_node == NULL) {
++		spin_unlock_irqrestore(&card->lock, flags);
++		return;
++	}
++	fw_node_get(local_node);
++	fw_node_get(root_node);
+ 
+ 	generation = card->generation;
+-	root = card->root_node->data;
+-	root_id = card->root_node->node_id;
++	root_device = root_node->data;
++	if (root_device)
++		fw_device_get(root_device);
++	root_id = root_node->node_id;
+ 	grace = time_after(jiffies, card->reset_jiffies + DIV_ROUND_UP(HZ, 10));
+ 
+ 	if (card->bm_generation + 1 == generation ||
+@@ -243,14 +256,14 @@ fw_card_bm_work(struct work_struct *work)
+ 
+ 		irm_id = card->irm_node->node_id;
+ 		if (!card->irm_node->link_on) {
+-			new_root_id = card->local_node->node_id;
++			new_root_id = local_node->node_id;
+ 			fw_notify("IRM has link off, making local node (%02x) root.\n",
+ 				  new_root_id);
+ 			goto pick_me;
+ 		}
+ 
+ 		bmd.lock.arg = cpu_to_be32(0x3f);
+-		bmd.lock.data = cpu_to_be32(card->local_node->node_id);
++		bmd.lock.data = cpu_to_be32(local_node->node_id);
+ 
+ 		spin_unlock_irqrestore(&card->lock, flags);
+ 
+@@ -267,12 +280,12 @@ fw_card_bm_work(struct work_struct *work)
+ 			 * Another bus reset happened. Just return,
+ 			 * the BM work has been rescheduled.
+ 			 */
+-			return;
++			goto out;
+ 		}
+ 
+ 		if (bmd.rcode == RCODE_COMPLETE && bmd.old != 0x3f)
+ 			/* Somebody else is BM, let them do the work. */
+-			return;
++			goto out;
+ 
+ 		spin_lock_irqsave(&card->lock, flags);
+ 		if (bmd.rcode != RCODE_COMPLETE) {
+@@ -282,7 +295,7 @@ fw_card_bm_work(struct work_struct *work)
+ 			 * do a bus reset and pick the local node as
+ 			 * root, and thus, IRM.
+ 			 */
+-			new_root_id = card->local_node->node_id;
++			new_root_id = local_node->node_id;
+ 			fw_notify("BM lock failed, making local node (%02x) root.\n",
+ 				  new_root_id);
+ 			goto pick_me;
+@@ -295,7 +308,7 @@ fw_card_bm_work(struct work_struct *work)
+ 		 */
+ 		spin_unlock_irqrestore(&card->lock, flags);
+ 		schedule_delayed_work(&card->work, DIV_ROUND_UP(HZ, 10));
+-		return;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -305,20 +318,20 @@ fw_card_bm_work(struct work_struct *work)
+ 	 */
+ 	card->bm_generation = generation;
+ 
+-	if (root == NULL) {
++	if (root_device == NULL) {
+ 		/*
+ 		 * Either link_on is false, or we failed to read the
+ 		 * config rom.  In either case, pick another root.
+ 		 */
+-		new_root_id = card->local_node->node_id;
+-	} else if (atomic_read(&root->state) != FW_DEVICE_RUNNING) {
++		new_root_id = local_node->node_id;
++	} else if (atomic_read(&root_device->state) != FW_DEVICE_RUNNING) {
+ 		/*
+ 		 * If we haven't probed this device yet, bail out now
+ 		 * and let's try again once that's done.
+ 		 */
+ 		spin_unlock_irqrestore(&card->lock, flags);
+-		return;
+-	} else if (root->config_rom[2] & BIB_CMC) {
++		goto out;
++	} else if (root_device->config_rom[2] & BIB_CMC) {
+ 		/*
+ 		 * FIXME: I suppose we should set the cmstr bit in the
+ 		 * STATE_CLEAR register of this node, as described in
+@@ -332,7 +345,7 @@ fw_card_bm_work(struct work_struct *work)
+ 		 * successfully read the config rom, but it's not
+ 		 * cycle master capable.
+ 		 */
+-		new_root_id = card->local_node->node_id;
++		new_root_id = local_node->node_id;
+ 	}
+ 
+  pick_me:
+@@ -341,8 +354,8 @@ fw_card_bm_work(struct work_struct *work)
+ 	 * the typically much larger 1394b beta repeater delays though.
+ 	 */
+ 	if (!card->beta_repeaters_present &&
+-	    card->root_node->max_hops < ARRAY_SIZE(gap_count_table))
+-		gap_count = gap_count_table[card->root_node->max_hops];
++	    root_node->max_hops < ARRAY_SIZE(gap_count_table))
++		gap_count = gap_count_table[root_node->max_hops];
+ 	else
+ 		gap_count = 63;
+ 
+@@ -364,6 +377,11 @@ fw_card_bm_work(struct work_struct *work)
+ 		fw_send_phy_config(card, new_root_id, generation, gap_count);
+ 		fw_core_initiate_bus_reset(card, 1);
+ 	}
++ out:
++	if (root_device)
++		fw_device_put(root_device);
++	fw_node_put(root_node);
++	fw_node_put(local_node);
+ }
+ 
+ static void
+@@ -381,6 +399,7 @@ fw_card_initialize(struct fw_card *card, const struct fw_card_driver *driver,
+ 	static atomic_t index = ATOMIC_INIT(-1);
+ 
+ 	kref_init(&card->kref);
++	atomic_set(&card->device_count, 0);
+ 	card->index = atomic_inc_return(&index);
+ 	card->driver = driver;
+ 	card->device = device;
+@@ -511,8 +530,14 @@ fw_core_remove_card(struct fw_card *card)
+ 	card->driver = &dummy_driver;
+ 
+ 	fw_destroy_nodes(card);
+-	flush_scheduled_work();
++	/*
++	 * Wait for all device workqueue jobs to finish.  Otherwise the
++	 * firewire-core module could be unloaded before the jobs ran.
++	 */
++	while (atomic_read(&card->device_count) > 0)
++		msleep(100);
+ 
++	cancel_delayed_work_sync(&card->work);
+ 	fw_flush_transactions(card);
+ 	del_timer_sync(&card->flush_timer);
+ 
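
A side note on the fw-card.c hunks above: flush_scheduled_work() is replaced by polling the new card->device_count and by cancel_delayed_work_sync(), so firewire-core cannot be unloaded while per-device jobs are still pending on the shared workqueue. The following user-space sketch is not part of the patch; the names, the thread count and the 100 ms poll interval are only illustrative, but it shows the same wait-for-counter idiom with POSIX threads and C11 atomics.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int device_count;         /* outstanding per-device jobs */

    static void *device_job(void *arg)
    {
            usleep(300 * 1000);             /* pretend to probe or shut down a device */
            printf("device job %ld finished\n", (long)arg);
            atomic_fetch_sub(&device_count, 1);
            return NULL;
    }

    int main(void)
    {
            pthread_t threads[3];
            long i;

            for (i = 0; i < 3; i++) {
                    atomic_fetch_add(&device_count, 1);   /* fw_device_init analogue */
                    pthread_create(&threads[i], NULL, device_job, (void *)i);
            }

            /* remove-card analogue: wait for the jobs instead of flushing a shared queue */
            while (atomic_load(&device_count) > 0)
                    usleep(100 * 1000);                   /* msleep(100) in the patch */

            for (i = 0; i < 3; i++)
                    pthread_join(threads[i], NULL);
            puts("no jobs left, module could be unloaded now");
            return 0;
    }

Built with something like "cc -pthread sketch.c", the main thread only proceeds once every job has dropped its count, which is exactly the property the removal path needs.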
+diff --git a/drivers/firewire/fw-cdev.c b/drivers/firewire/fw-cdev.c
+index 7e73cba..46bc197 100644
+--- a/drivers/firewire/fw-cdev.c
++++ b/drivers/firewire/fw-cdev.c
+@@ -109,15 +109,17 @@ static int fw_device_op_open(struct inode *inode, struct file *file)
+ 	struct client *client;
+ 	unsigned long flags;
+ 
+-	device = fw_device_from_devt(inode->i_rdev);
++	device = fw_device_get_by_devt(inode->i_rdev);
+ 	if (device == NULL)
+ 		return -ENODEV;
+ 
+ 	client = kzalloc(sizeof(*client), GFP_KERNEL);
+-	if (client == NULL)
++	if (client == NULL) {
++		fw_device_put(device);
+ 		return -ENOMEM;
++	}
+ 
+-	client->device = fw_device_get(device);
++	client->device = device;
+ 	INIT_LIST_HEAD(&client->event_list);
+ 	INIT_LIST_HEAD(&client->resource_list);
+ 	spin_lock_init(&client->lock);
+@@ -644,6 +646,10 @@ static int ioctl_create_iso_context(struct client *client, void *buffer)
+ 	struct fw_cdev_create_iso_context *request = buffer;
+ 	struct fw_iso_context *context;
+ 
++	/* We only support one context at this time. */
++	if (client->iso_context != NULL)
++		return -EBUSY;
++
+ 	if (request->channel > 63)
+ 		return -EINVAL;
+ 
+@@ -790,8 +796,9 @@ static int ioctl_start_iso(struct client *client, void *buffer)
+ {
+ 	struct fw_cdev_start_iso *request = buffer;
+ 
+-	if (request->handle != 0)
++	if (client->iso_context == NULL || request->handle != 0)
+ 		return -EINVAL;
++
+ 	if (client->iso_context->type == FW_ISO_CONTEXT_RECEIVE) {
+ 		if (request->tags == 0 || request->tags > 15)
+ 			return -EINVAL;
+@@ -808,7 +815,7 @@ static int ioctl_stop_iso(struct client *client, void *buffer)
+ {
+ 	struct fw_cdev_stop_iso *request = buffer;
+ 
+-	if (request->handle != 0)
++	if (client->iso_context == NULL || request->handle != 0)
+ 		return -EINVAL;
+ 
+ 	return fw_iso_context_stop(client->iso_context);
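
The fw-cdev.c open path now calls fw_device_get_by_devt(), which takes the device reference while the lookup lock is still held; in exchange, every later error path has to drop that reference explicitly (see the kzalloc failure branch above). Below is a condensed user-space sketch of that lookup-with-reference pattern, with a plain mutex standing in for the rwsem and entirely hypothetical object/table names.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct object {
            int refcount;               /* protected by table_lock in this sketch */
            int id;
    };

    static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct object *table[4];

    static struct object *object_get_by_id(int id)
    {
            struct object *obj = NULL;

            pthread_mutex_lock(&table_lock);
            if (id >= 0 && id < 4 && table[id]) {
                    obj = table[id];
                    obj->refcount++;    /* take the reference under the lock */
            }
            pthread_mutex_unlock(&table_lock);
            return obj;
    }

    static void object_put(struct object *obj)
    {
            pthread_mutex_lock(&table_lock);
            if (--obj->refcount == 0) {
                    table[obj->id] = NULL;
                    free(obj);
            }
            pthread_mutex_unlock(&table_lock);
    }

    static int open_object(int id)
    {
            struct object *obj = object_get_by_id(id);
            void *client;

            if (!obj)
                    return -1;          /* -ENODEV analogue */
            client = malloc(64);
            if (!client) {
                    object_put(obj);    /* every error path drops the reference */
                    return -1;          /* -ENOMEM analogue */
            }
            printf("opened object %d\n", obj->id);
            free(client);
            object_put(obj);
            return 0;
    }

    int main(void)
    {
            struct object *obj = calloc(1, sizeof(*obj));

            obj->refcount = 1;          /* the table's own reference */
            obj->id = 2;
            table[2] = obj;
            open_object(2);
            open_object(3);             /* missing entry, fails cleanly */
            object_put(obj);
            return 0;
    }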
+diff --git a/drivers/firewire/fw-device.c b/drivers/firewire/fw-device.c
+index de9066e..870125a 100644
+--- a/drivers/firewire/fw-device.c
++++ b/drivers/firewire/fw-device.c
+@@ -150,21 +150,10 @@ struct bus_type fw_bus_type = {
+ };
+ EXPORT_SYMBOL(fw_bus_type);
+ 
+-struct fw_device *fw_device_get(struct fw_device *device)
+-{
+-	get_device(&device->device);
+-
+-	return device;
+-}
+-
+-void fw_device_put(struct fw_device *device)
+-{
+-	put_device(&device->device);
+-}
+-
+ static void fw_device_release(struct device *dev)
+ {
+ 	struct fw_device *device = fw_device(dev);
++	struct fw_card *card = device->card;
+ 	unsigned long flags;
+ 
+ 	/*
+@@ -176,9 +165,9 @@ static void fw_device_release(struct device *dev)
+ 	spin_unlock_irqrestore(&device->card->lock, flags);
+ 
+ 	fw_node_put(device->node);
+-	fw_card_put(device->card);
+ 	kfree(device->config_rom);
+ 	kfree(device);
++	atomic_dec(&card->device_count);
+ }
+ 
+ int fw_device_enable_phys_dma(struct fw_device *device)
+@@ -358,12 +347,9 @@ static ssize_t
+ guid_show(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ 	struct fw_device *device = fw_device(dev);
+-	u64 guid;
+-
+-	guid = ((u64)device->config_rom[3] << 32) | device->config_rom[4];
+ 
+-	return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
+-			(unsigned long long)guid);
++	return snprintf(buf, PAGE_SIZE, "0x%08x%08x\n",
++			device->config_rom[3], device->config_rom[4]);
+ }
+ 
+ static struct device_attribute fw_device_attributes[] = {
+@@ -610,12 +596,14 @@ static DECLARE_RWSEM(idr_rwsem);
+ static DEFINE_IDR(fw_device_idr);
+ int fw_cdev_major;
+ 
+-struct fw_device *fw_device_from_devt(dev_t devt)
++struct fw_device *fw_device_get_by_devt(dev_t devt)
+ {
+ 	struct fw_device *device;
+ 
+ 	down_read(&idr_rwsem);
+ 	device = idr_find(&fw_device_idr, MINOR(devt));
++	if (device)
++		fw_device_get(device);
+ 	up_read(&idr_rwsem);
+ 
+ 	return device;
+@@ -627,13 +615,14 @@ static void fw_device_shutdown(struct work_struct *work)
+ 		container_of(work, struct fw_device, work.work);
+ 	int minor = MINOR(device->device.devt);
+ 
+-	down_write(&idr_rwsem);
+-	idr_remove(&fw_device_idr, minor);
+-	up_write(&idr_rwsem);
+-
+ 	fw_device_cdev_remove(device);
+ 	device_for_each_child(&device->device, NULL, shutdown_unit);
+ 	device_unregister(&device->device);
++
++	down_write(&idr_rwsem);
++	idr_remove(&fw_device_idr, minor);
++	up_write(&idr_rwsem);
++	fw_device_put(device);
+ }
+ 
+ static struct device_type fw_device_type = {
+@@ -668,7 +657,8 @@ static void fw_device_init(struct work_struct *work)
+ 	 */
+ 
+ 	if (read_bus_info_block(device, device->generation) < 0) {
+-		if (device->config_rom_retries < MAX_RETRIES) {
++		if (device->config_rom_retries < MAX_RETRIES &&
++		    atomic_read(&device->state) == FW_DEVICE_INITIALIZING) {
+ 			device->config_rom_retries++;
+ 			schedule_delayed_work(&device->work, RETRY_DELAY);
+ 		} else {
+@@ -682,10 +672,13 @@ static void fw_device_init(struct work_struct *work)
+ 	}
+ 
+ 	err = -ENOMEM;
++
++	fw_device_get(device);
+ 	down_write(&idr_rwsem);
+ 	if (idr_pre_get(&fw_device_idr, GFP_KERNEL))
+ 		err = idr_get_new(&fw_device_idr, device, &minor);
+ 	up_write(&idr_rwsem);
++
+ 	if (err < 0)
+ 		goto error;
+ 
+@@ -717,13 +710,22 @@ static void fw_device_init(struct work_struct *work)
+ 	 */
+ 	if (atomic_cmpxchg(&device->state,
+ 		    FW_DEVICE_INITIALIZING,
+-		    FW_DEVICE_RUNNING) == FW_DEVICE_SHUTDOWN)
++		    FW_DEVICE_RUNNING) == FW_DEVICE_SHUTDOWN) {
+ 		fw_device_shutdown(&device->work.work);
+-	else
+-		fw_notify("created new fw device %s "
+-			  "(%d config rom retries, S%d00)\n",
+-			  device->device.bus_id, device->config_rom_retries,
+-			  1 << device->max_speed);
++	} else {
++		if (device->config_rom_retries)
++			fw_notify("created device %s: GUID %08x%08x, S%d00, "
++				  "%d config ROM retries\n",
++				  device->device.bus_id,
++				  device->config_rom[3], device->config_rom[4],
++				  1 << device->max_speed,
++				  device->config_rom_retries);
++		else
++			fw_notify("created device %s: GUID %08x%08x, S%d00\n",
++				  device->device.bus_id,
++				  device->config_rom[3], device->config_rom[4],
++				  1 << device->max_speed);
++	}
+ 
+ 	/*
+ 	 * Reschedule the IRM work if we just finished reading the
+@@ -741,7 +743,9 @@ static void fw_device_init(struct work_struct *work)
+ 	idr_remove(&fw_device_idr, minor);
+ 	up_write(&idr_rwsem);
+  error:
+-	put_device(&device->device);
++	fw_device_put(device);		/* fw_device_idr's reference */
++
++	put_device(&device->device);	/* our reference */
+ }
+ 
+ static int update_unit(struct device *dev, void *data)
+@@ -791,7 +795,8 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event)
+ 		 */
+ 		device_initialize(&device->device);
+ 		atomic_set(&device->state, FW_DEVICE_INITIALIZING);
+-		device->card = fw_card_get(card);
++		atomic_inc(&card->device_count);
++		device->card = card;
+ 		device->node = fw_node_get(node);
+ 		device->node_id = node->node_id;
+ 		device->generation = card->generation;
+diff --git a/drivers/firewire/fw-device.h b/drivers/firewire/fw-device.h
+index 0854fe2..78ecd39 100644
+--- a/drivers/firewire/fw-device.h
++++ b/drivers/firewire/fw-device.h
+@@ -76,14 +76,26 @@ fw_device_is_shutdown(struct fw_device *device)
+ 	return atomic_read(&device->state) == FW_DEVICE_SHUTDOWN;
+ }
+ 
+-struct fw_device *fw_device_get(struct fw_device *device);
+-void fw_device_put(struct fw_device *device);
++static inline struct fw_device *
++fw_device_get(struct fw_device *device)
++{
++	get_device(&device->device);
++
++	return device;
++}
++
++static inline void
++fw_device_put(struct fw_device *device)
++{
++	put_device(&device->device);
++}
++
++struct fw_device *fw_device_get_by_devt(dev_t devt);
+ int fw_device_enable_phys_dma(struct fw_device *device);
+ 
+ void fw_device_cdev_update(struct fw_device *device);
+ void fw_device_cdev_remove(struct fw_device *device);
+ 
+-struct fw_device *fw_device_from_devt(dev_t devt);
+ extern int fw_cdev_major;
+ 
+ struct fw_unit {
+diff --git a/drivers/firewire/fw-sbp2.c b/drivers/firewire/fw-sbp2.c
+index 19ece9b..03069a4 100644
+--- a/drivers/firewire/fw-sbp2.c
++++ b/drivers/firewire/fw-sbp2.c
+@@ -28,14 +28,15 @@
+  * and many others.
+  */
+ 
++#include <linux/blkdev.h>
++#include <linux/delay.h>
++#include <linux/device.h>
++#include <linux/dma-mapping.h>
+ #include <linux/kernel.h>
++#include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
+-#include <linux/mod_devicetable.h>
+-#include <linux/device.h>
+ #include <linux/scatterlist.h>
+-#include <linux/dma-mapping.h>
+-#include <linux/blkdev.h>
+ #include <linux/string.h>
+ #include <linux/stringify.h>
+ #include <linux/timer.h>
+@@ -47,9 +48,9 @@
+ #include <scsi/scsi_device.h>
+ #include <scsi/scsi_host.h>
+ 
+-#include "fw-transaction.h"
+-#include "fw-topology.h"
+ #include "fw-device.h"
++#include "fw-topology.h"
++#include "fw-transaction.h"
+ 
+ /*
+  * So far only bridges from Oxford Semiconductor are known to support
+@@ -82,6 +83,9 @@ MODULE_PARM_DESC(exclusive_login, "Exclusive login to sbp2 device "
+  *   Avoids access beyond actual disk limits on devices with an off-by-one bug.
+  *   Don't use this with devices which don't have this bug.
+  *
++ * - delay inquiry
++ *   Wait extra SBP2_INQUIRY_DELAY seconds after login before SCSI inquiry.
++ *
+  * - override internal blacklist
+  *   Instead of adding to the built-in blacklist, use only the workarounds
+  *   specified in the module load parameter.
+@@ -91,6 +95,8 @@ MODULE_PARM_DESC(exclusive_login, "Exclusive login to sbp2 device "
+ #define SBP2_WORKAROUND_INQUIRY_36	0x2
+ #define SBP2_WORKAROUND_MODE_SENSE_8	0x4
+ #define SBP2_WORKAROUND_FIX_CAPACITY	0x8
++#define SBP2_WORKAROUND_DELAY_INQUIRY	0x10
++#define SBP2_INQUIRY_DELAY		12
+ #define SBP2_WORKAROUND_OVERRIDE	0x100
+ 
+ static int sbp2_param_workarounds;
+@@ -100,6 +106,7 @@ MODULE_PARM_DESC(workarounds, "Work around device bugs (default = 0"
+ 	", 36 byte inquiry = "    __stringify(SBP2_WORKAROUND_INQUIRY_36)
+ 	", skip mode page 8 = "   __stringify(SBP2_WORKAROUND_MODE_SENSE_8)
+ 	", fix capacity = "       __stringify(SBP2_WORKAROUND_FIX_CAPACITY)
++	", delay inquiry = "      __stringify(SBP2_WORKAROUND_DELAY_INQUIRY)
+ 	", override internal blacklist = " __stringify(SBP2_WORKAROUND_OVERRIDE)
+ 	", or a combination)");
+ 
+@@ -115,7 +122,6 @@ static const char sbp2_driver_name[] = "sbp2";
+ struct sbp2_logical_unit {
+ 	struct sbp2_target *tgt;
+ 	struct list_head link;
+-	struct scsi_device *sdev;
+ 	struct fw_address_handler address_handler;
+ 	struct list_head orb_list;
+ 
+@@ -132,6 +138,8 @@ struct sbp2_logical_unit {
+ 	int generation;
+ 	int retries;
+ 	struct delayed_work work;
++	bool has_sdev;
++	bool blocked;
+ };
+ 
+ /*
+@@ -141,16 +149,18 @@ struct sbp2_logical_unit {
+ struct sbp2_target {
+ 	struct kref kref;
+ 	struct fw_unit *unit;
++	const char *bus_id;
++	struct list_head lu_list;
+ 
+ 	u64 management_agent_address;
+ 	int directory_id;
+ 	int node_id;
+ 	int address_high;
+-
+-	unsigned workarounds;
+-	struct list_head lu_list;
+-
++	unsigned int workarounds;
+ 	unsigned int mgt_orb_timeout;
++
++	int dont_block;	/* counter for each logical unit */
++	int blocked;	/* ditto */
+ };
+ 
+ /*
+@@ -160,7 +170,7 @@ struct sbp2_target {
+  */
+ #define SBP2_MIN_LOGIN_ORB_TIMEOUT	5000U	/* Timeout in ms */
+ #define SBP2_MAX_LOGIN_ORB_TIMEOUT	40000U	/* Timeout in ms */
+-#define SBP2_ORB_TIMEOUT		2000	/* Timeout in ms */
++#define SBP2_ORB_TIMEOUT		2000U	/* Timeout in ms */
+ #define SBP2_ORB_NULL			0x80000000
+ #define SBP2_MAX_SG_ELEMENT_LENGTH	0xf000
+ 
+@@ -297,7 +307,7 @@ struct sbp2_command_orb {
+ static const struct {
+ 	u32 firmware_revision;
+ 	u32 model;
+-	unsigned workarounds;
++	unsigned int workarounds;
+ } sbp2_workarounds_table[] = {
+ 	/* DViCO Momobay CX-1 with TSB42AA9 bridge */ {
+ 		.firmware_revision	= 0x002800,
+@@ -305,6 +315,11 @@ static const struct {
+ 		.workarounds		= SBP2_WORKAROUND_INQUIRY_36 |
+ 					  SBP2_WORKAROUND_MODE_SENSE_8,
+ 	},
++	/* DViCO Momobay FX-3A with TSB42AA9A bridge */ {
++		.firmware_revision	= 0x002800,
++		.model			= 0x000000,
++		.workarounds		= SBP2_WORKAROUND_DELAY_INQUIRY,
++	},
+ 	/* Initio bridges, actually only needed for some older ones */ {
+ 		.firmware_revision	= 0x000200,
+ 		.model			= ~0,
+@@ -501,6 +516,9 @@ sbp2_send_management_orb(struct sbp2_logical_unit *lu, int node_id,
+ 	unsigned int timeout;
+ 	int retval = -ENOMEM;
+ 
++	if (function == SBP2_LOGOUT_REQUEST && fw_device_is_shutdown(device))
++		return 0;
++
+ 	orb = kzalloc(sizeof(*orb), GFP_ATOMIC);
+ 	if (orb == NULL)
+ 		return -ENOMEM;
+@@ -553,20 +571,20 @@ sbp2_send_management_orb(struct sbp2_logical_unit *lu, int node_id,
+ 
+ 	retval = -EIO;
+ 	if (sbp2_cancel_orbs(lu) == 0) {
+-		fw_error("orb reply timed out, rcode=0x%02x\n",
+-			 orb->base.rcode);
++		fw_error("%s: orb reply timed out, rcode=0x%02x\n",
++			 lu->tgt->bus_id, orb->base.rcode);
+ 		goto out;
+ 	}
+ 
+ 	if (orb->base.rcode != RCODE_COMPLETE) {
+-		fw_error("management write failed, rcode 0x%02x\n",
+-			 orb->base.rcode);
++		fw_error("%s: management write failed, rcode 0x%02x\n",
++			 lu->tgt->bus_id, orb->base.rcode);
+ 		goto out;
+ 	}
+ 
+ 	if (STATUS_GET_RESPONSE(orb->status) != 0 ||
+ 	    STATUS_GET_SBP_STATUS(orb->status) != 0) {
+-		fw_error("error status: %d:%d\n",
++		fw_error("%s: error status: %d:%d\n", lu->tgt->bus_id,
+ 			 STATUS_GET_RESPONSE(orb->status),
+ 			 STATUS_GET_SBP_STATUS(orb->status));
+ 		goto out;
+@@ -590,29 +608,158 @@ sbp2_send_management_orb(struct sbp2_logical_unit *lu, int node_id,
+ 
+ static void
+ complete_agent_reset_write(struct fw_card *card, int rcode,
+-			   void *payload, size_t length, void *data)
++			   void *payload, size_t length, void *done)
+ {
+-	struct fw_transaction *t = data;
++	complete(done);
++}
+ 
+-	kfree(t);
++static void sbp2_agent_reset(struct sbp2_logical_unit *lu)
++{
++	struct fw_device *device = fw_device(lu->tgt->unit->device.parent);
++	DECLARE_COMPLETION_ONSTACK(done);
++	struct fw_transaction t;
++	static u32 z;
++
++	fw_send_request(device->card, &t, TCODE_WRITE_QUADLET_REQUEST,
++			lu->tgt->node_id, lu->generation, device->max_speed,
++			lu->command_block_agent_address + SBP2_AGENT_RESET,
++			&z, sizeof(z), complete_agent_reset_write, &done);
++	wait_for_completion(&done);
++}
++
++static void
++complete_agent_reset_write_no_wait(struct fw_card *card, int rcode,
++				   void *payload, size_t length, void *data)
++{
++	kfree(data);
+ }
+ 
+-static int sbp2_agent_reset(struct sbp2_logical_unit *lu)
++static void sbp2_agent_reset_no_wait(struct sbp2_logical_unit *lu)
+ {
+ 	struct fw_device *device = fw_device(lu->tgt->unit->device.parent);
+ 	struct fw_transaction *t;
+-	static u32 zero;
++	static u32 z;
+ 
+-	t = kzalloc(sizeof(*t), GFP_ATOMIC);
++	t = kmalloc(sizeof(*t), GFP_ATOMIC);
+ 	if (t == NULL)
+-		return -ENOMEM;
++		return;
+ 
+ 	fw_send_request(device->card, t, TCODE_WRITE_QUADLET_REQUEST,
+ 			lu->tgt->node_id, lu->generation, device->max_speed,
+ 			lu->command_block_agent_address + SBP2_AGENT_RESET,
+-			&zero, sizeof(zero), complete_agent_reset_write, t);
++			&z, sizeof(z), complete_agent_reset_write_no_wait, t);
++}
+ 
+-	return 0;
++static void sbp2_set_generation(struct sbp2_logical_unit *lu, int generation)
++{
++	struct fw_card *card = fw_device(lu->tgt->unit->device.parent)->card;
++	unsigned long flags;
++
++	/* serialize with comparisons of lu->generation and card->generation */
++	spin_lock_irqsave(&card->lock, flags);
++	lu->generation = generation;
++	spin_unlock_irqrestore(&card->lock, flags);
++}
++
++static inline void sbp2_allow_block(struct sbp2_logical_unit *lu)
++{
++	/*
++	 * We may access dont_block without taking card->lock here:
++	 * All callers of sbp2_allow_block() and all callers of sbp2_unblock()
++	 * are currently serialized against each other.
++	 * And a wrong result in sbp2_conditionally_block()'s access of
++	 * dont_block is rather harmless; it simply misses its first chance.
++	 */
++	--lu->tgt->dont_block;
++}
++
++/*
++ * Blocks lu->tgt if all of the following conditions are met:
++ *   - Login, INQUIRY, and high-level SCSI setup of all of the target's
++ *     logical units have been finished (indicated by dont_block == 0).
++ *   - lu->generation is stale.
++ *
++ * Note, scsi_block_requests() must be called while holding card->lock,
++ * otherwise it might foil sbp2_[conditionally_]unblock()'s attempt to
++ * unblock the target.
++ */
++static void sbp2_conditionally_block(struct sbp2_logical_unit *lu)
++{
++	struct sbp2_target *tgt = lu->tgt;
++	struct fw_card *card = fw_device(tgt->unit->device.parent)->card;
++	struct Scsi_Host *shost =
++		container_of((void *)tgt, struct Scsi_Host, hostdata[0]);
++	unsigned long flags;
++
++	spin_lock_irqsave(&card->lock, flags);
++	if (!tgt->dont_block && !lu->blocked &&
++	    lu->generation != card->generation) {
++		lu->blocked = true;
++		if (++tgt->blocked == 1) {
++			scsi_block_requests(shost);
++			fw_notify("blocked %s\n", lu->tgt->bus_id);
++		}
++	}
++	spin_unlock_irqrestore(&card->lock, flags);
++}
++
++/*
++ * Unblocks lu->tgt as soon as all its logical units can be unblocked.
++ * Note, it is harmless to run scsi_unblock_requests() outside the
++ * card->lock protected section.  On the other hand, running it inside
++ * the section might clash with shost->host_lock.
++ */
++static void sbp2_conditionally_unblock(struct sbp2_logical_unit *lu)
++{
++	struct sbp2_target *tgt = lu->tgt;
++	struct fw_card *card = fw_device(tgt->unit->device.parent)->card;
++	struct Scsi_Host *shost =
++		container_of((void *)tgt, struct Scsi_Host, hostdata[0]);
++	unsigned long flags;
++	bool unblock = false;
++
++	spin_lock_irqsave(&card->lock, flags);
++	if (lu->blocked && lu->generation == card->generation) {
++		lu->blocked = false;
++		unblock = --tgt->blocked == 0;
++	}
++	spin_unlock_irqrestore(&card->lock, flags);
++
++	if (unblock) {
++		scsi_unblock_requests(shost);
++		fw_notify("unblocked %s\n", lu->tgt->bus_id);
++	}
++}
++
++/*
++ * Prevents future blocking of tgt and unblocks it.
++ * Note, it is harmless to run scsi_unblock_requests() outside the
++ * card->lock protected section.  On the other hand, running it inside
++ * the section might clash with shost->host_lock.
++ */
++static void sbp2_unblock(struct sbp2_target *tgt)
++{
++	struct fw_card *card = fw_device(tgt->unit->device.parent)->card;
++	struct Scsi_Host *shost =
++		container_of((void *)tgt, struct Scsi_Host, hostdata[0]);
++	unsigned long flags;
++
++	spin_lock_irqsave(&card->lock, flags);
++	++tgt->dont_block;
++	spin_unlock_irqrestore(&card->lock, flags);
++
++	scsi_unblock_requests(shost);
++}
++
++static int sbp2_lun2int(u16 lun)
++{
++	struct scsi_lun eight_bytes_lun;
++
++	memset(&eight_bytes_lun, 0, sizeof(eight_bytes_lun));
++	eight_bytes_lun.scsi_lun[0] = (lun >> 8) & 0xff;
++	eight_bytes_lun.scsi_lun[1] = lun & 0xff;
++
++	return scsilun_to_int(&eight_bytes_lun);
+ }
+ 
+ static void sbp2_release_target(struct kref *kref)
+@@ -621,26 +768,31 @@ static void sbp2_release_target(struct kref *kref)
+ 	struct sbp2_logical_unit *lu, *next;
+ 	struct Scsi_Host *shost =
+ 		container_of((void *)tgt, struct Scsi_Host, hostdata[0]);
++	struct scsi_device *sdev;
+ 	struct fw_device *device = fw_device(tgt->unit->device.parent);
+ 
+-	list_for_each_entry_safe(lu, next, &tgt->lu_list, link) {
+-		if (lu->sdev)
+-			scsi_remove_device(lu->sdev);
++	/* prevent deadlocks */
++	sbp2_unblock(tgt);
+ 
+-		if (!fw_device_is_shutdown(device))
+-			sbp2_send_management_orb(lu, tgt->node_id,
+-					lu->generation, SBP2_LOGOUT_REQUEST,
+-					lu->login_id, NULL);
++	list_for_each_entry_safe(lu, next, &tgt->lu_list, link) {
++		sdev = scsi_device_lookup(shost, 0, 0, sbp2_lun2int(lu->lun));
++		if (sdev) {
++			scsi_remove_device(sdev);
++			scsi_device_put(sdev);
++		}
++		sbp2_send_management_orb(lu, tgt->node_id, lu->generation,
++				SBP2_LOGOUT_REQUEST, lu->login_id, NULL);
+ 
+ 		fw_core_remove_address_handler(&lu->address_handler);
+ 		list_del(&lu->link);
+ 		kfree(lu);
+ 	}
+ 	scsi_remove_host(shost);
+-	fw_notify("released %s\n", tgt->unit->device.bus_id);
++	fw_notify("released %s\n", tgt->bus_id);
+ 
+ 	put_device(&tgt->unit->device);
+ 	scsi_host_put(shost);
++	fw_device_put(device);
+ }
+ 
+ static struct workqueue_struct *sbp2_wq;
+@@ -666,33 +818,42 @@ static void sbp2_login(struct work_struct *work)
+ {
+ 	struct sbp2_logical_unit *lu =
+ 		container_of(work, struct sbp2_logical_unit, work.work);
+-	struct Scsi_Host *shost =
+-		container_of((void *)lu->tgt, struct Scsi_Host, hostdata[0]);
++	struct sbp2_target *tgt = lu->tgt;
++	struct fw_device *device = fw_device(tgt->unit->device.parent);
++	struct Scsi_Host *shost;
+ 	struct scsi_device *sdev;
+-	struct scsi_lun eight_bytes_lun;
+-	struct fw_unit *unit = lu->tgt->unit;
+-	struct fw_device *device = fw_device(unit->device.parent);
+ 	struct sbp2_login_response response;
+ 	int generation, node_id, local_node_id;
+ 
++	if (fw_device_is_shutdown(device))
++		goto out;
++
+ 	generation    = device->generation;
+ 	smp_rmb();    /* node_id must not be older than generation */
+ 	node_id       = device->node_id;
+ 	local_node_id = device->card->node_id;
+ 
++	/* If this is a re-login attempt, log out, or we might be rejected. */
++	if (lu->has_sdev)
++		sbp2_send_management_orb(lu, device->node_id, generation,
++				SBP2_LOGOUT_REQUEST, lu->login_id, NULL);
++
+ 	if (sbp2_send_management_orb(lu, node_id, generation,
+ 				SBP2_LOGIN_REQUEST, lu->lun, &response) < 0) {
+-		if (lu->retries++ < 5)
++		if (lu->retries++ < 5) {
+ 			sbp2_queue_work(lu, DIV_ROUND_UP(HZ, 5));
+-		else
+-			fw_error("failed to login to %s LUN %04x\n",
+-				 unit->device.bus_id, lu->lun);
++		} else {
++			fw_error("%s: failed to login to LUN %04x\n",
++				 tgt->bus_id, lu->lun);
++			/* Let any waiting I/O fail from now on. */
++			sbp2_unblock(lu->tgt);
++		}
+ 		goto out;
+ 	}
+ 
+-	lu->generation        = generation;
+-	lu->tgt->node_id      = node_id;
+-	lu->tgt->address_high = local_node_id << 16;
++	tgt->node_id	  = node_id;
++	tgt->address_high = local_node_id << 16;
++	sbp2_set_generation(lu, generation);
+ 
+ 	/* Get command block agent offset and login id. */
+ 	lu->command_block_agent_address =
+@@ -700,8 +861,8 @@ static void sbp2_login(struct work_struct *work)
+ 		response.command_block_agent.low;
+ 	lu->login_id = LOGIN_RESPONSE_GET_LOGIN_ID(response);
+ 
+-	fw_notify("logged in to %s LUN %04x (%d retries)\n",
+-		  unit->device.bus_id, lu->lun, lu->retries);
++	fw_notify("%s: logged in to LUN %04x (%d retries)\n",
++		  tgt->bus_id, lu->lun, lu->retries);
+ 
+ #if 0
+ 	/* FIXME: The linux1394 sbp2 does this last step. */
+@@ -711,26 +872,58 @@ static void sbp2_login(struct work_struct *work)
+ 	PREPARE_DELAYED_WORK(&lu->work, sbp2_reconnect);
+ 	sbp2_agent_reset(lu);
+ 
+-	memset(&eight_bytes_lun, 0, sizeof(eight_bytes_lun));
+-	eight_bytes_lun.scsi_lun[0] = (lu->lun >> 8) & 0xff;
+-	eight_bytes_lun.scsi_lun[1] = lu->lun & 0xff;
++	/* This was a re-login. */
++	if (lu->has_sdev) {
++		sbp2_cancel_orbs(lu);
++		sbp2_conditionally_unblock(lu);
++		goto out;
++	}
+ 
+-	sdev = __scsi_add_device(shost, 0, 0,
+-				 scsilun_to_int(&eight_bytes_lun), lu);
+-	if (IS_ERR(sdev)) {
+-		sbp2_send_management_orb(lu, node_id, generation,
+-				SBP2_LOGOUT_REQUEST, lu->login_id, NULL);
+-		/*
+-		 * Set this back to sbp2_login so we fall back and
+-		 * retry login on bus reset.
+-		 */
+-		PREPARE_DELAYED_WORK(&lu->work, sbp2_login);
+-	} else {
+-		lu->sdev = sdev;
++	if (lu->tgt->workarounds & SBP2_WORKAROUND_DELAY_INQUIRY)
++		ssleep(SBP2_INQUIRY_DELAY);
++
++	shost = container_of((void *)tgt, struct Scsi_Host, hostdata[0]);
++	sdev = __scsi_add_device(shost, 0, 0, sbp2_lun2int(lu->lun), lu);
++	/*
++	 * FIXME:  We are unable to perform reconnects while in sbp2_login().
++	 * Therefore __scsi_add_device() will get into trouble if a bus reset
++	 * happens in parallel.  It will either fail or leave us with an
++	 * unusable sdev.  As a workaround we check for this and retry the
++	 * whole login and SCSI probing.
++	 */
++
++	/* Reported error during __scsi_add_device() */
++	if (IS_ERR(sdev))
++		goto out_logout_login;
++
++	/* Unreported error during __scsi_add_device() */
++	smp_rmb(); /* get current card generation */
++	if (generation != device->card->generation) {
++		scsi_remove_device(sdev);
+ 		scsi_device_put(sdev);
++		goto out_logout_login;
+ 	}
++
++	/* No error during __scsi_add_device() */
++	lu->has_sdev = true;
++	scsi_device_put(sdev);
++	sbp2_allow_block(lu);
++	goto out;
++
++ out_logout_login:
++	smp_rmb(); /* generation may have changed */
++	generation = device->generation;
++	smp_rmb(); /* node_id must not be older than generation */
++
++	sbp2_send_management_orb(lu, device->node_id, generation,
++				 SBP2_LOGOUT_REQUEST, lu->login_id, NULL);
++	/*
++	 * If a bus reset happened, sbp2_update will have requeued
++	 * lu->work already.  Reset the work from reconnect to login.
++	 */
++	PREPARE_DELAYED_WORK(&lu->work, sbp2_login);
+  out:
+-	sbp2_target_put(lu->tgt);
++	sbp2_target_put(tgt);
+ }
+ 
+ static int sbp2_add_logical_unit(struct sbp2_target *tgt, int lun_entry)
+@@ -751,10 +944,12 @@ static int sbp2_add_logical_unit(struct sbp2_target *tgt, int lun_entry)
+ 		return -ENOMEM;
+ 	}
+ 
+-	lu->tgt  = tgt;
+-	lu->sdev = NULL;
+-	lu->lun  = lun_entry & 0xffff;
+-	lu->retries = 0;
++	lu->tgt      = tgt;
++	lu->lun      = lun_entry & 0xffff;
++	lu->retries  = 0;
++	lu->has_sdev = false;
++	lu->blocked  = false;
++	++tgt->dont_block;
+ 	INIT_LIST_HEAD(&lu->orb_list);
+ 	INIT_DELAYED_WORK(&lu->work, sbp2_login);
+ 
+@@ -813,7 +1008,7 @@ static int sbp2_scan_unit_dir(struct sbp2_target *tgt, u32 *directory,
+ 			if (timeout > tgt->mgt_orb_timeout)
+ 				fw_notify("%s: config rom contains %ds "
+ 					  "management ORB timeout, limiting "
+-					  "to %ds\n", tgt->unit->device.bus_id,
++					  "to %ds\n", tgt->bus_id,
+ 					  timeout / 1000,
+ 					  tgt->mgt_orb_timeout / 1000);
+ 			break;
+@@ -836,12 +1031,12 @@ static void sbp2_init_workarounds(struct sbp2_target *tgt, u32 model,
+ 				  u32 firmware_revision)
+ {
+ 	int i;
+-	unsigned w = sbp2_param_workarounds;
++	unsigned int w = sbp2_param_workarounds;
+ 
+ 	if (w)
+ 		fw_notify("Please notify linux1394-devel at lists.sourceforge.net "
+ 			  "if you need the workarounds parameter for %s\n",
+-			  tgt->unit->device.bus_id);
++			  tgt->bus_id);
+ 
+ 	if (w & SBP2_WORKAROUND_OVERRIDE)
+ 		goto out;
+@@ -863,8 +1058,7 @@ static void sbp2_init_workarounds(struct sbp2_target *tgt, u32 model,
+ 	if (w)
+ 		fw_notify("Workarounds for %s: 0x%x "
+ 			  "(firmware_revision 0x%06x, model_id 0x%06x)\n",
+-			  tgt->unit->device.bus_id,
+-			  w, firmware_revision, model);
++			  tgt->bus_id, w, firmware_revision, model);
+ 	tgt->workarounds = w;
+ }
+ 
+@@ -888,6 +1082,7 @@ static int sbp2_probe(struct device *dev)
+ 	tgt->unit = unit;
+ 	kref_init(&tgt->kref);
+ 	INIT_LIST_HEAD(&tgt->lu_list);
++	tgt->bus_id = unit->device.bus_id;
+ 
+ 	if (fw_device_enable_phys_dma(device) < 0)
+ 		goto fail_shost_put;
+@@ -895,6 +1090,8 @@ static int sbp2_probe(struct device *dev)
+ 	if (scsi_add_host(shost, &unit->device) < 0)
+ 		goto fail_shost_put;
+ 
++	fw_device_get(device);
++
+ 	/* Initialize to values that won't match anything in our table. */
+ 	firmware_revision = 0xff000000;
+ 	model = 0xff000000;
+@@ -938,10 +1135,13 @@ static void sbp2_reconnect(struct work_struct *work)
+ {
+ 	struct sbp2_logical_unit *lu =
+ 		container_of(work, struct sbp2_logical_unit, work.work);
+-	struct fw_unit *unit = lu->tgt->unit;
+-	struct fw_device *device = fw_device(unit->device.parent);
++	struct sbp2_target *tgt = lu->tgt;
++	struct fw_device *device = fw_device(tgt->unit->device.parent);
+ 	int generation, node_id, local_node_id;
+ 
++	if (fw_device_is_shutdown(device))
++		goto out;
++
+ 	generation    = device->generation;
+ 	smp_rmb();    /* node_id must not be older than generation */
+ 	node_id       = device->node_id;
+@@ -950,10 +1150,17 @@ static void sbp2_reconnect(struct work_struct *work)
+ 	if (sbp2_send_management_orb(lu, node_id, generation,
+ 				     SBP2_RECONNECT_REQUEST,
+ 				     lu->login_id, NULL) < 0) {
+-		if (lu->retries++ >= 5) {
+-			fw_error("failed to reconnect to %s\n",
+-				 unit->device.bus_id);
+-			/* Fall back and try to log in again. */
++		/*
++		 * If reconnect was impossible even though we are in the
++		 * current generation, fall back and try to log in again.
++		 *
++		 * We could check for "Function rejected" status, but
++		 * looking at the bus generation is simpler and more general.
++		 */
++		smp_rmb(); /* get current card generation */
++		if (generation == device->card->generation ||
++		    lu->retries++ >= 5) {
++			fw_error("%s: failed to reconnect\n", tgt->bus_id);
+ 			lu->retries = 0;
+ 			PREPARE_DELAYED_WORK(&lu->work, sbp2_login);
+ 		}
+@@ -961,17 +1168,18 @@ static void sbp2_reconnect(struct work_struct *work)
+ 		goto out;
+ 	}
+ 
+-	lu->generation        = generation;
+-	lu->tgt->node_id      = node_id;
+-	lu->tgt->address_high = local_node_id << 16;
++	tgt->node_id      = node_id;
++	tgt->address_high = local_node_id << 16;
++	sbp2_set_generation(lu, generation);
+ 
+-	fw_notify("reconnected to %s LUN %04x (%d retries)\n",
+-		  unit->device.bus_id, lu->lun, lu->retries);
++	fw_notify("%s: reconnected to LUN %04x (%d retries)\n",
++		  tgt->bus_id, lu->lun, lu->retries);
+ 
+ 	sbp2_agent_reset(lu);
+ 	sbp2_cancel_orbs(lu);
++	sbp2_conditionally_unblock(lu);
+  out:
+-	sbp2_target_put(lu->tgt);
++	sbp2_target_put(tgt);
+ }
+ 
+ static void sbp2_update(struct fw_unit *unit)
+@@ -986,6 +1194,7 @@ static void sbp2_update(struct fw_unit *unit)
+ 	 * Iteration over tgt->lu_list is therefore safe here.
+ 	 */
+ 	list_for_each_entry(lu, &tgt->lu_list, link) {
++		sbp2_conditionally_block(lu);
+ 		lu->retries = 0;
+ 		sbp2_queue_work(lu, 0);
+ 	}
+@@ -1063,7 +1272,7 @@ complete_command_orb(struct sbp2_orb *base_orb, struct sbp2_status *status)
+ 
+ 	if (status != NULL) {
+ 		if (STATUS_GET_DEAD(*status))
+-			sbp2_agent_reset(orb->lu);
++			sbp2_agent_reset_no_wait(orb->lu);
+ 
+ 		switch (STATUS_GET_RESPONSE(*status)) {
+ 		case SBP2_STATUS_REQUEST_COMPLETE:
+@@ -1089,6 +1298,7 @@ complete_command_orb(struct sbp2_orb *base_orb, struct sbp2_status *status)
+ 		 * or when sending the write (less likely).
+ 		 */
+ 		result = DID_BUS_BUSY << 16;
++		sbp2_conditionally_block(orb->lu);
+ 	}
+ 
+ 	dma_unmap_single(device->card->device, orb->base.request_bus,
+@@ -1197,7 +1407,7 @@ static int sbp2_scsi_queuecommand(struct scsi_cmnd *cmd, scsi_done_fn_t done)
+ 	struct sbp2_logical_unit *lu = cmd->device->hostdata;
+ 	struct fw_device *device = fw_device(lu->tgt->unit->device.parent);
+ 	struct sbp2_command_orb *orb;
+-	unsigned max_payload;
++	unsigned int max_payload;
+ 	int retval = SCSI_MLQUEUE_HOST_BUSY;
+ 
+ 	/*
+@@ -1275,6 +1485,10 @@ static int sbp2_scsi_slave_alloc(struct scsi_device *sdev)
+ {
+ 	struct sbp2_logical_unit *lu = sdev->hostdata;
+ 
++	/* (Re-)Adding logical units via the SCSI stack is not supported. */
++	if (!lu)
++		return -ENOSYS;
++
+ 	sdev->allow_restart = 1;
+ 
+ 	/*
+@@ -1319,7 +1533,7 @@ static int sbp2_scsi_abort(struct scsi_cmnd *cmd)
+ {
+ 	struct sbp2_logical_unit *lu = cmd->device->hostdata;
+ 
+-	fw_notify("sbp2_scsi_abort\n");
++	fw_notify("%s: sbp2_scsi_abort\n", lu->tgt->bus_id);
+ 	sbp2_agent_reset(lu);
+ 	sbp2_cancel_orbs(lu);
+ 
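
The bulk of the fw-sbp2.c diff introduces per-target request blocking: a logical unit whose generation went stale blocks, the SCSI host is blocked when the first unit does, and it is unblocked again only once the last unit has caught up with the current bus generation. The sketch below (user space, hypothetical names, the dont_block setup counter left out for brevity) mirrors the counting in sbp2_conditionally_block()/_unblock(), including the detail that the unblock call is made outside the lock.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct target {
            pthread_mutex_t lock;
            int blocked;                /* number of currently blocked logical units */
            int card_generation;
    };

    struct lunit {
            struct target *tgt;
            bool blocked;
            int generation;
    };

    static void conditionally_block(struct lunit *lu)
    {
            struct target *tgt = lu->tgt;

            pthread_mutex_lock(&tgt->lock);
            if (!lu->blocked && lu->generation != tgt->card_generation) {
                    lu->blocked = true;
                    if (++tgt->blocked == 1)
                            puts("scsi_block_requests()");   /* first blocked LU */
            }
            pthread_mutex_unlock(&tgt->lock);
    }

    static void conditionally_unblock(struct lunit *lu)
    {
            struct target *tgt = lu->tgt;
            bool unblock = false;

            pthread_mutex_lock(&tgt->lock);
            if (lu->blocked && lu->generation == tgt->card_generation) {
                    lu->blocked = false;
                    unblock = (--tgt->blocked == 0);         /* last blocked LU */
            }
            pthread_mutex_unlock(&tgt->lock);
            if (unblock)
                    puts("scsi_unblock_requests()");  /* outside the lock, as in the patch */
    }

    int main(void)
    {
            struct target tgt = { PTHREAD_MUTEX_INITIALIZER, 0, 1 };
            struct lunit lu0 = { &tgt, false, 1 }, lu1 = { &tgt, false, 1 };

            tgt.card_generation = 2;    /* bus reset: both LUs are now stale */
            conditionally_block(&lu0);
            conditionally_block(&lu1);

            lu0.generation = 2;         /* lu0 reconnected */
            conditionally_unblock(&lu0);
            lu1.generation = 2;         /* lu1 reconnected, host unblocks only now */
            conditionally_unblock(&lu1);
            return 0;
    }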
+diff --git a/drivers/firewire/fw-topology.c b/drivers/firewire/fw-topology.c
+index 172c186..e47bb04 100644
+--- a/drivers/firewire/fw-topology.c
++++ b/drivers/firewire/fw-topology.c
+@@ -383,6 +383,7 @@ void fw_destroy_nodes(struct fw_card *card)
+ 	card->color++;
+ 	if (card->local_node != NULL)
+ 		for_each_fw_node(card, card->local_node, report_lost_node);
++	card->local_node = NULL;
+ 	spin_unlock_irqrestore(&card->lock, flags);
+ }
+ 
+diff --git a/drivers/firewire/fw-transaction.h b/drivers/firewire/fw-transaction.h
+index fa7967b..09cb728 100644
+--- a/drivers/firewire/fw-transaction.h
++++ b/drivers/firewire/fw-transaction.h
+@@ -26,6 +26,7 @@
+ #include <linux/fs.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/firewire-constants.h>
++#include <asm/atomic.h>
+ 
+ #define TCODE_IS_READ_REQUEST(tcode)	(((tcode) & ~1) == 4)
+ #define TCODE_IS_BLOCK_PACKET(tcode)	(((tcode) &  1) != 0)
+@@ -219,6 +220,7 @@ extern struct bus_type fw_bus_type;
+ struct fw_card {
+ 	const struct fw_card_driver *driver;
+ 	struct device *device;
++	atomic_t device_count;
+ 	struct kref kref;
+ 
+ 	int node_id;
+diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
+index 310e497..c8d0e87 100644
+--- a/drivers/ide/ide-cd.c
++++ b/drivers/ide/ide-cd.c
+@@ -670,8 +670,8 @@ static void cdrom_buffer_sectors (ide_drive_t *drive, unsigned long sector,
+  * and attempt to recover if there are problems.  Returns  0 if everything's
+  * ok; nonzero if the request has been terminated.
+  */
+-static
+-int ide_cd_check_ireason(ide_drive_t *drive, int len, int ireason, int rw)
++static int ide_cd_check_ireason(ide_drive_t *drive, struct request *rq,
++				int len, int ireason, int rw)
+ {
+ 	/*
+ 	 * ireason == 0: the drive wants to receive data from us
+@@ -701,6 +701,9 @@ int ide_cd_check_ireason(ide_drive_t *drive, int len, int ireason, int rw)
+ 				drive->name, __FUNCTION__, ireason);
+ 	}
+ 
++	if (rq->cmd_type == REQ_TYPE_ATA_PC)
++		rq->cmd_flags |= REQ_FAILED;
++
+ 	cdrom_end_request(drive, 0);
+ 	return -1;
+ }
+@@ -1071,11 +1074,11 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
+ 	/*
+ 	 * check which way to transfer data
+ 	 */
+-	if (blk_fs_request(rq) || blk_pc_request(rq)) {
+-		if (ide_cd_check_ireason(drive, len, ireason, write))
+-			return ide_stopped;
++	if (ide_cd_check_ireason(drive, rq, len, ireason, write))
++		return ide_stopped;
+ 
+-		if (blk_fs_request(rq) && write == 0) {
++	if (blk_fs_request(rq)) {
++		if (write == 0) {
+ 			int nskip;
+ 
+ 			if (ide_cd_check_transfer_size(drive, len)) {
+@@ -1101,16 +1104,9 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
+ 	if (ireason == 0) {
+ 		write = 1;
+ 		xferfunc = HWIF(drive)->atapi_output_bytes;
+-	} else if (ireason == 2 || (ireason == 1 &&
+-		   (blk_fs_request(rq) || blk_pc_request(rq)))) {
++	} else {
+ 		write = 0;
+ 		xferfunc = HWIF(drive)->atapi_input_bytes;
+-	} else {
+-		printk(KERN_ERR "%s: %s: The drive "
+-				"appears confused (ireason = 0x%02x). "
+-				"Trying to recover by ending request.\n",
+-				drive->name, __FUNCTION__, ireason);
+-		goto end_request;
+ 	}
+ 
+ 	/*
+@@ -1182,11 +1178,10 @@ static ide_startstop_t cdrom_newpc_intr(ide_drive_t *drive)
+ 			else
+ 				rq->data += blen;
+ 		}
++		if (!write && blk_sense_request(rq))
++			rq->sense_len += blen;
+ 	}
+ 
+-	if (write && blk_sense_request(rq))
+-		rq->sense_len += thislen;
+-
+ 	/*
+ 	 * pad, if necessary
+ 	 */
+@@ -1931,6 +1926,7 @@ static const struct cd_list_entry ide_cd_quirks_list[] = {
+ 	{ "MATSHITADVD-ROM SR-8186", NULL,   IDE_CD_FLAG_PLAY_AUDIO_OK	    },
+ 	{ "MATSHITADVD-ROM SR-8176", NULL,   IDE_CD_FLAG_PLAY_AUDIO_OK	    },
+ 	{ "MATSHITADVD-ROM SR-8174", NULL,   IDE_CD_FLAG_PLAY_AUDIO_OK	    },
++	{ "Optiarc DVD RW AD-5200A", NULL,   IDE_CD_FLAG_PLAY_AUDIO_OK      },
+ 	{ NULL, NULL, 0 }
+ };
+ 
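
In the ide-cd.c hunks, ide_cd_check_ireason() now sees every request type and the old "drive appears confused" fallback in the data-transfer path is gone; the transfer direction is decided purely by the two ATAPI interrupt-reason bits. A toy decoder of those bits follows; the strings only paraphrase the checks in the driver and are not taken from it.

    #include <stdio.h>

    /* bit 0: command (1) vs data (0); bit 1: from the drive (1) vs to the drive (0) */
    static const char *ireason_str(int ireason, int write)
    {
            if (ireason == 0)
                    return write ? "drive expects data from us (ok for a write)"
                                 : "drive expects data, but the request is a read";
            if (ireason == 2)
                    return write ? "drive wants to send data, but the request is a write"
                                 : "drive sends data to us (ok for a read)";
            return "drive appears confused";
    }

    int main(void)
    {
            int ireason, write;

            for (write = 0; write <= 1; write++)
                    for (ireason = 0; ireason <= 3; ireason++)
                            printf("ireason=%d write=%d: %s\n",
                                   ireason, write, ireason_str(ireason, write));
            return 0;
    }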
+diff --git a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c
+index 8f5bed4..39501d1 100644
+--- a/drivers/ide/ide-disk.c
++++ b/drivers/ide/ide-disk.c
+@@ -867,7 +867,7 @@ static void idedisk_setup (ide_drive_t *drive)
+ 
+ 	/* Only print cache size when it was specified */
+ 	if (id->buf_size)
+-		printk (" w/%dKiB Cache", id->buf_size/2);
++		printk(KERN_CONT " w/%dKiB Cache", id->buf_size / 2);
+ 
+ 	printk(KERN_CONT ", CHS=%d/%d/%d\n",
+ 			 drive->bios_cyl, drive->bios_head, drive->bios_sect);
+@@ -949,7 +949,8 @@ static void ide_device_shutdown(ide_drive_t *drive)
+ 		return;
+ 	}
+ 
+-	printk("Shutdown: %s\n", drive->name);
++	printk(KERN_INFO "Shutdown: %s\n", drive->name);
++
+ 	drive->gendev.bus->suspend(&drive->gendev, PMSG_SUSPEND);
+ }
+ 
+diff --git a/drivers/ide/ide-dma.c b/drivers/ide/ide-dma.c
+index d0e7b53..2de99e4 100644
+--- a/drivers/ide/ide-dma.c
++++ b/drivers/ide/ide-dma.c
+@@ -1,9 +1,13 @@
+ /*
++ *  IDE DMA support (including IDE PCI BM-DMA).
++ *
+  *  Copyright (C) 1995-1998   Mark Lord
+  *  Copyright (C) 1999-2000   Andre Hedrick <andre at linux-ide.org>
+  *  Copyright (C) 2004, 2007  Bartlomiej Zolnierkiewicz
+  *
+  *  May be copied or modified under the terms of the GNU General Public License
++ *
++ *  DMA is supported for all IDE devices (disk drives, cdroms, tapes, floppies).
+  */
+ 
+ /*
+@@ -11,49 +15,6 @@
+  */
+ 
+ /*
+- * This module provides support for the bus-master IDE DMA functions
+- * of various PCI chipsets, including the Intel PIIX (i82371FB for
+- * the 430 FX chipset), the PIIX3 (i82371SB for the 430 HX/VX and 
+- * 440 chipsets), and the PIIX4 (i82371AB for the 430 TX chipset)
+- * ("PIIX" stands for "PCI ISA IDE Xcellerator").
+- *
+- * Pretty much the same code works for other IDE PCI bus-mastering chipsets.
+- *
+- * DMA is supported for all IDE devices (disk drives, cdroms, tapes, floppies).
+- *
+- * By default, DMA support is prepared for use, but is currently enabled only
+- * for drives which already have DMA enabled (UltraDMA or mode 2 multi/single),
+- * or which are recognized as "good" (see table below).  Drives with only mode0
+- * or mode1 (multi/single) DMA should also work with this chipset/driver
+- * (eg. MC2112A) but are not enabled by default.
+- *
+- * Use "hdparm -i" to view modes supported by a given drive.
+- *
+- * The hdparm-3.5 (or later) utility can be used for manually enabling/disabling
+- * DMA support, but must be (re-)compiled against this kernel version or later.
+- *
+- * To enable DMA, use "hdparm -d1 /dev/hd?" on a per-drive basis after booting.
+- * If problems arise, ide.c will disable DMA operation after a few retries.
+- * This error recovery mechanism works and has been extremely well exercised.
+- *
+- * IDE drives, depending on their vintage, may support several different modes
+- * of DMA operation.  The boot-time modes are indicated with a "*" in
+- * the "hdparm -i" listing, and can be changed with *knowledgeable* use of
+- * the "hdparm -X" feature.  There is seldom a need to do this, as drives
+- * normally power-up with their "best" PIO/DMA modes enabled.
+- *
+- * Testing has been done with a rather extensive number of drives,
+- * with Quantum & Western Digital models generally outperforming the pack,
+- * and Fujitsu & Conner (and some Seagate which are really Conner) drives
+- * showing more lackluster throughput.
+- *
+- * Keep an eye on /var/adm/messages for "DMA disabled" messages.
+- *
+- * Some people have reported trouble with Intel Zappa motherboards.
+- * This can be fixed by upgrading the AMI BIOS to version 1.00.04.BS0,
+- * available from ftp://ftp.intel.com/pub/bios/10004bs0.exe
+- * (thanks to Glen Morrell <glen at spin.Stanford.edu> for researching this).
+- *
+  * Thanks to "Christopher J. Reimer" <reimer at doe.carleton.ca> for
+  * fixing the problem with the BIOS on some Acer motherboards.
+  *
+@@ -65,11 +26,6 @@
+  *
+  * Most importantly, thanks to Robert Bringman <rob at mars.trion.com>
+  * for supplying a Promise UDMA board & WD UDMA drive for this work!
+- *
+- * And, yes, Intel Zappa boards really *do* use both PIIX IDE ports.
+- *
+- * ATA-66/100 and recovery functions, I forgot the rest......
+- *
+  */
+ 
+ #include <linux/module.h>
+diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
+index 4a2cb28..194ecb0 100644
+--- a/drivers/ide/ide-probe.c
++++ b/drivers/ide/ide-probe.c
+@@ -756,7 +756,8 @@ static int ide_probe_port(ide_hwif_t *hwif)
+ 
+ 	BUG_ON(hwif->present);
+ 
+-	if (hwif->noprobe)
++	if (hwif->noprobe ||
++	    (hwif->drives[0].noprobe && hwif->drives[1].noprobe))
+ 		return -EACCES;
+ 
+ 	/*
+diff --git a/drivers/ide/ide-tape.c b/drivers/ide/ide-tape.c
+index 0598ecf..43e0e05 100644
+--- a/drivers/ide/ide-tape.c
++++ b/drivers/ide/ide-tape.c
+@@ -3765,6 +3765,11 @@ static int ide_tape_probe(ide_drive_t *drive)
+ 	g->fops = &idetape_block_ops;
+ 	ide_register_region(g);
+ 
++	printk(KERN_WARNING "It is possible that this driver does not have any"
++		" users anymore and, as a result, it will be REMOVED soon."
++		" Please notify Bart <bzolnier at gmail.com> or Boris"
++		" <petkovbb at gmail.com> in case you still need it.\n");
++
+ 	return 0;
+ 
+ out_free_tape:
+diff --git a/drivers/ide/ide.c b/drivers/ide/ide.c
+index 477833f..fa16bc3 100644
+--- a/drivers/ide/ide.c
++++ b/drivers/ide/ide.c
+@@ -590,11 +590,6 @@ void ide_unregister(unsigned int index, int init_default, int restore)
+ 		hwif->extra_ports = 0;
+ 	}
+ 
+-	/*
+-	 * Note that we only release the standard ports,
+-	 * and do not even try to handle any extra ports
+-	 * allocated for weird IDE interface chipsets.
+-	 */
+ 	ide_hwif_release_regions(hwif);
+ 
+ 	/* copy original settings */
+@@ -1036,10 +1031,9 @@ int generic_ide_ioctl(ide_drive_t *drive, struct file *file, struct block_device
+ 			drive->nice1 = (arg >> IDE_NICE_1) & 1;
+ 			return 0;
+ 		case HDIO_DRIVE_RESET:
+-		{
+-			unsigned long flags;
+-			if (!capable(CAP_SYS_ADMIN)) return -EACCES;
+-			
++			if (!capable(CAP_SYS_ADMIN))
++				return -EACCES;
++
+ 			/*
+ 			 *	Abort the current command on the
+ 			 *	group if there is one, taking
+@@ -1058,17 +1052,15 @@ int generic_ide_ioctl(ide_drive_t *drive, struct file *file, struct block_device
+ 			ide_abort(drive, "drive reset");
+ 
+ 			BUG_ON(HWGROUP(drive)->handler);
+-				
++
+ 			/* Ensure nothing gets queued after we
+ 			   drop the lock. Reset will clear the busy */
+-		   
++
+ 			HWGROUP(drive)->busy = 1;
+ 			spin_unlock_irqrestore(&ide_lock, flags);
+ 			(void) ide_do_reset(drive);
+ 
+ 			return 0;
+-		}
+-
+ 		case HDIO_GET_BUSSTATE:
+ 			if (!capable(CAP_SYS_ADMIN))
+ 				return -EACCES;
+@@ -1449,7 +1441,7 @@ static int __init ide_setup(char *s)
+ 
+ 			case -1: /* "noprobe" */
+ 				hwif->noprobe = 1;
+-				goto done;
++				goto obsolete_option;
+ 
+ 			case 1:	/* base */
+ 				vals[1] = vals[0] + 0x206; /* default ctl */
+diff --git a/drivers/ide/legacy/qd65xx.c b/drivers/ide/legacy/qd65xx.c
+index bba29df..2f4f47a 100644
+--- a/drivers/ide/legacy/qd65xx.c
++++ b/drivers/ide/legacy/qd65xx.c
+@@ -334,43 +334,6 @@ static void __init qd6580_port_init_devs(ide_hwif_t *hwif)
+ 	hwif->drives[1].drive_data = t2;
+ }
+ 
+-/*
+- * qd_unsetup:
+- *
+- * called to unsetup an ata channel : back to default values, unlinks tuning
+- */
+-/*
+-static void __exit qd_unsetup(ide_hwif_t *hwif)
+-{
+-	u8 config = hwif->config_data;
+-	int base = hwif->select_data;
+-	void *set_pio_mode = (void *)hwif->set_pio_mode;
+-
+-	if (hwif->chipset != ide_qd65xx)
+-		return;
+-
+-	printk(KERN_NOTICE "%s: back to defaults\n", hwif->name);
+-
+-	hwif->selectproc = NULL;
+-	hwif->set_pio_mode = NULL;
+-
+-	if (set_pio_mode == (void *)qd6500_set_pio_mode) {
+-		// will do it for both
+-		outb(QD6500_DEF_DATA, QD_TIMREG(&hwif->drives[0]));
+-	} else if (set_pio_mode == (void *)qd6580_set_pio_mode) {
+-		if (QD_CONTROL(hwif) & QD_CONTR_SEC_DISABLED) {
+-			outb(QD6580_DEF_DATA, QD_TIMREG(&hwif->drives[0]));
+-			outb(QD6580_DEF_DATA2, QD_TIMREG(&hwif->drives[1]));
+-		} else {
+-			outb(hwif->channel ? QD6580_DEF_DATA2 : QD6580_DEF_DATA, QD_TIMREG(&hwif->drives[0]));
+-		}
+-	} else {
+-		printk(KERN_WARNING "Unknown qd65xx tuning fonction !\n");
+-		printk(KERN_WARNING "keeping settings !\n");
+-	}
+-}
+-*/
+-
+ static const struct ide_port_info qd65xx_port_info __initdata = {
+ 	.chipset		= ide_qd65xx,
+ 	.host_flags		= IDE_HFLAG_IO_32BIT |
+@@ -444,6 +407,8 @@ static int __init qd_probe(int base)
+ 		printk(KERN_DEBUG "qd6580: config=%#x, control=%#x, ID3=%u\n",
+ 			config, control, QD_ID3);
+ 
++		outb(QD_DEF_CONTR, QD_CONTROL_PORT);
++
+ 		if (control & QD_CONTR_SEC_DISABLED) {
+ 			/* secondary disabled */
+ 
+@@ -460,8 +425,6 @@ static int __init qd_probe(int base)
+ 
+ 			ide_device_add(idx, &qd65xx_port_info);
+ 
+-			outb(QD_DEF_CONTR, QD_CONTROL_PORT);
+-
+ 			return 1;
+ 		} else {
+ 			ide_hwif_t *mate;
+@@ -487,8 +450,6 @@ static int __init qd_probe(int base)
+ 
+ 			ide_device_add(idx, &qd65xx_port_info);
+ 
+-			outb(QD_DEF_CONTR, QD_CONTROL_PORT);
+-
+ 			return 0; /* no other qd65xx possible */
+ 		}
+ 	}
+diff --git a/drivers/ide/pci/cmd640.c b/drivers/ide/pci/cmd640.c
+index bd24dad..ec66798 100644
+--- a/drivers/ide/pci/cmd640.c
++++ b/drivers/ide/pci/cmd640.c
+@@ -787,7 +787,8 @@ static int __init cmd640x_init(void)
+ 	/*
+ 	 * Try to enable the secondary interface, if not already enabled
+ 	 */
+-	if (cmd_hwif1->noprobe) {
++	if (cmd_hwif1->noprobe ||
++	    (cmd_hwif1->drives[0].noprobe && cmd_hwif1->drives[1].noprobe)) {
+ 		port2 = "not probed";
+ 	} else {
+ 		b = get_cmd640_reg(CNTRL);
+diff --git a/drivers/ide/pci/hpt366.c b/drivers/ide/pci/hpt366.c
+index d0f7bb8..6357bb6 100644
+--- a/drivers/ide/pci/hpt366.c
++++ b/drivers/ide/pci/hpt366.c
+@@ -1570,10 +1570,12 @@ static int __devinit hpt366_init_one(struct pci_dev *dev, const struct pci_devic
+ 		if (rev < 3)
+ 			info = &hpt36x;
+ 		else {
+-			static const struct hpt_info *hpt37x_info[] =
+-				{ &hpt370, &hpt370a, &hpt372, &hpt372n };
+-
+-			info = hpt37x_info[min_t(u8, rev, 6) - 3];
++			switch (min_t(u8, rev, 6)) {
++			case 3: info = &hpt370;  break;
++			case 4: info = &hpt370a; break;
++			case 5: info = &hpt372;  break;
++			case 6: info = &hpt372n; break;
++			}
+ 			idx++;
+ 		}
+ 		break;
+@@ -1626,7 +1628,7 @@ static int __devinit hpt366_init_one(struct pci_dev *dev, const struct pci_devic
+ 	return ide_setup_pci_device(dev, &d);
+ }
+ 
+-static const struct pci_device_id hpt366_pci_tbl[] = {
++static const struct pci_device_id hpt366_pci_tbl[] __devinitconst = {
+ 	{ PCI_VDEVICE(TTI, PCI_DEVICE_ID_TTI_HPT366),  0 },
+ 	{ PCI_VDEVICE(TTI, PCI_DEVICE_ID_TTI_HPT372),  1 },
+ 	{ PCI_VDEVICE(TTI, PCI_DEVICE_ID_TTI_HPT302),  2 },
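
The hpt366.c change maps the capped PCI revision to a chip descriptor with a switch instead of a static pointer array. Here is a standalone sketch of that mapping with placeholder descriptors; the real hpt_info structures carry timing data, not names.

    #include <stdio.h>

    struct hpt_info { const char *chip; };

    static const struct hpt_info hpt370  = { "HPT370"  };
    static const struct hpt_info hpt370a = { "HPT370A" };
    static const struct hpt_info hpt372  = { "HPT372"  };
    static const struct hpt_info hpt372n = { "HPT372N" };

    static const struct hpt_info *rev_to_info(unsigned char rev)
    {
            if (rev > 6)
                    rev = 6;            /* min_t(u8, rev, 6) analogue */
            switch (rev) {
            case 3: return &hpt370;
            case 4: return &hpt370a;
            case 5: return &hpt372;
            case 6: return &hpt372n;
            }
            return NULL;                /* rev < 3 is handled separately (hpt36x) */
    }

    int main(void)
    {
            unsigned char rev;

            for (rev = 3; rev <= 8; rev++)
                    printf("rev %u -> %s\n", rev, rev_to_info(rev)->chip);
            return 0;
    }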
+diff --git a/drivers/ieee1394/sbp2.c b/drivers/ieee1394/sbp2.c
+index 28e155a..9e2b196 100644
+--- a/drivers/ieee1394/sbp2.c
++++ b/drivers/ieee1394/sbp2.c
+@@ -183,6 +183,9 @@ MODULE_PARM_DESC(exclusive_login, "Exclusive login to sbp2 device "
+  *   Avoids access beyond actual disk limits on devices with an off-by-one bug.
+  *   Don't use this with devices which don't have this bug.
+  *
++ * - delay inquiry
++ *   Wait extra SBP2_INQUIRY_DELAY seconds after login before SCSI inquiry.
++ *
+  * - override internal blacklist
+  *   Instead of adding to the built-in blacklist, use only the workarounds
+  *   specified in the module load parameter.
+@@ -195,6 +198,7 @@ MODULE_PARM_DESC(workarounds, "Work around device bugs (default = 0"
+ 	", 36 byte inquiry = "    __stringify(SBP2_WORKAROUND_INQUIRY_36)
+ 	", skip mode page 8 = "   __stringify(SBP2_WORKAROUND_MODE_SENSE_8)
+ 	", fix capacity = "       __stringify(SBP2_WORKAROUND_FIX_CAPACITY)
++	", delay inquiry = "      __stringify(SBP2_WORKAROUND_DELAY_INQUIRY)
+ 	", override internal blacklist = " __stringify(SBP2_WORKAROUND_OVERRIDE)
+ 	", or a combination)");
+ 
+@@ -357,6 +361,11 @@ static const struct {
+ 		.workarounds		= SBP2_WORKAROUND_INQUIRY_36 |
+ 					  SBP2_WORKAROUND_MODE_SENSE_8,
+ 	},
++	/* DViCO Momobay FX-3A with TSB42AA9A bridge */ {
++		.firmware_revision	= 0x002800,
++		.model_id		= 0x000000,
++		.workarounds		= SBP2_WORKAROUND_DELAY_INQUIRY,
++	},
+ 	/* Initio bridges, actually only needed for some older ones */ {
+ 		.firmware_revision	= 0x000200,
+ 		.model_id		= SBP2_ROM_VALUE_WILDCARD,
+@@ -914,6 +923,9 @@ static int sbp2_start_device(struct sbp2_lu *lu)
+ 	sbp2_agent_reset(lu, 1);
+ 	sbp2_max_speed_and_size(lu);
+ 
++	if (lu->workarounds & SBP2_WORKAROUND_DELAY_INQUIRY)
++		ssleep(SBP2_INQUIRY_DELAY);
++
+ 	error = scsi_add_device(lu->shost, 0, lu->ud->id, 0);
+ 	if (error) {
+ 		SBP2_ERR("scsi_add_device failed");
+@@ -1962,6 +1974,9 @@ static int sbp2scsi_slave_alloc(struct scsi_device *sdev)
+ {
+ 	struct sbp2_lu *lu = (struct sbp2_lu *)sdev->host->hostdata[0];
+ 
++	if (sdev->lun != 0 || sdev->id != lu->ud->id || sdev->channel != 0)
++		return -ENODEV;
++
+ 	lu->sdev = sdev;
+ 	sdev->allow_restart = 1;
+ 
+diff --git a/drivers/ieee1394/sbp2.h b/drivers/ieee1394/sbp2.h
+index d2ecb0d..80d8e09 100644
+--- a/drivers/ieee1394/sbp2.h
++++ b/drivers/ieee1394/sbp2.h
+@@ -343,6 +343,8 @@ enum sbp2lu_state_types {
+ #define SBP2_WORKAROUND_INQUIRY_36	0x2
+ #define SBP2_WORKAROUND_MODE_SENSE_8	0x4
+ #define SBP2_WORKAROUND_FIX_CAPACITY	0x8
++#define SBP2_WORKAROUND_DELAY_INQUIRY	0x10
++#define SBP2_INQUIRY_DELAY		12
+ #define SBP2_WORKAROUND_OVERRIDE	0x100
+ 
+ #endif /* SBP2_H */
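
Both SBP-2 drivers gain the delay-inquiry workaround: a new flag bit plus a 12 second SBP2_INQUIRY_DELAY, applied to the DViCO Momobay FX-3A blacklist entry and documented in the module-parameter help. A toy decoder for the workarounds bitmask; the flag values are copied from the hunks above, everything else is illustrative.

    #include <stdio.h>

    #define SBP2_WORKAROUND_INQUIRY_36      0x2
    #define SBP2_WORKAROUND_MODE_SENSE_8    0x4
    #define SBP2_WORKAROUND_FIX_CAPACITY    0x8
    #define SBP2_WORKAROUND_DELAY_INQUIRY   0x10
    #define SBP2_WORKAROUND_OVERRIDE        0x100

    static void print_workarounds(unsigned int w)
    {
            printf("workarounds 0x%x:%s%s%s%s%s\n", w,
                   w & SBP2_WORKAROUND_INQUIRY_36    ? " 36-byte-inquiry" : "",
                   w & SBP2_WORKAROUND_MODE_SENSE_8  ? " skip-mode-page-8" : "",
                   w & SBP2_WORKAROUND_FIX_CAPACITY  ? " fix-capacity" : "",
                   w & SBP2_WORKAROUND_DELAY_INQUIRY ? " delay-inquiry" : "",
                   w & SBP2_WORKAROUND_OVERRIDE      ? " override-blacklist" : "");
    }

    int main(void)
    {
            /* e.g. the DViCO Momobay FX-3A entry added above */
            print_workarounds(SBP2_WORKAROUND_DELAY_INQUIRY);
            print_workarounds(SBP2_WORKAROUND_INQUIRY_36 | SBP2_WORKAROUND_MODE_SENSE_8);
            return 0;
    }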
+diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c
+index 73bfd16..b8797c6 100644
+--- a/drivers/infiniband/hw/cxgb3/iwch_mem.c
++++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c
+@@ -136,14 +136,8 @@ int build_phys_page_list(struct ib_phys_buf *buffer_list,
+ 
+ 	/* Find largest page shift we can use to cover buffers */
+ 	for (*shift = PAGE_SHIFT; *shift < 27; ++(*shift))
+-		if (num_phys_buf > 1) {
+-			if ((1ULL << *shift) & mask)
+-				break;
+-		} else
+-			if (1ULL << *shift >=
+-			    buffer_list[0].size +
+-			    (buffer_list[0].addr & ((1ULL << *shift) - 1)))
+-				break;
++		if ((1ULL << *shift) & mask)
++			break;
+ 
+ 	buffer_list[0].size += buffer_list[0].addr & ((1ULL << *shift) - 1);
+ 	buffer_list[0].addr &= ~0ull << *shift;
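
The iwch_mem.c hunk reduces the page-shift search to a single alignment test: mask, built earlier in the function from the buffer start and end addresses, has its lowest set bit at the largest alignment they all share, so the loop simply stops at the first set bit at or above PAGE_SHIFT. A minimal demonstration with a hand-built mask (the addresses are made up):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    /* lowest set bit of mask at or above PAGE_SHIFT bounds the usable page size */
    static int largest_page_shift(uint64_t mask)
    {
            int shift;

            for (shift = PAGE_SHIFT; shift < 27; shift++)    /* same cap as the driver */
                    if ((1ULL << shift) & mask)
                            break;
            return shift;
    }

    int main(void)
    {
            /* two buffers: 0x20000 bytes at 0x10000 and 0x10000 bytes at 0x200000 */
            uint64_t mask = 0x10000 | (0x10000 + 0x20000) |
                            0x200000 | (0x200000 + 0x10000);

            printf("largest usable page shift: %d (page size %u)\n",
                   largest_page_shift(mask), 1u << largest_page_shift(mask));
            return 0;
    }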
+diff --git a/drivers/infiniband/hw/nes/nes.c b/drivers/infiniband/hw/nes/nes.c
+index 7f8853b..b2112f5 100644
+--- a/drivers/infiniband/hw/nes/nes.c
++++ b/drivers/infiniband/hw/nes/nes.c
+@@ -567,12 +567,12 @@ static int __devinit nes_probe(struct pci_dev *pcidev, const struct pci_device_i
+ 
+ 	/* Init the adapter */
+ 	nesdev->nesadapter = nes_init_adapter(nesdev, hw_rev);
+-	nesdev->nesadapter->et_rx_coalesce_usecs_irq = interrupt_mod_interval;
+ 	if (!nesdev->nesadapter) {
+ 		printk(KERN_ERR PFX "Unable to initialize adapter.\n");
+ 		ret = -ENOMEM;
+ 		goto bail5;
+ 	}
++	nesdev->nesadapter->et_rx_coalesce_usecs_irq = interrupt_mod_interval;
+ 
+ 	/* nesdev->base_doorbell_index =
+ 			nesdev->nesadapter->pd_config_base[PCI_FUNC(nesdev->pcidev->devfn)]; */
+diff --git a/drivers/infiniband/hw/nes/nes.h b/drivers/infiniband/hw/nes/nes.h
+index fd57e8a..a48b288 100644
+--- a/drivers/infiniband/hw/nes/nes.h
++++ b/drivers/infiniband/hw/nes/nes.h
+@@ -285,6 +285,21 @@ struct nes_device {
+ };
+ 
+ 
++static inline __le32 get_crc_value(struct nes_v4_quad *nes_quad)
++{
++	u32 crc_value;
++	crc_value = crc32c(~0, (void *)nes_quad, sizeof (struct nes_v4_quad));
++
++	/*
++	 * With commit ef19454b ("[LIB] crc32c: Keep intermediate crc
++	 * state in cpu order"), behavior of crc32c changes on
++	 * big-endian platforms.  Our algorithm expects the previous
++	 * behavior; otherwise we have RDMA connection establishment
++	 * issue on big-endian.
++	 * issues on big-endian.
++	return cpu_to_le32(crc_value);
++}
++
+ static inline void
+ set_wqe_64bit_value(__le32 *wqe_words, u32 index, u64 value)
+ {
+diff --git a/drivers/infiniband/hw/nes/nes_cm.c b/drivers/infiniband/hw/nes/nes_cm.c
+index bd5cfea..39adb26 100644
+--- a/drivers/infiniband/hw/nes/nes_cm.c
++++ b/drivers/infiniband/hw/nes/nes_cm.c
+@@ -370,11 +370,11 @@ int schedule_nes_timer(struct nes_cm_node *cm_node, struct sk_buff *skb,
+ 	int ret = 0;
+ 	u32 was_timer_set;
+ 
++	if (!cm_node)
++		return -EINVAL;
+ 	new_send = kzalloc(sizeof(*new_send), GFP_ATOMIC);
+ 	if (!new_send)
+ 		return -1;
+-	if (!cm_node)
+-		return -EINVAL;
+ 
+ 	/* new_send->timetosend = currenttime */
+ 	new_send->retrycount = NES_DEFAULT_RETRYS;
+@@ -947,6 +947,7 @@ static int mini_cm_dec_refcnt_listen(struct nes_cm_core *cm_core,
+ 		nes_debug(NES_DBG_CM, "destroying listener (%p)\n", listener);
+ 
+ 		kfree(listener);
++		listener = NULL;
+ 		ret = 0;
+ 		cm_listens_destroyed++;
+ 	} else {
+@@ -2319,6 +2320,7 @@ int nes_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ 	struct iw_cm_event cm_event;
+ 	struct nes_hw_qp_wqe *wqe;
+ 	struct nes_v4_quad nes_quad;
++	u32 crc_value;
+ 	int ret;
+ 
+ 	ibqp = nes_get_qp(cm_id->device, conn_param->qpn);
+@@ -2435,8 +2437,8 @@ int nes_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ 	nes_quad.TcpPorts[1]   = cm_id->local_addr.sin_port;
+ 
+ 	/* Produce hash key */
+-	nesqp->hte_index = cpu_to_be32(
+-			crc32c(~0, (void *)&nes_quad, sizeof(nes_quad)) ^ 0xffffffff);
++	crc_value = get_crc_value(&nes_quad);
++	nesqp->hte_index = cpu_to_be32(crc_value ^ 0xffffffff);
+ 	nes_debug(NES_DBG_CM, "HTE Index = 0x%08X, CRC = 0x%08X\n",
+ 			nesqp->hte_index, nesqp->hte_index & adapter->hte_index_mask);
+ 
+@@ -2750,6 +2752,7 @@ void cm_event_connected(struct nes_cm_event *event)
+ 	struct iw_cm_event cm_event;
+ 	struct nes_hw_qp_wqe *wqe;
+ 	struct nes_v4_quad nes_quad;
++	u32 crc_value;
+ 	int ret;
+ 
+ 	/* get all our handles */
+@@ -2827,8 +2830,8 @@ void cm_event_connected(struct nes_cm_event *event)
+ 	nes_quad.TcpPorts[1] = cm_id->local_addr.sin_port;
+ 
+ 	/* Produce hash key */
+-	nesqp->hte_index = cpu_to_be32(
+-			crc32c(~0, (void *)&nes_quad, sizeof(nes_quad)) ^ 0xffffffff);
++	crc_value = get_crc_value(&nes_quad);
++	nesqp->hte_index = cpu_to_be32(crc_value ^ 0xffffffff);
+ 	nes_debug(NES_DBG_CM, "HTE Index = 0x%08X, After CRC = 0x%08X\n",
+ 			nesqp->hte_index, nesqp->hte_index & nesadapter->hte_index_mask);
+ 
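
get_crc_value() in nes.h pins the byte order of the crc32c result before it goes into the hash-key computation above, compensating on big-endian hosts for the cited library change. The sketch below uses a generic bitwise CRC-32C (Castagnoli polynomial, reflected, returning the raw state like the kernel helper, with the caller applying the final XOR) over a placeholder connection 4-tuple; it is not the kernel routine, and the quad layout is made up.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* bitwise CRC-32C, reflected, returns the raw (non-inverted) state */
    static uint32_t crc32c_sw(uint32_t crc, const void *buf, size_t len)
    {
            const unsigned char *p = buf;
            int k;

            while (len--) {
                    crc ^= *p++;
                    for (k = 0; k < 8; k++)
                            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
            }
            return crc;
    }

    /* fixed little-endian byte order, the role cpu_to_le32() plays above */
    static void put_le32(unsigned char out[4], uint32_t v)
    {
            out[0] = v & 0xff;
            out[1] = (v >> 8) & 0xff;
            out[2] = (v >> 16) & 0xff;
            out[3] = (v >> 24) & 0xff;
    }

    int main(void)
    {
            /* placeholder connection 4-tuple; the real nes_v4_quad layout differs */
            unsigned char quad[12] = {
                    192, 168, 0, 1,          /* source IP */
                    192, 168, 0, 2,          /* destination IP */
                    0x04, 0xd2, 0x16, 0x2e   /* ports 1234 and 5678 */
            };
            uint32_t crc = crc32c_sw(0xffffffffu, quad, sizeof(quad));
            unsigned char le[4];

            put_le32(le, crc ^ 0xffffffffu);   /* final XOR, as the caller does */
            printf("hash key bytes: %02x %02x %02x %02x\n",
                   le[0], le[1], le[2], le[3]);
            return 0;
    }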
+diff --git a/drivers/infiniband/hw/nes/nes_hw.c b/drivers/infiniband/hw/nes/nes_hw.c
+index 7c4c0fb..49e53e4 100644
+--- a/drivers/infiniband/hw/nes/nes_hw.c
++++ b/drivers/infiniband/hw/nes/nes_hw.c
+@@ -156,15 +156,14 @@ static void nes_nic_tune_timer(struct nes_device *nesdev)
+ 
+ 	spin_lock_irqsave(&nesadapter->periodic_timer_lock, flags);
+ 
+-	if (shared_timer->cq_count_old < cq_count) {
+-		if (cq_count > shared_timer->threshold_low)
+-			shared_timer->cq_direction_downward=0;
+-	}
+-	if (shared_timer->cq_count_old >= cq_count)
++	if (shared_timer->cq_count_old <= cq_count)
++		shared_timer->cq_direction_downward = 0;
++	else
+ 		shared_timer->cq_direction_downward++;
+ 	shared_timer->cq_count_old = cq_count;
+ 	if (shared_timer->cq_direction_downward > NES_NIC_CQ_DOWNWARD_TREND) {
+-		if (cq_count <= shared_timer->threshold_low) {
++		if (cq_count <= shared_timer->threshold_low &&
++		    shared_timer->threshold_low > 4) {
+ 			shared_timer->threshold_low = shared_timer->threshold_low/2;
+ 			shared_timer->cq_direction_downward=0;
+ 			nesdev->currcq_count = 0;
+@@ -1728,7 +1727,6 @@ int nes_napi_isr(struct nes_device *nesdev)
+ 			nesdev->int_req &= ~NES_INT_TIMER;
+ 			nes_write32(nesdev->regs+NES_INTF_INT_MASK, ~(nesdev->intf_int_req));
+ 			nes_write32(nesdev->regs+NES_INT_MASK, ~nesdev->int_req);
+-			nesadapter->tune_timer.timer_in_use_old = 0;
+ 		}
+ 		nesdev->deepcq_count = 0;
+ 		return 1;
+@@ -1867,7 +1865,6 @@ void nes_dpc(unsigned long param)
+ 					nesdev->int_req &= ~NES_INT_TIMER;
+ 					nes_write32(nesdev->regs + NES_INTF_INT_MASK, ~(nesdev->intf_int_req));
+ 					nes_write32(nesdev->regs+NES_INT_MASK, ~nesdev->int_req);
+-					nesdev->nesadapter->tune_timer.timer_in_use_old = 0;
+ 				} else {
+ 					nes_write32(nesdev->regs+NES_INT_MASK, 0x0000ffff|(~nesdev->int_req));
+ 				}
+diff --git a/drivers/infiniband/hw/nes/nes_hw.h b/drivers/infiniband/hw/nes/nes_hw.h
+index 1e10df5..b7e2844 100644
+--- a/drivers/infiniband/hw/nes/nes_hw.h
++++ b/drivers/infiniband/hw/nes/nes_hw.h
+@@ -962,7 +962,7 @@ struct nes_arp_entry {
+ #define DEFAULT_JUMBO_NES_QL_LOW    12
+ #define DEFAULT_JUMBO_NES_QL_TARGET 40
+ #define DEFAULT_JUMBO_NES_QL_HIGH   128
+-#define NES_NIC_CQ_DOWNWARD_TREND   8
++#define NES_NIC_CQ_DOWNWARD_TREND   16
+ 
+ struct nes_hw_tune_timer {
+     //u16 cq_count;
+diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
+index 4dafbe1..a651e9d 100644
+--- a/drivers/infiniband/hw/nes/nes_verbs.c
++++ b/drivers/infiniband/hw/nes/nes_verbs.c
+@@ -929,7 +929,7 @@ static struct ib_pd *nes_alloc_pd(struct ib_device *ibdev,
+ 				NES_MAX_USER_DB_REGIONS, nesucontext->first_free_db);
+ 		nes_debug(NES_DBG_PD, "find_first_zero_biton doorbells returned %u, mapping pd_id %u.\n",
+ 				nespd->mmap_db_index, nespd->pd_id);
+-		if (nespd->mmap_db_index > NES_MAX_USER_DB_REGIONS) {
++		if (nespd->mmap_db_index >= NES_MAX_USER_DB_REGIONS) {
+ 			nes_debug(NES_DBG_PD, "mmap_db_index > MAX\n");
+ 			nes_free_resource(nesadapter, nesadapter->allocated_pds, pd_num);
+ 			kfree(nespd);
+@@ -1327,7 +1327,7 @@ static struct ib_qp *nes_create_qp(struct ib_pd *ibpd,
+ 								  (long long unsigned int)req.user_wqe_buffers);
+ 							nes_free_resource(nesadapter, nesadapter->allocated_qps, qp_num);
+ 							kfree(nesqp->allocated_buffer);
+-							return ERR_PTR(-ENOMEM);
++							return ERR_PTR(-EFAULT);
+ 						}
+ 					}
+ 
+@@ -1674,6 +1674,7 @@ static struct ib_cq *nes_create_cq(struct ib_device *ibdev, int entries,
+ 		}
+ 		nes_debug(NES_DBG_CQ, "CQ Virtual Address = %08lX, size = %u.\n",
+ 				(unsigned long)req.user_cq_buffer, entries);
++		err = 1;
+ 		list_for_each_entry(nespbl, &nes_ucontext->cq_reg_mem_list, list) {
+ 			if (nespbl->user_base == (unsigned long )req.user_cq_buffer) {
+ 				list_del(&nespbl->list);
+@@ -1686,7 +1687,7 @@ static struct ib_cq *nes_create_cq(struct ib_device *ibdev, int entries,
+ 		if (err) {
+ 			nes_free_resource(nesadapter, nesadapter->allocated_cqs, cq_num);
+ 			kfree(nescq);
+-			return ERR_PTR(err);
++			return ERR_PTR(-EFAULT);
+ 		}
+ 
+ 		pbl_entries = nespbl->pbl_size >> 3;
+@@ -1831,9 +1832,6 @@ static struct ib_cq *nes_create_cq(struct ib_device *ibdev, int entries,
+ 				spin_unlock_irqrestore(&nesdev->cqp.lock, flags);
+ 			}
+ 		}
+-		nes_debug(NES_DBG_CQ, "iWARP CQ%u create timeout expired, major code = 0x%04X,"
+-				" minor code = 0x%04X\n",
+-				nescq->hw_cq.cq_number, cqp_request->major_code, cqp_request->minor_code);
+ 		if (!context)
+ 			pci_free_consistent(nesdev->pcidev, nescq->cq_mem_size, mem,
+ 					nescq->hw_cq.cq_pbase);
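[editor's note -- illustration only, not part of the patch: the nes_alloc_pd hunk above tightens a bounds check from '>' to '>='. The point is that a find_first_zero_bit()-style scan over N slots reports "nothing free" by returning N itself, so N must already count as out of range. A minimal standalone sketch of that off-by-one, with an assumed open-coded scan standing in for the kernel helper:

#include <stdio.h>

#define NUM_SLOTS 4

/* stand-in for find_first_zero_bit(): returns n when every slot is used */
static unsigned find_first_zero(const unsigned char *used, unsigned n)
{
	for (unsigned i = 0; i < n; i++)
		if (!used[i])
			return i;
	return n;
}

int main(void)
{
	unsigned char used[NUM_SLOTS] = { 1, 1, 1, 1 };	/* all taken */
	unsigned idx = find_first_zero(used, NUM_SLOTS);

	if (idx >= NUM_SLOTS)	/* '>' would wrongly accept idx == NUM_SLOTS */
		printf("no free slot (idx %u)\n", idx);
	else
		printf("using slot %u\n", idx);
	return 0;
}
]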
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index f337800..a0f0e60 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -90,6 +90,11 @@ config MACVLAN
+ 	  This allows one to create virtual interfaces that map packets to
+ 	  or from specific MAC addresses to a particular interface.
+ 
++	  Macvlan devices can be added using the "ip" command from the
++	  iproute2 package starting with the iproute2-2.6.23 release:
++
++	  "ip link add link <real dev> [ address MAC ] [ NAME ] type macvlan"
++
+ 	  To compile this driver as a module, choose M here: the module
+ 	  will be called macvlan.
+ 
+@@ -2363,6 +2368,7 @@ config GELIC_NET
+ config GELIC_WIRELESS
+        bool "PS3 Wireless support"
+        depends on GELIC_NET
++       select WIRELESS_EXT
+        help
+         This option adds the support for the wireless feature of PS3.
+         If you have the wireless-less model of PS3 or have no plan to
+diff --git a/drivers/net/bnx2x.c b/drivers/net/bnx2x.c
+index afc7f34..8af142c 100644
+--- a/drivers/net/bnx2x.c
++++ b/drivers/net/bnx2x.c
+@@ -1,6 +1,6 @@
+ /* bnx2x.c: Broadcom Everest network driver.
+  *
+- * Copyright (c) 2007 Broadcom Corporation
++ * Copyright (c) 2007-2008 Broadcom Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License as published by
+@@ -10,13 +10,13 @@
+  * Based on code from Michael Chan's bnx2 driver
+  * UDP CSUM errata workaround by Arik Gendelman
+  * Slowpath rework by Vladislav Zolotarov
+- * Statistics and Link managment by Yitchak Gertner
++ * Statistics and Link management by Yitchak Gertner
+  *
+  */
+ 
+ /* define this to make the driver freeze on error
+  * to allow getting debug info
+- * (you will need to reboot afterwords)
++ * (you will need to reboot afterwards)
+  */
+ /*#define BNX2X_STOP_ON_ERROR*/
+ 
+@@ -63,22 +63,21 @@
+ #include "bnx2x.h"
+ #include "bnx2x_init.h"
+ 
+-#define DRV_MODULE_VERSION      "0.40.15"
+-#define DRV_MODULE_RELDATE      "$DateTime: 2007/11/15 07:28:37 $"
+-#define BNX2X_BC_VER    	0x040009
++#define DRV_MODULE_VERSION      "1.40.22"
++#define DRV_MODULE_RELDATE      "2007/11/27"
++#define BNX2X_BC_VER    	0x040200
+ 
+ /* Time in jiffies before concluding the transmitter is hung. */
+ #define TX_TIMEOUT      	(5*HZ)
+ 
+ static char version[] __devinitdata =
+-	"Broadcom NetXtreme II 577xx 10Gigabit Ethernet Driver "
++	"Broadcom NetXtreme II 5771X 10Gigabit Ethernet Driver "
+ 	DRV_MODULE_NAME " " DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
+ 
+ MODULE_AUTHOR("Eliezer Tamir <eliezert@broadcom.com>");
+ MODULE_DESCRIPTION("Broadcom NetXtreme II BCM57710 Driver");
+ MODULE_LICENSE("GPL");
+ MODULE_VERSION(DRV_MODULE_VERSION);
+-MODULE_INFO(cvs_version, "$Revision: #356 $");
+ 
+ static int use_inta;
+ static int poll;
+@@ -94,8 +93,8 @@ module_param(debug, int, 0);
+ MODULE_PARM_DESC(use_inta, "use INT#A instead of MSI-X");
+ MODULE_PARM_DESC(poll, "use polling (for debug)");
+ MODULE_PARM_DESC(onefunc, "enable only first function");
+-MODULE_PARM_DESC(nomcp, "ignore managment CPU (Implies onefunc)");
+-MODULE_PARM_DESC(debug, "defualt debug msglevel");
++MODULE_PARM_DESC(nomcp, "ignore management CPU (Implies onefunc)");
++MODULE_PARM_DESC(debug, "default debug msglevel");
+ 
+ #ifdef BNX2X_MULTI
+ module_param(use_multi, int, 0);
+@@ -298,8 +297,7 @@ static void bnx2x_read_dmae(struct bnx2x *bp, u32 src_addr, u32 len32)
+ 
+ static int bnx2x_mc_assert(struct bnx2x *bp)
+ {
+-	int i, j;
+-	int rc = 0;
++	int i, j, rc = 0;
+ 	char last_idx;
+ 	const char storm[] = {"XTCU"};
+ 	const u32 intmem_base[] = {
+@@ -313,8 +311,9 @@ static int bnx2x_mc_assert(struct bnx2x *bp)
+ 	for (i = 0; i < 4; i++) {
+ 		last_idx = REG_RD8(bp, XSTORM_ASSERT_LIST_INDEX_OFFSET +
+ 				   intmem_base[i]);
+-		BNX2X_ERR("DATA %cSTORM_ASSERT_LIST_INDEX 0x%x\n",
+-			  storm[i], last_idx);
++		if (last_idx)
++			BNX2X_LOG("DATA %cSTORM_ASSERT_LIST_INDEX 0x%x\n",
++				  storm[i], last_idx);
+ 
+ 		/* print the asserts */
+ 		for (j = 0; j < STROM_ASSERT_ARRAY_SIZE; j++) {
+@@ -330,7 +329,7 @@ static int bnx2x_mc_assert(struct bnx2x *bp)
+ 				      intmem_base[i]);
+ 
+ 			if (row0 != COMMON_ASM_INVALID_ASSERT_OPCODE) {
+-				BNX2X_ERR("DATA %cSTORM_ASSERT_INDEX 0x%x ="
++				BNX2X_LOG("DATA %cSTORM_ASSERT_INDEX 0x%x ="
+ 					  " 0x%08x 0x%08x 0x%08x 0x%08x\n",
+ 					  storm[i], j, row3, row2, row1, row0);
+ 				rc++;
+@@ -341,6 +340,7 @@ static int bnx2x_mc_assert(struct bnx2x *bp)
+ 	}
+ 	return rc;
+ }
++
+ static void bnx2x_fw_dump(struct bnx2x *bp)
+ {
+ 	u32 mark, offset;
+@@ -348,21 +348,22 @@ static void bnx2x_fw_dump(struct bnx2x *bp)
+ 	int word;
+ 
+ 	mark = REG_RD(bp, MCP_REG_MCPR_SCRATCH + 0xf104);
+-	printk(KERN_ERR PFX "begin fw dump (mark 0x%x)\n", mark);
++	mark = ((mark + 0x3) & ~0x3);
++	printk(KERN_ERR PFX "begin fw dump (mark 0x%x)\n" KERN_ERR, mark);
+ 
+ 	for (offset = mark - 0x08000000; offset <= 0xF900; offset += 0x8*4) {
+ 		for (word = 0; word < 8; word++)
+ 			data[word] = htonl(REG_RD(bp, MCP_REG_MCPR_SCRATCH +
+ 						  offset + 4*word));
+ 		data[8] = 0x0;
+-		printk(KERN_ERR PFX "%s", (char *)data);
++		printk(KERN_CONT "%s", (char *)data);
+ 	}
+ 	for (offset = 0xF108; offset <= mark - 0x08000000; offset += 0x8*4) {
+ 		for (word = 0; word < 8; word++)
+ 			data[word] = htonl(REG_RD(bp, MCP_REG_MCPR_SCRATCH +
+ 						  offset + 4*word));
+ 		data[8] = 0x0;
+-		printk(KERN_ERR PFX "%s", (char *)data);
++		printk(KERN_CONT "%s", (char *)data);
+ 	}
+ 	printk("\n" KERN_ERR PFX "end of fw dump\n");
+ }
+@@ -427,10 +428,10 @@ static void bnx2x_panic_dump(struct bnx2x *bp)
+ 		}
+ 	}
+ 
+-	BNX2X_ERR("def_c_idx(%u)  def_u_idx(%u)  def_t_idx(%u)"
+-		  "  def_x_idx(%u)  def_att_idx(%u)  attn_state(%u)"
++	BNX2X_ERR("def_c_idx(%u)  def_u_idx(%u)  def_x_idx(%u)"
++		  "  def_t_idx(%u)  def_att_idx(%u)  attn_state(%u)"
+ 		  "  spq_prod_idx(%u)\n",
+-		  bp->def_c_idx, bp->def_u_idx, bp->def_t_idx, bp->def_x_idx,
++		  bp->def_c_idx, bp->def_u_idx, bp->def_x_idx, bp->def_t_idx,
+ 		  bp->def_att_idx, bp->attn_state, bp->spq_prod_idx);
+ 
+ 
+@@ -441,7 +442,7 @@ static void bnx2x_panic_dump(struct bnx2x *bp)
+ 	DP(BNX2X_MSG_STATS, "stats_state - DISABLE\n");
+ }
+ 
+-static void bnx2x_enable_int(struct bnx2x *bp)
++static void bnx2x_int_enable(struct bnx2x *bp)
+ {
+ 	int port = bp->port;
+ 	u32 addr = port ? HC_REG_CONFIG_1 : HC_REG_CONFIG_0;
+@@ -454,18 +455,26 @@ static void bnx2x_enable_int(struct bnx2x *bp)
+ 			HC_CONFIG_0_REG_ATTN_BIT_EN_0);
+ 	} else {
+ 		val |= (HC_CONFIG_0_REG_SINGLE_ISR_EN_0 |
++			HC_CONFIG_0_REG_MSI_MSIX_INT_EN_0 |
+ 			HC_CONFIG_0_REG_INT_LINE_EN_0 |
+ 			HC_CONFIG_0_REG_ATTN_BIT_EN_0);
++
++		/* Errata A0.158 workaround */
++		DP(NETIF_MSG_INTR, "write %x to HC %d (addr 0x%x)  MSI-X %d\n",
++		   val, port, addr, msix);
++
++		REG_WR(bp, addr, val);
++
+ 		val &= ~HC_CONFIG_0_REG_MSI_MSIX_INT_EN_0;
+ 	}
+ 
+-	DP(NETIF_MSG_INTR, "write %x to HC %d (addr 0x%x)  msi %d\n",
++	DP(NETIF_MSG_INTR, "write %x to HC %d (addr 0x%x)  MSI-X %d\n",
+ 	   val, port, addr, msix);
+ 
+ 	REG_WR(bp, addr, val);
+ }
+ 
+-static void bnx2x_disable_int(struct bnx2x *bp)
++static void bnx2x_int_disable(struct bnx2x *bp)
+ {
+ 	int port = bp->port;
+ 	u32 addr = port ? HC_REG_CONFIG_1 : HC_REG_CONFIG_0;
+@@ -484,15 +493,15 @@ static void bnx2x_disable_int(struct bnx2x *bp)
+ 		BNX2X_ERR("BUG! proper val not read from IGU!\n");
+ }
+ 
+-static void bnx2x_disable_int_sync(struct bnx2x *bp)
++static void bnx2x_int_disable_sync(struct bnx2x *bp)
+ {
+ 
+ 	int msix = (bp->flags & USING_MSIX_FLAG) ? 1 : 0;
+ 	int i;
+ 
+ 	atomic_inc(&bp->intr_sem);
+-	/* prevent the HW from sending interrupts*/
+-	bnx2x_disable_int(bp);
++	/* prevent the HW from sending interrupts */
++	bnx2x_int_disable(bp);
+ 
+ 	/* make sure all ISRs are done */
+ 	if (msix) {
+@@ -775,6 +784,7 @@ static void bnx2x_sp_event(struct bnx2x_fastpath *fp,
+ 		mb(); /* force bnx2x_wait_ramrod to see the change */
+ 		return;
+ 	}
++
+ 	switch (command | bp->state) {
+ 	case (RAMROD_CMD_ID_ETH_PORT_SETUP | BNX2X_STATE_OPENING_WAIT4_PORT):
+ 		DP(NETIF_MSG_IFUP, "got setup ramrod\n");
+@@ -787,20 +797,20 @@ static void bnx2x_sp_event(struct bnx2x_fastpath *fp,
+ 		fp->state = BNX2X_FP_STATE_HALTED;
+ 		break;
+ 
+-	case (RAMROD_CMD_ID_ETH_PORT_DEL | BNX2X_STATE_CLOSING_WAIT4_DELETE):
+-		DP(NETIF_MSG_IFDOWN, "got delete ramrod\n");
+-		bp->state = BNX2X_STATE_CLOSING_WAIT4_UNLOAD;
+-		break;
+-
+ 	case (RAMROD_CMD_ID_ETH_CFC_DEL | BNX2X_STATE_CLOSING_WAIT4_HALT):
+-		DP(NETIF_MSG_IFDOWN, "got delete ramrod for MULTI[%d]\n", cid);
+-		bnx2x_fp(bp, cid, state) = BNX2X_FP_STATE_DELETED;
++		DP(NETIF_MSG_IFDOWN, "got delete ramrod for MULTI[%d]\n",
++		   cid);
++		bnx2x_fp(bp, cid, state) = BNX2X_FP_STATE_CLOSED;
+ 		break;
+ 
+ 	case (RAMROD_CMD_ID_ETH_SET_MAC | BNX2X_STATE_OPEN):
+ 		DP(NETIF_MSG_IFUP, "got set mac ramrod\n");
+ 		break;
+ 
++	case (RAMROD_CMD_ID_ETH_SET_MAC | BNX2X_STATE_CLOSING_WAIT4_HALT):
++		DP(NETIF_MSG_IFUP, "got (un)set mac ramrod\n");
++		break;
++
+ 	default:
+ 		BNX2X_ERR("unexpected ramrod (%d)  state is %x\n",
+ 			  command, bp->state);
+@@ -1179,12 +1189,175 @@ static u32 bnx2x_bits_dis(struct bnx2x *bp, u32 reg, u32 bits)
+ 	return val;
+ }
+ 
++static int bnx2x_hw_lock(struct bnx2x *bp, u32 resource)
++{
++	u32 cnt;
++	u32 lock_status;
++	u32 resource_bit = (1 << resource);
++	u8 func = bp->port;
++
++	/* Validating that the resource is within range */
++	if (resource > HW_LOCK_MAX_RESOURCE_VALUE) {
++		DP(NETIF_MSG_HW,
++		   "resource(0x%x) > HW_LOCK_MAX_RESOURCE_VALUE(0x%x)\n",
++		   resource, HW_LOCK_MAX_RESOURCE_VALUE);
++		return -EINVAL;
++	}
++
++	/* Validating that the resource is not already taken */
++	lock_status = REG_RD(bp, MISC_REG_DRIVER_CONTROL_1 + func*8);
++	if (lock_status & resource_bit) {
++		DP(NETIF_MSG_HW, "lock_status 0x%x  resource_bit 0x%x\n",
++		   lock_status, resource_bit);
++		return -EEXIST;
++	}
++
++	/* Try for 1 second every 5ms */
++	for (cnt = 0; cnt < 200; cnt++) {
++		/* Try to acquire the lock */
++		REG_WR(bp, MISC_REG_DRIVER_CONTROL_1 + func*8 + 4,
++		       resource_bit);
++		lock_status = REG_RD(bp, MISC_REG_DRIVER_CONTROL_1 + func*8);
++		if (lock_status & resource_bit)
++			return 0;
++
++		msleep(5);
++	}
++	DP(NETIF_MSG_HW, "Timeout\n");
++	return -EAGAIN;
++}
++
++static int bnx2x_hw_unlock(struct bnx2x *bp, u32 resource)
++{
++	u32 lock_status;
++	u32 resource_bit = (1 << resource);
++	u8 func = bp->port;
++
++	/* Validating that the resource is within range */
++	if (resource > HW_LOCK_MAX_RESOURCE_VALUE) {
++		DP(NETIF_MSG_HW,
++		   "resource(0x%x) > HW_LOCK_MAX_RESOURCE_VALUE(0x%x)\n",
++		   resource, HW_LOCK_MAX_RESOURCE_VALUE);
++		return -EINVAL;
++	}
++
++	/* Validating that the resource is currently taken */
++	lock_status = REG_RD(bp, MISC_REG_DRIVER_CONTROL_1 + func*8);
++	if (!(lock_status & resource_bit)) {
++		DP(NETIF_MSG_HW, "lock_status 0x%x  resource_bit 0x%x\n",
++		   lock_status, resource_bit);
++		return -EFAULT;
++	}
++
++	REG_WR(bp, MISC_REG_DRIVER_CONTROL_1 + func*8, resource_bit);
++	return 0;
++}
++
++static int bnx2x_set_gpio(struct bnx2x *bp, int gpio_num, u32 mode)
++{
++	/* The GPIO should be swapped if swap register is set and active */
++	int gpio_port = (REG_RD(bp, NIG_REG_PORT_SWAP) &&
++			 REG_RD(bp, NIG_REG_STRAP_OVERRIDE)) ^ bp->port;
++	int gpio_shift = gpio_num +
++			(gpio_port ? MISC_REGISTERS_GPIO_PORT_SHIFT : 0);
++	u32 gpio_mask = (1 << gpio_shift);
++	u32 gpio_reg;
++
++	if (gpio_num > MISC_REGISTERS_GPIO_3) {
++		BNX2X_ERR("Invalid GPIO %d\n", gpio_num);
++		return -EINVAL;
++	}
++
++	bnx2x_hw_lock(bp, HW_LOCK_RESOURCE_GPIO);
++	/* read GPIO and mask except the float bits */
++	gpio_reg = (REG_RD(bp, MISC_REG_GPIO) & MISC_REGISTERS_GPIO_FLOAT);
++
++	switch (mode) {
++	case MISC_REGISTERS_GPIO_OUTPUT_LOW:
++		DP(NETIF_MSG_LINK, "Set GPIO %d (shift %d) -> output low\n",
++		   gpio_num, gpio_shift);
++		/* clear FLOAT and set CLR */
++		gpio_reg &= ~(gpio_mask << MISC_REGISTERS_GPIO_FLOAT_POS);
++		gpio_reg |=  (gpio_mask << MISC_REGISTERS_GPIO_CLR_POS);
++		break;
++
++	case MISC_REGISTERS_GPIO_OUTPUT_HIGH:
++		DP(NETIF_MSG_LINK, "Set GPIO %d (shift %d) -> output high\n",
++		   gpio_num, gpio_shift);
++		/* clear FLOAT and set SET */
++		gpio_reg &= ~(gpio_mask << MISC_REGISTERS_GPIO_FLOAT_POS);
++		gpio_reg |=  (gpio_mask << MISC_REGISTERS_GPIO_SET_POS);
++		break;
++
++	case MISC_REGISTERS_GPIO_INPUT_HI_Z :
++		DP(NETIF_MSG_LINK, "Set GPIO %d (shift %d) -> input\n",
++		   gpio_num, gpio_shift);
++		/* set FLOAT */
++		gpio_reg |= (gpio_mask << MISC_REGISTERS_GPIO_FLOAT_POS);
++		break;
++
++	default:
++		break;
++	}
++
++	REG_WR(bp, MISC_REG_GPIO, gpio_reg);
++	bnx2x_hw_unlock(bp, HW_LOCK_RESOURCE_GPIO);
++
++	return 0;
++}
++
++static int bnx2x_set_spio(struct bnx2x *bp, int spio_num, u32 mode)
++{
++	u32 spio_mask = (1 << spio_num);
++	u32 spio_reg;
++
++	if ((spio_num < MISC_REGISTERS_SPIO_4) ||
++	    (spio_num > MISC_REGISTERS_SPIO_7)) {
++		BNX2X_ERR("Invalid SPIO %d\n", spio_num);
++		return -EINVAL;
++	}
++
++	bnx2x_hw_lock(bp, HW_LOCK_RESOURCE_SPIO);
++	/* read SPIO and mask except the float bits */
++	spio_reg = (REG_RD(bp, MISC_REG_SPIO) & MISC_REGISTERS_SPIO_FLOAT);
++
++	switch (mode) {
++	case MISC_REGISTERS_SPIO_OUTPUT_LOW :
++		DP(NETIF_MSG_LINK, "Set SPIO %d -> output low\n", spio_num);
++		/* clear FLOAT and set CLR */
++		spio_reg &= ~(spio_mask << MISC_REGISTERS_SPIO_FLOAT_POS);
++		spio_reg |=  (spio_mask << MISC_REGISTERS_SPIO_CLR_POS);
++		break;
++
++	case MISC_REGISTERS_SPIO_OUTPUT_HIGH :
++		DP(NETIF_MSG_LINK, "Set SPIO %d -> output high\n", spio_num);
++		/* clear FLOAT and set SET */
++		spio_reg &= ~(spio_mask << MISC_REGISTERS_SPIO_FLOAT_POS);
++		spio_reg |=  (spio_mask << MISC_REGISTERS_SPIO_SET_POS);
++		break;
++
++	case MISC_REGISTERS_SPIO_INPUT_HI_Z:
++		DP(NETIF_MSG_LINK, "Set SPIO %d -> input\n", spio_num);
++		/* set FLOAT */
++		spio_reg |= (spio_mask << MISC_REGISTERS_SPIO_FLOAT_POS);
++		break;
++
++	default:
++		break;
++	}
++
++	REG_WR(bp, MISC_REG_SPIO, spio_reg);
++	bnx2x_hw_unlock(bp, HW_LOCK_RESOURCE_SPIO);
++
++	return 0;
++}
++
+ static int bnx2x_mdio22_write(struct bnx2x *bp, u32 reg, u32 val)
+ {
+-	int rc;
+-	u32 tmp, i;
+ 	int port = bp->port;
+ 	u32 emac_base = port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
++	u32 tmp;
++	int i, rc;
+ 
+ /*      DP(NETIF_MSG_HW, "phy_addr 0x%x  reg 0x%x  val 0x%08x\n",
+ 	   bp->phy_addr, reg, val); */
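[editor's note -- illustration only, not part of the patch: the new bnx2x_hw_lock()/bnx2x_hw_unlock() pair above arbitrates shared resources (GPIO, SPIO, the 8072 MDIO bus) through a per-port DRIVER_CONTROL register: write the resource bit to request it, read back to see whether the hardware granted it, and retry every 5 ms for up to one second. The userspace sketch below reproduces only that retry pattern; the "register" is simulated and all names are assumptions for the example.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static uint32_t fake_lock_reg;			/* stands in for DRIVER_CONTROL_1 */

static uint32_t reg_rd(void)
{
	return fake_lock_reg;
}

static void reg_wr_request(uint32_t bit)	/* the HW grants the bit only if it is free */
{
	if (!(fake_lock_reg & bit))
		fake_lock_reg |= bit;
}

static int hw_lock(unsigned resource)
{
	uint32_t bit = 1u << resource;

	if (reg_rd() & bit)			/* already held: bail out early */
		return -EEXIST;

	for (int cnt = 0; cnt < 200; cnt++) {	/* 200 tries * 5 ms = 1 s budget */
		reg_wr_request(bit);
		if (reg_rd() & bit)		/* read-back confirms ownership */
			return 0;
		usleep(5000);
	}
	return -EAGAIN;				/* another agent kept it the whole time */
}

int main(void)
{
	printf("first lock attempt:  %d\n", hw_lock(3));	/* 0 */
	printf("second lock attempt: %d\n", hw_lock(3));	/* -EEXIST */
	return 0;
}
]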
+@@ -1236,8 +1409,8 @@ static int bnx2x_mdio22_read(struct bnx2x *bp, u32 reg, u32 *ret_val)
+ {
+ 	int port = bp->port;
+ 	u32 emac_base = port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
+-	u32 val, i;
+-	int rc;
++	u32 val;
++	int i, rc;
+ 
+ 	if (bp->phy_flags & PHY_INT_MODE_AUTO_POLLING_FLAG) {
+ 
+@@ -1286,58 +1459,54 @@ static int bnx2x_mdio22_read(struct bnx2x *bp, u32 reg, u32 *ret_val)
+ 	return rc;
+ }
+ 
+-static int bnx2x_mdio45_write(struct bnx2x *bp, u32 reg, u32 addr, u32 val)
++static int bnx2x_mdio45_ctrl_write(struct bnx2x *bp, u32 mdio_ctrl,
++				   u32 phy_addr, u32 reg, u32 addr, u32 val)
+ {
+-	int rc = 0;
+-	u32 tmp, i;
+-	int port = bp->port;
+-	u32 emac_base = port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
+-
+-	if (bp->phy_flags & PHY_INT_MODE_AUTO_POLLING_FLAG) {
+-
+-		tmp = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
+-		tmp &= ~EMAC_MDIO_MODE_AUTO_POLL;
+-		EMAC_WR(EMAC_REG_EMAC_MDIO_MODE, tmp);
+-		REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
+-		udelay(40);
+-	}
++	u32 tmp;
++	int i, rc = 0;
+ 
+-	/* set clause 45 mode */
+-	tmp = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
+-	tmp |= EMAC_MDIO_MODE_CLAUSE_45;
+-	EMAC_WR(EMAC_REG_EMAC_MDIO_MODE, tmp);
++	/* set clause 45 mode, slow down the MDIO clock to 2.5MHz
++	 * (a value of 49==0x31) and make sure that the AUTO poll is off
++	 */
++	tmp = REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE);
++	tmp &= ~(EMAC_MDIO_MODE_AUTO_POLL | EMAC_MDIO_MODE_CLOCK_CNT);
++	tmp |= (EMAC_MDIO_MODE_CLAUSE_45 |
++		(49 << EMAC_MDIO_MODE_CLOCK_CNT_BITSHIFT));
++	REG_WR(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE, tmp);
++	REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE);
++	udelay(40);
+ 
+ 	/* address */
+-	tmp = ((bp->phy_addr << 21) | (reg << 16) | addr |
++	tmp = ((phy_addr << 21) | (reg << 16) | addr |
+ 	       EMAC_MDIO_COMM_COMMAND_ADDRESS |
+ 	       EMAC_MDIO_COMM_START_BUSY);
+-	EMAC_WR(EMAC_REG_EMAC_MDIO_COMM, tmp);
++	REG_WR(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_COMM, tmp);
+ 
+ 	for (i = 0; i < 50; i++) {
+ 		udelay(10);
+ 
+-		tmp = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_COMM);
++		tmp = REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_COMM);
+ 		if (!(tmp & EMAC_MDIO_COMM_START_BUSY)) {
+ 			udelay(5);
+ 			break;
+ 		}
+ 	}
+-
+ 	if (tmp & EMAC_MDIO_COMM_START_BUSY) {
+ 		BNX2X_ERR("write phy register failed\n");
+ 
+ 		rc = -EBUSY;
++
+ 	} else {
+ 		/* data */
+-		tmp = ((bp->phy_addr << 21) | (reg << 16) | val |
++		tmp = ((phy_addr << 21) | (reg << 16) | val |
+ 		       EMAC_MDIO_COMM_COMMAND_WRITE_45 |
+ 		       EMAC_MDIO_COMM_START_BUSY);
+-		EMAC_WR(EMAC_REG_EMAC_MDIO_COMM, tmp);
++		REG_WR(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_COMM, tmp);
+ 
+ 		for (i = 0; i < 50; i++) {
+ 			udelay(10);
+ 
+-			tmp = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_COMM);
++			tmp = REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_COMM);
+ 			if (!(tmp & EMAC_MDIO_COMM_START_BUSY)) {
+ 				udelay(5);
+ 				break;
+@@ -1351,75 +1520,78 @@ static int bnx2x_mdio45_write(struct bnx2x *bp, u32 reg, u32 addr, u32 val)
+ 		}
+ 	}
+ 
+-	/* unset clause 45 mode */
+-	tmp = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
+-	tmp &= ~EMAC_MDIO_MODE_CLAUSE_45;
+-	EMAC_WR(EMAC_REG_EMAC_MDIO_MODE, tmp);
+-
+-	if (bp->phy_flags & PHY_INT_MODE_AUTO_POLLING_FLAG) {
+-
+-		tmp = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
++	/* unset clause 45 mode, set the MDIO clock to a faster value
++	 * (0x13 => 6.25Mhz) and restore the AUTO poll if needed
++	 */
++	tmp = REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE);
++	tmp &= ~(EMAC_MDIO_MODE_CLAUSE_45 | EMAC_MDIO_MODE_CLOCK_CNT);
++	tmp |= (0x13 << EMAC_MDIO_MODE_CLOCK_CNT_BITSHIFT);
++	if (bp->phy_flags & PHY_INT_MODE_AUTO_POLLING_FLAG)
+ 		tmp |= EMAC_MDIO_MODE_AUTO_POLL;
+-		EMAC_WR(EMAC_REG_EMAC_MDIO_MODE, tmp);
+-	}
++	REG_WR(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE, tmp);
+ 
+ 	return rc;
+ }
+ 
+-static int bnx2x_mdio45_read(struct bnx2x *bp, u32 reg, u32 addr,
+-			     u32 *ret_val)
++static int bnx2x_mdio45_write(struct bnx2x *bp, u32 phy_addr, u32 reg,
++			      u32 addr, u32 val)
+ {
+-	int port = bp->port;
+-	u32 emac_base = port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
+-	u32 val, i;
+-	int rc = 0;
++	u32 emac_base = bp->port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
+ 
+-	if (bp->phy_flags & PHY_INT_MODE_AUTO_POLLING_FLAG) {
++	return bnx2x_mdio45_ctrl_write(bp, emac_base, phy_addr,
++				       reg, addr, val);
++}
+ 
+-		val = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
+-		val &= ~EMAC_MDIO_MODE_AUTO_POLL;
+-		EMAC_WR(EMAC_REG_EMAC_MDIO_MODE, val);
+-		REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
+-		udelay(40);
+-	}
++static int bnx2x_mdio45_ctrl_read(struct bnx2x *bp, u32 mdio_ctrl,
++				  u32 phy_addr, u32 reg, u32 addr,
++				  u32 *ret_val)
++{
++	u32 val;
++	int i, rc = 0;
+ 
+-	/* set clause 45 mode */
+-	val = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
+-	val |= EMAC_MDIO_MODE_CLAUSE_45;
+-	EMAC_WR(EMAC_REG_EMAC_MDIO_MODE, val);
++	/* set clause 45 mode, slow down the MDIO clock to 2.5MHz
++	 * (a value of 49==0x31) and make sure that the AUTO poll is off
++	 */
++	val = REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE);
++	val &= ~(EMAC_MDIO_MODE_AUTO_POLL | EMAC_MDIO_MODE_CLOCK_CNT);
++	val |= (EMAC_MDIO_MODE_CLAUSE_45 |
++		(49 << EMAC_MDIO_MODE_CLOCK_CNT_BITSHIFT));
++	REG_WR(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE, val);
++	REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE);
++	udelay(40);
+ 
+ 	/* address */
+-	val = ((bp->phy_addr << 21) | (reg << 16) | addr |
++	val = ((phy_addr << 21) | (reg << 16) | addr |
+ 	       EMAC_MDIO_COMM_COMMAND_ADDRESS |
+ 	       EMAC_MDIO_COMM_START_BUSY);
+-	EMAC_WR(EMAC_REG_EMAC_MDIO_COMM, val);
++	REG_WR(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_COMM, val);
+ 
+ 	for (i = 0; i < 50; i++) {
+ 		udelay(10);
+ 
+-		val = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_COMM);
++		val = REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_COMM);
+ 		if (!(val & EMAC_MDIO_COMM_START_BUSY)) {
+ 			udelay(5);
+ 			break;
+ 		}
+ 	}
+-
+ 	if (val & EMAC_MDIO_COMM_START_BUSY) {
+ 		BNX2X_ERR("read phy register failed\n");
+ 
+ 		*ret_val = 0;
+ 		rc = -EBUSY;
++
+ 	} else {
+ 		/* data */
+-		val = ((bp->phy_addr << 21) | (reg << 16) |
++		val = ((phy_addr << 21) | (reg << 16) |
+ 		       EMAC_MDIO_COMM_COMMAND_READ_45 |
+ 		       EMAC_MDIO_COMM_START_BUSY);
+-		EMAC_WR(EMAC_REG_EMAC_MDIO_COMM, val);
++		REG_WR(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_COMM, val);
+ 
+ 		for (i = 0; i < 50; i++) {
+ 			udelay(10);
+ 
+-			val = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_COMM);
++			val = REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_COMM);
+ 			if (!(val & EMAC_MDIO_COMM_START_BUSY)) {
+ 				val &= EMAC_MDIO_COMM_DATA;
+ 				break;
+@@ -1436,31 +1608,39 @@ static int bnx2x_mdio45_read(struct bnx2x *bp, u32 reg, u32 addr,
+ 		*ret_val = val;
+ 	}
+ 
+-	/* unset clause 45 mode */
+-	val = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
+-	val &= ~EMAC_MDIO_MODE_CLAUSE_45;
+-	EMAC_WR(EMAC_REG_EMAC_MDIO_MODE, val);
+-
+-	if (bp->phy_flags & PHY_INT_MODE_AUTO_POLLING_FLAG) {
+-
+-		val = REG_RD(bp, emac_base + EMAC_REG_EMAC_MDIO_MODE);
++	/* unset clause 45 mode, set the MDIO clock to a faster value
++	 * (0x13 => 6.25Mhz) and restore the AUTO poll if needed
++	 */
++	val = REG_RD(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE);
++	val &= ~(EMAC_MDIO_MODE_CLAUSE_45 | EMAC_MDIO_MODE_CLOCK_CNT);
++	val |= (0x13 << EMAC_MDIO_MODE_CLOCK_CNT_BITSHIFT);
++	if (bp->phy_flags & PHY_INT_MODE_AUTO_POLLING_FLAG)
+ 		val |= EMAC_MDIO_MODE_AUTO_POLL;
+-		EMAC_WR(EMAC_REG_EMAC_MDIO_MODE, val);
+-	}
++	REG_WR(bp, mdio_ctrl + EMAC_REG_EMAC_MDIO_MODE, val);
+ 
+ 	return rc;
+ }
+ 
+-static int bnx2x_mdio45_vwrite(struct bnx2x *bp, u32 reg, u32 addr, u32 val)
++static int bnx2x_mdio45_read(struct bnx2x *bp, u32 phy_addr, u32 reg,
++			     u32 addr, u32 *ret_val)
++{
++	u32 emac_base = bp->port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
++
++	return bnx2x_mdio45_ctrl_read(bp, emac_base, phy_addr,
++				      reg, addr, ret_val);
++}
++
++static int bnx2x_mdio45_vwrite(struct bnx2x *bp, u32 phy_addr, u32 reg,
++			       u32 addr, u32 val)
+ {
+ 	int i;
+ 	u32 rd_val;
+ 
+ 	might_sleep();
+ 	for (i = 0; i < 10; i++) {
+-		bnx2x_mdio45_write(bp, reg, addr, val);
++		bnx2x_mdio45_write(bp, phy_addr, reg, addr, val);
+ 		msleep(5);
+-		bnx2x_mdio45_read(bp, reg, addr, &rd_val);
++		bnx2x_mdio45_read(bp, phy_addr, reg, addr, &rd_val);
+ 		/* if the read value is not the same as the value we wrote,
+ 		   we should write it again */
+ 		if (rd_val == val)
+@@ -1471,18 +1651,81 @@ static int bnx2x_mdio45_vwrite(struct bnx2x *bp, u32 reg, u32 addr, u32 val)
+ }
+ 
+ /*
+- * link managment
++ * link management
+  */
+ 
++static void bnx2x_pause_resolve(struct bnx2x *bp, u32 pause_result)
++{
++	switch (pause_result) {			/* ASYM P ASYM P */
++	case 0xb:				/*   1  0   1  1 */
++		bp->flow_ctrl = FLOW_CTRL_TX;
++		break;
++
++	case 0xe:				/*   1  1   1  0 */
++		bp->flow_ctrl = FLOW_CTRL_RX;
++		break;
++
++	case 0x5:				/*   0  1   0  1 */
++	case 0x7:				/*   0  1   1  1 */
++	case 0xd:				/*   1  1   0  1 */
++	case 0xf:				/*   1  1   1  1 */
++		bp->flow_ctrl = FLOW_CTRL_BOTH;
++		break;
++
++	default:
++		break;
++	}
++}
++
++static u8 bnx2x_ext_phy_resove_fc(struct bnx2x *bp)
++{
++	u32 ext_phy_addr;
++	u32 ld_pause;	/* local */
++	u32 lp_pause;	/* link partner */
++	u32 an_complete; /* AN complete */
++	u32 pause_result;
++	u8 ret = 0;
++
++	ext_phy_addr = ((bp->ext_phy_config &
++			 PORT_HW_CFG_XGXS_EXT_PHY_ADDR_MASK) >>
++					PORT_HW_CFG_XGXS_EXT_PHY_ADDR_SHIFT);
++
++	/* read twice */
++	bnx2x_mdio45_read(bp, ext_phy_addr,
++			  EXT_PHY_KR_AUTO_NEG_DEVAD,
++			  EXT_PHY_KR_STATUS, &an_complete);
++	bnx2x_mdio45_read(bp, ext_phy_addr,
++			  EXT_PHY_KR_AUTO_NEG_DEVAD,
++			  EXT_PHY_KR_STATUS, &an_complete);
++
++	if (an_complete & EXT_PHY_KR_AUTO_NEG_COMPLETE) {
++		ret = 1;
++		bnx2x_mdio45_read(bp, ext_phy_addr,
++				  EXT_PHY_KR_AUTO_NEG_DEVAD,
++				  EXT_PHY_KR_AUTO_NEG_ADVERT, &ld_pause);
++		bnx2x_mdio45_read(bp, ext_phy_addr,
++				  EXT_PHY_KR_AUTO_NEG_DEVAD,
++				  EXT_PHY_KR_LP_AUTO_NEG, &lp_pause);
++		pause_result = (ld_pause &
++				EXT_PHY_KR_AUTO_NEG_ADVERT_PAUSE_MASK) >> 8;
++		pause_result |= (lp_pause &
++				 EXT_PHY_KR_AUTO_NEG_ADVERT_PAUSE_MASK) >> 10;
++		DP(NETIF_MSG_LINK, "Ext PHY pause result 0x%x \n",
++		   pause_result);
++		bnx2x_pause_resolve(bp, pause_result);
++	}
++	return ret;
++}
++
+ static void bnx2x_flow_ctrl_resolve(struct bnx2x *bp, u32 gp_status)
+ {
+-	u32 ld_pause;   /* local driver */
+-	u32 lp_pause;   /* link partner */
++	u32 ld_pause;	/* local driver */
++	u32 lp_pause;	/* link partner */
+ 	u32 pause_result;
+ 
+ 	bp->flow_ctrl = 0;
+ 
+-	/* reolve from gp_status in case of AN complete and not sgmii */
++	/* resolve from gp_status in case of AN complete and not sgmii */
+ 	if ((bp->req_autoneg & AUTONEG_FLOW_CTRL) &&
+ 	    (gp_status & MDIO_AN_CL73_OR_37_COMPLETE) &&
+ 	    (!(bp->phy_flags & PHY_SGMII_FLAG)) &&
+@@ -1499,45 +1742,57 @@ static void bnx2x_flow_ctrl_resolve(struct bnx2x *bp, u32 gp_status)
+ 		pause_result |= (lp_pause &
+ 				 MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_MASK)>>7;
+ 		DP(NETIF_MSG_LINK, "pause_result 0x%x\n", pause_result);
++		bnx2x_pause_resolve(bp, pause_result);
++	} else if (!(bp->req_autoneg & AUTONEG_FLOW_CTRL) ||
++		   !(bnx2x_ext_phy_resove_fc(bp))) {
++		/* forced speed */
++		if (bp->req_autoneg & AUTONEG_FLOW_CTRL) {
++			switch (bp->req_flow_ctrl) {
++			case FLOW_CTRL_AUTO:
++				if (bp->dev->mtu <= 4500)
++					bp->flow_ctrl = FLOW_CTRL_BOTH;
++				else
++					bp->flow_ctrl = FLOW_CTRL_TX;
++				break;
+ 
+-		switch (pause_result) { 		/* ASYM P ASYM P */
+-		case 0xb:       			/*   1  0   1  1 */
+-			bp->flow_ctrl = FLOW_CTRL_TX;
+-			break;
+-
+-		case 0xe:       			/*   1  1   1  0 */
+-			bp->flow_ctrl = FLOW_CTRL_RX;
+-			break;
++			case FLOW_CTRL_TX:
++				bp->flow_ctrl = FLOW_CTRL_TX;
++				break;
+ 
+-		case 0x5:       			/*   0  1   0  1 */
+-		case 0x7:       			/*   0  1   1  1 */
+-		case 0xd:       			/*   1  1   0  1 */
+-		case 0xf:       			/*   1  1   1  1 */
+-			bp->flow_ctrl = FLOW_CTRL_BOTH;
+-			break;
++			case FLOW_CTRL_RX:
++				if (bp->dev->mtu <= 4500)
++					bp->flow_ctrl = FLOW_CTRL_RX;
++				break;
+ 
+-		default:
+-			break;
+-		}
++			case FLOW_CTRL_BOTH:
++				if (bp->dev->mtu <= 4500)
++					bp->flow_ctrl = FLOW_CTRL_BOTH;
++				else
++					bp->flow_ctrl = FLOW_CTRL_TX;
++				break;
+ 
+-	} else { /* forced mode */
+-		switch (bp->req_flow_ctrl) {
+-		case FLOW_CTRL_AUTO:
+-			if (bp->dev->mtu <= 4500)
+-				bp->flow_ctrl = FLOW_CTRL_BOTH;
+-			else
+-				bp->flow_ctrl = FLOW_CTRL_TX;
+-			break;
++			case FLOW_CTRL_NONE:
++			default:
++				break;
++			}
++		} else { /* forced mode */
++			switch (bp->req_flow_ctrl) {
++			case FLOW_CTRL_AUTO:
++				DP(NETIF_MSG_LINK, "req_flow_ctrl 0x%x while"
++						   " req_autoneg 0x%x\n",
++				   bp->req_flow_ctrl, bp->req_autoneg);
++				break;
+ 
+-		case FLOW_CTRL_TX:
+-		case FLOW_CTRL_RX:
+-		case FLOW_CTRL_BOTH:
+-			bp->flow_ctrl = bp->req_flow_ctrl;
+-			break;
++			case FLOW_CTRL_TX:
++			case FLOW_CTRL_RX:
++			case FLOW_CTRL_BOTH:
++				bp->flow_ctrl = bp->req_flow_ctrl;
++				break;
+ 
+-		case FLOW_CTRL_NONE:
+-		default:
+-			break;
++			case FLOW_CTRL_NONE:
++			default:
++				break;
++			}
+ 		}
+ 	}
+ 	DP(NETIF_MSG_LINK, "flow_ctrl 0x%x\n", bp->flow_ctrl);
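[editor's note -- illustration only, not part of the patch: the rework above funnels both the XGXS and the external-PHY autoneg results through bnx2x_pause_resolve(), which maps the packed local/partner {ASYM, PAUSE} advertisement bits onto the negotiated flow-control mode (the table the new comment cites is the pause-resolution table of IEEE 802.3, clause 28B). A standalone sketch of just that lookup, with assumed enum names:

#include <stdio.h>

enum flow_ctrl { FC_NONE = 0, FC_TX, FC_RX, FC_BOTH };

static enum flow_ctrl resolve_pause(unsigned pause_result)
{
	switch (pause_result) {	/* local ASYM P | partner ASYM P */
	case 0xb:		/*        1  0           1  1   */
		return FC_TX;
	case 0xe:		/*        1  1           1  0   */
		return FC_RX;
	case 0x5:		/*        0  1           0  1   */
	case 0x7:		/*        0  1           1  1   */
	case 0xd:		/*        1  1           0  1   */
	case 0xf:		/*        1  1           1  1   */
		return FC_BOTH;
	default:
		return FC_NONE;
	}
}

int main(void)
{
	for (unsigned result = 0; result <= 0xf; result++)
		printf("pause_result 0x%x -> flow_ctrl %d\n",
		       result, resolve_pause(result));
	return 0;
}
]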
+@@ -1548,9 +1803,9 @@ static void bnx2x_link_settings_status(struct bnx2x *bp, u32 gp_status)
+ 	bp->link_status = 0;
+ 
+ 	if (gp_status & MDIO_GP_STATUS_TOP_AN_STATUS1_LINK_STATUS) {
+-		DP(NETIF_MSG_LINK, "link up\n");
++		DP(NETIF_MSG_LINK, "phy link up\n");
+ 
+-		bp->link_up = 1;
++		bp->phy_link_up = 1;
+ 		bp->link_status |= LINK_STATUS_LINK_UP;
+ 
+ 		if (gp_status & MDIO_GP_STATUS_TOP_AN_STATUS1_DUPLEX_STATUS)
+@@ -1659,20 +1914,20 @@ static void bnx2x_link_settings_status(struct bnx2x *bp, u32 gp_status)
+ 		       bp->link_status |= LINK_STATUS_RX_FLOW_CONTROL_ENABLED;
+ 
+ 	} else { /* link_down */
+-		DP(NETIF_MSG_LINK, "link down\n");
++		DP(NETIF_MSG_LINK, "phy link down\n");
+ 
+-		bp->link_up = 0;
++		bp->phy_link_up = 0;
+ 
+ 		bp->line_speed = 0;
+ 		bp->duplex = DUPLEX_FULL;
+ 		bp->flow_ctrl = 0;
+ 	}
+ 
+-	DP(NETIF_MSG_LINK, "gp_status 0x%x  link_up %d\n"
++	DP(NETIF_MSG_LINK, "gp_status 0x%x  phy_link_up %d\n"
+ 	   DP_LEVEL "  line_speed %d  duplex %d  flow_ctrl 0x%x"
+ 		    "  link_status 0x%x\n",
+-	   gp_status, bp->link_up, bp->line_speed, bp->duplex, bp->flow_ctrl,
+-	   bp->link_status);
++	   gp_status, bp->phy_link_up, bp->line_speed, bp->duplex,
++	   bp->flow_ctrl, bp->link_status);
+ }
+ 
+ static void bnx2x_link_int_ack(struct bnx2x *bp, int is_10g)
+@@ -1680,40 +1935,40 @@ static void bnx2x_link_int_ack(struct bnx2x *bp, int is_10g)
+ 	int port = bp->port;
+ 
+ 	/* first reset all status
+-	 * we asume only one line will be change at a time */
++	 * we assume only one line will be change at a time */
+ 	bnx2x_bits_dis(bp, NIG_REG_STATUS_INTERRUPT_PORT0 + port*4,
+-		       (NIG_XGXS0_LINK_STATUS |
+-			NIG_SERDES0_LINK_STATUS |
+-			NIG_STATUS_INTERRUPT_XGXS0_LINK10G));
+-	if (bp->link_up) {
++		       (NIG_STATUS_XGXS0_LINK10G |
++			NIG_STATUS_XGXS0_LINK_STATUS |
++			NIG_STATUS_SERDES0_LINK_STATUS));
++	if (bp->phy_link_up) {
+ 		if (is_10g) {
+ 			/* Disable the 10G link interrupt
+ 			 * by writing 1 to the status register
+ 			 */
+-			DP(NETIF_MSG_LINK, "10G XGXS link up\n");
++			DP(NETIF_MSG_LINK, "10G XGXS phy link up\n");
+ 			bnx2x_bits_en(bp,
+ 				      NIG_REG_STATUS_INTERRUPT_PORT0 + port*4,
+-				      NIG_STATUS_INTERRUPT_XGXS0_LINK10G);
++				      NIG_STATUS_XGXS0_LINK10G);
+ 
+ 		} else if (bp->phy_flags & PHY_XGXS_FLAG) {
+ 			/* Disable the link interrupt
+ 			 * by writing 1 to the relevant lane
+ 			 * in the status register
+ 			 */
+-			DP(NETIF_MSG_LINK, "1G XGXS link up\n");
++			DP(NETIF_MSG_LINK, "1G XGXS phy link up\n");
+ 			bnx2x_bits_en(bp,
+ 				      NIG_REG_STATUS_INTERRUPT_PORT0 + port*4,
+ 				      ((1 << bp->ser_lane) <<
+-				       NIG_XGXS0_LINK_STATUS_SIZE));
++				       NIG_STATUS_XGXS0_LINK_STATUS_SIZE));
+ 
+ 		} else { /* SerDes */
+-			DP(NETIF_MSG_LINK, "SerDes link up\n");
++			DP(NETIF_MSG_LINK, "SerDes phy link up\n");
+ 			/* Disable the link interrupt
+ 			 * by writing 1 to the status register
+ 			 */
+ 			bnx2x_bits_en(bp,
+ 				      NIG_REG_STATUS_INTERRUPT_PORT0 + port*4,
+-				      NIG_SERDES0_LINK_STATUS);
++				      NIG_STATUS_SERDES0_LINK_STATUS);
+ 		}
+ 
+ 	} else { /* link_down */
+@@ -1724,91 +1979,182 @@ static int bnx2x_ext_phy_is_link_up(struct bnx2x *bp)
+ {
+ 	u32 ext_phy_type;
+ 	u32 ext_phy_addr;
+-	u32 local_phy;
+-	u32 val = 0;
++	u32 val1 = 0, val2;
+ 	u32 rx_sd, pcs_status;
+ 
+ 	if (bp->phy_flags & PHY_XGXS_FLAG) {
+-		local_phy = bp->phy_addr;
+ 		ext_phy_addr = ((bp->ext_phy_config &
+ 				 PORT_HW_CFG_XGXS_EXT_PHY_ADDR_MASK) >>
+ 				PORT_HW_CFG_XGXS_EXT_PHY_ADDR_SHIFT);
+-		bp->phy_addr = (u8)ext_phy_addr;
+ 
+ 		ext_phy_type = XGXS_EXT_PHY_TYPE(bp);
+ 		switch (ext_phy_type) {
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT:
+ 			DP(NETIF_MSG_LINK, "XGXS Direct\n");
+-			val = 1;
++			val1 = 1;
+ 			break;
+ 
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8705:
+ 			DP(NETIF_MSG_LINK, "XGXS 8705\n");
+-			bnx2x_mdio45_read(bp, EXT_PHY_OPT_WIS_DEVAD,
+-					  EXT_PHY_OPT_LASI_STATUS, &val);
+-			DP(NETIF_MSG_LINK, "8705 LASI status is %d\n", val);
+-
+-			bnx2x_mdio45_read(bp, EXT_PHY_OPT_WIS_DEVAD,
+-					  EXT_PHY_OPT_LASI_STATUS, &val);
+-			DP(NETIF_MSG_LINK, "8705 LASI status is %d\n", val);
+-
+-			bnx2x_mdio45_read(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_WIS_DEVAD,
++					  EXT_PHY_OPT_LASI_STATUS, &val1);
++			DP(NETIF_MSG_LINK, "8705 LASI status 0x%x\n", val1);
++
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_WIS_DEVAD,
++					  EXT_PHY_OPT_LASI_STATUS, &val1);
++			DP(NETIF_MSG_LINK, "8705 LASI status 0x%x\n", val1);
++
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
+ 					  EXT_PHY_OPT_PMD_RX_SD, &rx_sd);
+-			val = (rx_sd & 0x1);
++			DP(NETIF_MSG_LINK, "8705 rx_sd 0x%x\n", rx_sd);
++			val1 = (rx_sd & 0x1);
+ 			break;
+ 
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8706:
+ 			DP(NETIF_MSG_LINK, "XGXS 8706\n");
+-			bnx2x_mdio45_read(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
+-					  EXT_PHY_OPT_LASI_STATUS, &val);
+-			DP(NETIF_MSG_LINK, "8706 LASI status is %d\n", val);
+-
+-			bnx2x_mdio45_read(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
+-					  EXT_PHY_OPT_LASI_STATUS, &val);
+-			DP(NETIF_MSG_LINK, "8706 LASI status is %d\n", val);
+-
+-			bnx2x_mdio45_read(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
++					  EXT_PHY_OPT_LASI_STATUS, &val1);
++			DP(NETIF_MSG_LINK, "8706 LASI status 0x%x\n", val1);
++
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
++					  EXT_PHY_OPT_LASI_STATUS, &val1);
++			DP(NETIF_MSG_LINK, "8706 LASI status 0x%x\n", val1);
++
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
+ 					  EXT_PHY_OPT_PMD_RX_SD, &rx_sd);
+-			bnx2x_mdio45_read(bp, EXT_PHY_OPT_PCS_DEVAD,
+-					 EXT_PHY_OPT_PCS_STATUS, &pcs_status);
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_PCS_DEVAD,
++					  EXT_PHY_OPT_PCS_STATUS, &pcs_status);
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_AUTO_NEG_DEVAD,
++					  EXT_PHY_OPT_AN_LINK_STATUS, &val2);
++
+ 			DP(NETIF_MSG_LINK, "8706 rx_sd 0x%x"
+-			   "  pcs_status 0x%x\n", rx_sd, pcs_status);
+-			/* link is up if both bit 0 of pmd_rx and
+-			 * bit 0 of pcs_status are set
++			   "  pcs_status 0x%x 1Gbps link_status 0x%x 0x%x\n",
++			   rx_sd, pcs_status, val2, (val2 & (1<<1)));
++			/* link is up if both bit 0 of pmd_rx_sd and
++			 * bit 0 of pcs_status are set, or if the autoneg bit
++			   1 is set
+ 			 */
+-			val = (rx_sd & pcs_status);
++			val1 = ((rx_sd & pcs_status & 0x1) || (val2 & (1<<1)));
++			break;
++
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8072:
++			bnx2x_hw_lock(bp, HW_LOCK_RESOURCE_8072_MDIO);
++
++			/* clear the interrupt LASI status register */
++			bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0,
++					       ext_phy_addr,
++					       EXT_PHY_KR_PCS_DEVAD,
++					       EXT_PHY_KR_LASI_STATUS, &val2);
++			bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0,
++					       ext_phy_addr,
++					       EXT_PHY_KR_PCS_DEVAD,
++					       EXT_PHY_KR_LASI_STATUS, &val1);
++			DP(NETIF_MSG_LINK, "KR LASI status 0x%x->0x%x\n",
++			   val2, val1);
++			/* Check the LASI */
++			bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0,
++					       ext_phy_addr,
++					       EXT_PHY_KR_PMA_PMD_DEVAD,
++					       0x9003, &val2);
++			bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0,
++					       ext_phy_addr,
++					       EXT_PHY_KR_PMA_PMD_DEVAD,
++					       0x9003, &val1);
++			DP(NETIF_MSG_LINK, "KR 0x9003 0x%x->0x%x\n",
++			   val2, val1);
++			/* Check the link status */
++			bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0,
++					       ext_phy_addr,
++					       EXT_PHY_KR_PCS_DEVAD,
++					       EXT_PHY_KR_PCS_STATUS, &val2);
++			DP(NETIF_MSG_LINK, "KR PCS status 0x%x\n", val2);
++			/* Check the link status on 1.1.2 */
++			bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0,
++					  ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
++					  EXT_PHY_KR_STATUS, &val2);
++			bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0,
++					  ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
++					  EXT_PHY_KR_STATUS, &val1);
++			DP(NETIF_MSG_LINK,
++			   "KR PMA status 0x%x->0x%x\n", val2, val1);
++			val1 = ((val1 & 4) == 4);
++			/* If 1G was requested assume the link is up */
++			if (!(bp->req_autoneg & AUTONEG_SPEED) &&
++			    (bp->req_line_speed == SPEED_1000))
++				val1 = 1;
++			bnx2x_hw_unlock(bp, HW_LOCK_RESOURCE_8072_MDIO);
++			break;
++
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_SFX7101:
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
++					  EXT_PHY_OPT_LASI_STATUS, &val2);
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
++					  EXT_PHY_OPT_LASI_STATUS, &val1);
++			DP(NETIF_MSG_LINK,
++			   "10G-base-T LASI status 0x%x->0x%x\n", val2, val1);
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
++					  EXT_PHY_KR_STATUS, &val2);
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_OPT_PMA_PMD_DEVAD,
++					  EXT_PHY_KR_STATUS, &val1);
++			DP(NETIF_MSG_LINK,
++			   "10G-base-T PMA status 0x%x->0x%x\n", val2, val1);
++			val1 = ((val1 & 4) == 4);
++			/* if link is up
++			 * print the AN outcome of the SFX7101 PHY
++			 */
++			if (val1) {
++				bnx2x_mdio45_read(bp, ext_phy_addr,
++						  EXT_PHY_KR_AUTO_NEG_DEVAD,
++						  0x21, &val2);
++				DP(NETIF_MSG_LINK,
++				   "SFX7101 AN status 0x%x->%s\n", val2,
++				   (val2 & (1<<14)) ? "Master" : "Slave");
++			}
+ 			break;
+ 
+ 		default:
+ 			DP(NETIF_MSG_LINK, "BAD XGXS ext_phy_config 0x%x\n",
+ 			   bp->ext_phy_config);
+-			val = 0;
++			val1 = 0;
+ 			break;
+ 		}
+-		bp->phy_addr = local_phy;
+ 
+ 	} else { /* SerDes */
+ 		ext_phy_type = SERDES_EXT_PHY_TYPE(bp);
+ 		switch (ext_phy_type) {
+ 		case PORT_HW_CFG_SERDES_EXT_PHY_TYPE_DIRECT:
+ 			DP(NETIF_MSG_LINK, "SerDes Direct\n");
+-			val = 1;
++			val1 = 1;
+ 			break;
+ 
+ 		case PORT_HW_CFG_SERDES_EXT_PHY_TYPE_BCM5482:
+ 			DP(NETIF_MSG_LINK, "SerDes 5482\n");
+-			val = 1;
++			val1 = 1;
+ 			break;
+ 
+ 		default:
+ 			DP(NETIF_MSG_LINK, "BAD SerDes ext_phy_config 0x%x\n",
+ 			   bp->ext_phy_config);
+-			val = 0;
++			val1 = 0;
+ 			break;
+ 		}
+ 	}
+ 
+-	return val;
++	return val1;
+ }
+ 
+ static void bnx2x_bmac_enable(struct bnx2x *bp, int is_lb)
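[editor's note -- illustration only, not part of the patch: the reworked bnx2x_ext_phy_is_link_up() above reads each LASI/status register twice ("read twice", and "clear the interrupt LASI status register" for the 8072) before trusting it. Such bits are typically latched: the first read returns, and clears, whatever event was recorded since the last read, and only the second read reflects the current state. A simulated-register sketch of that pattern, names assumed:

#include <stdint.h>
#include <stdio.h>

struct latched_reg {
	uint16_t current;	/* live state of the link/alarm bit */
	uint16_t latched;	/* sticky copy, replaced on every read */
};

static uint16_t reg_read(struct latched_reg *r)
{
	uint16_t val = r->latched;

	r->latched = r->current;	/* re-arm with the live state */
	return val;
}

int main(void)
{
	/* link is up now, but an earlier bounce left a stale "down" latched */
	struct latched_reg status = { .current = 1, .latched = 0 };

	uint16_t first  = reg_read(&status);	/* stale latched value: 0 */
	uint16_t second = reg_read(&status);	/* current state: 1 */

	printf("first read %u, second read %u\n", first, second);
	return 0;
}
]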
+@@ -1819,7 +2165,7 @@ static void bnx2x_bmac_enable(struct bnx2x *bp, int is_lb)
+ 	u32 wb_write[2];
+ 	u32 val;
+ 
+-	DP(NETIF_MSG_LINK, "enableing BigMAC\n");
++	DP(NETIF_MSG_LINK, "enabling BigMAC\n");
+ 	/* reset and unreset the BigMac */
+ 	REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR,
+ 	       (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << port));
+@@ -1933,6 +2279,35 @@ static void bnx2x_bmac_enable(struct bnx2x *bp, int is_lb)
+ 	bp->stats_state = STATS_STATE_ENABLE;
+ }
+ 
++static void bnx2x_bmac_rx_disable(struct bnx2x *bp)
++{
++	int port = bp->port;
++	u32 bmac_addr = port ? NIG_REG_INGRESS_BMAC1_MEM :
++			       NIG_REG_INGRESS_BMAC0_MEM;
++	u32 wb_write[2];
++
++	/* Only if the bmac is out of reset */
++	if (REG_RD(bp, MISC_REG_RESET_REG_2) &
++			(MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << port)) {
++		/* Clear Rx Enable bit in BMAC_CONTROL register */
++#ifdef BNX2X_DMAE_RD
++		bnx2x_read_dmae(bp, bmac_addr +
++				BIGMAC_REGISTER_BMAC_CONTROL, 2);
++		wb_write[0] = *bnx2x_sp(bp, wb_data[0]);
++		wb_write[1] = *bnx2x_sp(bp, wb_data[1]);
++#else
++		wb_write[0] = REG_RD(bp,
++				bmac_addr + BIGMAC_REGISTER_BMAC_CONTROL);
++		wb_write[1] = REG_RD(bp,
++				bmac_addr + BIGMAC_REGISTER_BMAC_CONTROL + 4);
++#endif
++		wb_write[0] &= ~BMAC_CONTROL_RX_ENABLE;
++		REG_WR_DMAE(bp, bmac_addr + BIGMAC_REGISTER_BMAC_CONTROL,
++			    wb_write, 2);
++		msleep(1);
++	}
++}
++
+ static void bnx2x_emac_enable(struct bnx2x *bp)
+ {
+ 	int port = bp->port;
+@@ -1940,7 +2315,7 @@ static void bnx2x_emac_enable(struct bnx2x *bp)
+ 	u32 val;
+ 	int timeout;
+ 
+-	DP(NETIF_MSG_LINK, "enableing EMAC\n");
++	DP(NETIF_MSG_LINK, "enabling EMAC\n");
+ 	/* reset and unreset the emac core */
+ 	REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR,
+ 	       (MISC_REGISTERS_RESET_REG_2_RST_EMAC0_HARD_CORE << port));
+@@ -2033,7 +2408,7 @@ static void bnx2x_emac_enable(struct bnx2x *bp)
+ 				      EMAC_TX_MODE_EXT_PAUSE_EN);
+ 	}
+ 
+-	/* KEEP_VLAN_TAG, promiscous */
++	/* KEEP_VLAN_TAG, promiscuous */
+ 	val = REG_RD(bp, emac_base + EMAC_REG_EMAC_RX_MODE);
+ 	val |= EMAC_RX_MODE_KEEP_VLAN_TAG | EMAC_RX_MODE_PROMISCUOUS;
+ 	EMAC_WR(EMAC_REG_EMAC_RX_MODE, val);
+@@ -2161,7 +2536,6 @@ static void bnx2x_pbf_update(struct bnx2x *bp)
+ 	u32 count = 1000;
+ 	u32 pause = 0;
+ 
+-
+ 	/* disable port */
+ 	REG_WR(bp, PBF_REG_DISABLE_NEW_TASK_PROC_P0 + port*4, 0x1);
+ 
+@@ -2232,7 +2606,7 @@ static void bnx2x_pbf_update(struct bnx2x *bp)
+ static void bnx2x_update_mng(struct bnx2x *bp)
+ {
+ 	if (!nomcp)
+-		SHMEM_WR(bp, drv_fw_mb[bp->port].link_status,
++		SHMEM_WR(bp, port_mb[bp->port].link_status,
+ 			 bp->link_status);
+ }
+ 
+@@ -2294,19 +2668,19 @@ static void bnx2x_link_down(struct bnx2x *bp)
+ 		DP(BNX2X_MSG_STATS, "stats_state - STOP\n");
+ 	}
+ 
+-	/* indicate link down */
++	/* indicate no mac active */
+ 	bp->phy_flags &= ~(PHY_BMAC_FLAG | PHY_EMAC_FLAG);
+ 
+-	/* reset BigMac */
+-	REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR,
+-	       (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << port));
++	/* update shared memory */
++	bnx2x_update_mng(bp);
+ 
+-	/* ignore drain flag interrupt */
+ 	/* activate nig drain */
+ 	NIG_WR(NIG_REG_EGRESS_DRAIN0_MODE + port*4, 1);
+ 
+-	/* update shared memory */
+-	bnx2x_update_mng(bp);
++	/* reset BigMac */
++	bnx2x_bmac_rx_disable(bp);
++	REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR,
++	       (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << port));
+ 
+ 	/* indicate link down */
+ 	bnx2x_link_report(bp);
+@@ -2317,14 +2691,15 @@ static void bnx2x_init_mac_stats(struct bnx2x *bp);
+ /* This function is called upon link interrupt */
+ static void bnx2x_link_update(struct bnx2x *bp)
+ {
+-	u32 gp_status;
+ 	int port = bp->port;
+ 	int i;
++	u32 gp_status;
+ 	int link_10g;
+ 
+-	DP(NETIF_MSG_LINK, "port %x, is xgxs %x, stat_mask 0x%x,"
++	DP(NETIF_MSG_LINK, "port %x, %s, int_status 0x%x,"
+ 	   " int_mask 0x%x, saved_mask 0x%x, MI_INT %x, SERDES_LINK %x,"
+-	   " 10G %x, XGXS_LINK %x\n", port, (bp->phy_flags & PHY_XGXS_FLAG),
++	   " 10G %x, XGXS_LINK %x\n", port,
++	   (bp->phy_flags & PHY_XGXS_FLAG)? "XGXS":"SerDes",
+ 	   REG_RD(bp, NIG_REG_STATUS_INTERRUPT_PORT0 + port*4),
+ 	   REG_RD(bp, NIG_REG_MASK_INTERRUPT_PORT0 + port*4), bp->nig_mask,
+ 	   REG_RD(bp, NIG_REG_EMAC0_STATUS_MISC_MI_INT + port*0x18),
+@@ -2336,7 +2711,7 @@ static void bnx2x_link_update(struct bnx2x *bp)
+ 	might_sleep();
+ 	MDIO_SET_REG_BANK(bp, MDIO_REG_BANK_GP_STATUS);
+ 	/* avoid fast toggling */
+-	for (i = 0 ; i < 10 ; i++) {
++	for (i = 0; i < 10; i++) {
+ 		msleep(10);
+ 		bnx2x_mdio22_read(bp, MDIO_GP_STATUS_TOP_AN_STATUS1,
+ 				  &gp_status);
+@@ -2351,7 +2726,8 @@ static void bnx2x_link_update(struct bnx2x *bp)
+ 	bnx2x_link_int_ack(bp, link_10g);
+ 
+ 	/* link is up only if both local phy and external phy are up */
+-	if (bp->link_up && bnx2x_ext_phy_is_link_up(bp)) {
++	bp->link_up = (bp->phy_link_up && bnx2x_ext_phy_is_link_up(bp));
++	if (bp->link_up) {
+ 		if (link_10g) {
+ 			bnx2x_bmac_enable(bp, 0);
+ 			bnx2x_leds_set(bp, SPEED_10000);
+@@ -2427,7 +2803,9 @@ static void bnx2x_reset_unicore(struct bnx2x *bp)
+ 		}
+ 	}
+ 
+-	BNX2X_ERR("BUG! unicore is still in reset!\n");
++	BNX2X_ERR("BUG! %s (0x%x) is still in reset!\n",
++		  (bp->phy_flags & PHY_XGXS_FLAG)? "XGXS":"SerDes",
++		  bp->phy_addr);
+ }
+ 
+ static void bnx2x_set_swap_lanes(struct bnx2x *bp)
+@@ -2475,12 +2853,12 @@ static void bnx2x_set_parallel_detection(struct bnx2x *bp)
+ 		MDIO_SET_REG_BANK(bp, MDIO_REG_BANK_10G_PARALLEL_DETECT);
+ 
+ 		bnx2x_mdio22_write(bp,
+-				   MDIO_10G_PARALLEL_DETECT_PAR_DET_10G_LINK,
++				MDIO_10G_PARALLEL_DETECT_PAR_DET_10G_LINK,
+ 			       MDIO_10G_PARALLEL_DETECT_PAR_DET_10G_LINK_CNT);
+ 
+ 		bnx2x_mdio22_read(bp,
+-				 MDIO_10G_PARALLEL_DETECT_PAR_DET_10G_CONTROL,
+-				  &control2);
++				MDIO_10G_PARALLEL_DETECT_PAR_DET_10G_CONTROL,
++				&control2);
+ 
+ 		if (bp->autoneg & AUTONEG_PARALLEL) {
+ 			control2 |=
+@@ -2490,8 +2868,14 @@ static void bnx2x_set_parallel_detection(struct bnx2x *bp)
+ 		   ~MDIO_10G_PARALLEL_DETECT_PAR_DET_10G_CONTROL_PARDET10G_EN;
+ 		}
+ 		bnx2x_mdio22_write(bp,
+-				 MDIO_10G_PARALLEL_DETECT_PAR_DET_10G_CONTROL,
+-				   control2);
++				MDIO_10G_PARALLEL_DETECT_PAR_DET_10G_CONTROL,
++				control2);
++
++		/* Disable parallel detection of HiG */
++		MDIO_SET_REG_BANK(bp, MDIO_REG_BANK_XGXS_BLOCK2);
++		bnx2x_mdio22_write(bp, MDIO_XGXS_BLOCK2_UNICORE_MODE_10G,
++				MDIO_XGXS_BLOCK2_UNICORE_MODE_10G_CX4_XGXS |
++				MDIO_XGXS_BLOCK2_UNICORE_MODE_10G_HIGIG_XGXS);
+ 	}
+ }
+ 
+@@ -2625,7 +3009,7 @@ static void bnx2x_set_brcm_cl37_advertisment(struct bnx2x *bp)
+ 	MDIO_SET_REG_BANK(bp, MDIO_REG_BANK_OVER_1G);
+ 
+ 	/* set extended capabilities */
+-	if (bp->advertising & ADVERTISED_2500baseT_Full)
++	if (bp->advertising & ADVERTISED_2500baseX_Full)
+ 		val |= MDIO_OVER_1G_UP1_2_5G;
+ 	if (bp->advertising & ADVERTISED_10000baseT_Full)
+ 		val |= MDIO_OVER_1G_UP1_10G;
+@@ -2641,20 +3025,91 @@ static void bnx2x_set_ieee_aneg_advertisment(struct bnx2x *bp)
+ 	/* for AN, we are always publishing full duplex */
+ 	an_adv = MDIO_COMBO_IEEE0_AUTO_NEG_ADV_FULL_DUPLEX;
+ 
+-	/* set pause */
+-	switch (bp->pause_mode) {
+-	case PAUSE_SYMMETRIC:
+-		an_adv |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_SYMMETRIC;
+-		break;
+-	case PAUSE_ASYMMETRIC:
+-		an_adv |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_ASYMMETRIC;
+-		break;
+-	case PAUSE_BOTH:
+-		an_adv |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_BOTH;
+-		break;
+-	case PAUSE_NONE:
+-		an_adv |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_NONE;
+-		break;
++	/* resolve pause mode and advertisement
++	 * Please refer to Table 28B-3 of the 802.3ab-1999 spec */
++	if (bp->req_autoneg & AUTONEG_FLOW_CTRL) {
++		switch (bp->req_flow_ctrl) {
++		case FLOW_CTRL_AUTO:
++			if (bp->dev->mtu <= 4500) {
++				an_adv |=
++				     MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_BOTH;
++				bp->advertising |= (ADVERTISED_Pause |
++						    ADVERTISED_Asym_Pause);
++			} else {
++				an_adv |=
++			       MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_ASYMMETRIC;
++				bp->advertising |= ADVERTISED_Asym_Pause;
++			}
++			break;
++
++		case FLOW_CTRL_TX:
++			an_adv |=
++			       MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_ASYMMETRIC;
++			bp->advertising |= ADVERTISED_Asym_Pause;
++			break;
++
++		case FLOW_CTRL_RX:
++			if (bp->dev->mtu <= 4500) {
++				an_adv |=
++				     MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_BOTH;
++				bp->advertising |= (ADVERTISED_Pause |
++						    ADVERTISED_Asym_Pause);
++			} else {
++				an_adv |=
++				     MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_NONE;
++				bp->advertising &= ~(ADVERTISED_Pause |
++						     ADVERTISED_Asym_Pause);
++			}
++			break;
++
++		case FLOW_CTRL_BOTH:
++			if (bp->dev->mtu <= 4500) {
++				an_adv |=
++				     MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_BOTH;
++				bp->advertising |= (ADVERTISED_Pause |
++						    ADVERTISED_Asym_Pause);
++			} else {
++				an_adv |=
++			       MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_ASYMMETRIC;
++				bp->advertising |= ADVERTISED_Asym_Pause;
++			}
++			break;
++
++		case FLOW_CTRL_NONE:
++		default:
++			an_adv |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_NONE;
++			bp->advertising &= ~(ADVERTISED_Pause |
++					     ADVERTISED_Asym_Pause);
++			break;
++		}
++	} else { /* forced mode */
++		switch (bp->req_flow_ctrl) {
++		case FLOW_CTRL_AUTO:
++			DP(NETIF_MSG_LINK, "req_flow_ctrl 0x%x while"
++					   " req_autoneg 0x%x\n",
++			   bp->req_flow_ctrl, bp->req_autoneg);
++			break;
++
++		case FLOW_CTRL_TX:
++			an_adv |=
++			       MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_ASYMMETRIC;
++			bp->advertising |= ADVERTISED_Asym_Pause;
++			break;
++
++		case FLOW_CTRL_RX:
++		case FLOW_CTRL_BOTH:
++			an_adv |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_BOTH;
++			bp->advertising |= (ADVERTISED_Pause |
++					    ADVERTISED_Asym_Pause);
++			break;
++
++		case FLOW_CTRL_NONE:
++		default:
++			an_adv |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_NONE;
++			bp->advertising &= ~(ADVERTISED_Pause |
++					     ADVERTISED_Asym_Pause);
++			break;
++		}
+ 	}
+ 
+ 	MDIO_SET_REG_BANK(bp, MDIO_REG_BANK_COMBO_IEEE0);
+@@ -2752,47 +3207,162 @@ static void bnx2x_initialize_sgmii_process(struct bnx2x *bp)
+ static void bnx2x_link_int_enable(struct bnx2x *bp)
+ {
+ 	int port = bp->port;
++	u32 ext_phy_type;
++	u32 mask;
+ 
+ 	/* setting the status to report on link up
+ 	   for either XGXS or SerDes */
+ 	bnx2x_bits_dis(bp, NIG_REG_STATUS_INTERRUPT_PORT0 + port*4,
+-		       (NIG_XGXS0_LINK_STATUS |
+-			NIG_STATUS_INTERRUPT_XGXS0_LINK10G |
+-			NIG_SERDES0_LINK_STATUS));
++		       (NIG_STATUS_XGXS0_LINK10G |
++			NIG_STATUS_XGXS0_LINK_STATUS |
++			NIG_STATUS_SERDES0_LINK_STATUS));
+ 
+ 	if (bp->phy_flags & PHY_XGXS_FLAG) {
+-		/* TBD -
+-		 * in force mode (not AN) we can enable just the relevant
+-		 * interrupt
+-		 * Even in AN we might enable only one according to the AN
+-		 * speed mask
+-		 */
+-		bnx2x_bits_en(bp, NIG_REG_MASK_INTERRUPT_PORT0 + port*4,
+-			      (NIG_MASK_XGXS0_LINK_STATUS |
+-			       NIG_MASK_XGXS0_LINK10G));
+-		DP(NETIF_MSG_LINK, "enable XGXS interrupt\n");
++		mask = (NIG_MASK_XGXS0_LINK10G |
++			NIG_MASK_XGXS0_LINK_STATUS);
++		DP(NETIF_MSG_LINK, "enabled XGXS interrupt\n");
++		ext_phy_type = XGXS_EXT_PHY_TYPE(bp);
++		if ((ext_phy_type != PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT) &&
++		    (ext_phy_type != PORT_HW_CFG_XGXS_EXT_PHY_TYPE_FAILURE) &&
++		    (ext_phy_type !=
++				PORT_HW_CFG_XGXS_EXT_PHY_TYPE_NOT_CONN)) {
++			mask |= NIG_MASK_MI_INT;
++			DP(NETIF_MSG_LINK, "enabled external phy int\n");
++		}
+ 
+ 	} else { /* SerDes */
+-		bnx2x_bits_en(bp, NIG_REG_MASK_INTERRUPT_PORT0 + port*4,
+-			      NIG_MASK_SERDES0_LINK_STATUS);
+-		DP(NETIF_MSG_LINK, "enable SerDes interrupt\n");
++		mask = NIG_MASK_SERDES0_LINK_STATUS;
++		DP(NETIF_MSG_LINK, "enabled SerDes interrupt\n");
++		ext_phy_type = SERDES_EXT_PHY_TYPE(bp);
++		if ((ext_phy_type !=
++				PORT_HW_CFG_SERDES_EXT_PHY_TYPE_DIRECT) &&
++		    (ext_phy_type !=
++				PORT_HW_CFG_SERDES_EXT_PHY_TYPE_NOT_CONN)) {
++			mask |= NIG_MASK_MI_INT;
++			DP(NETIF_MSG_LINK, "enabled external phy int\n");
++		}
+ 	}
++	bnx2x_bits_en(bp,
++		      NIG_REG_MASK_INTERRUPT_PORT0 + port*4,
++		      mask);
++	DP(NETIF_MSG_LINK, "port %x, %s, int_status 0x%x,"
++	   " int_mask 0x%x, MI_INT %x, SERDES_LINK %x,"
++	   " 10G %x, XGXS_LINK %x\n", port,
++	   (bp->phy_flags & PHY_XGXS_FLAG)? "XGXS":"SerDes",
++	   REG_RD(bp, NIG_REG_STATUS_INTERRUPT_PORT0 + port*4),
++	   REG_RD(bp, NIG_REG_MASK_INTERRUPT_PORT0 + port*4),
++	   REG_RD(bp, NIG_REG_EMAC0_STATUS_MISC_MI_INT + port*0x18),
++	   REG_RD(bp, NIG_REG_SERDES0_STATUS_LINK_STATUS + port*0x3c),
++	   REG_RD(bp, NIG_REG_XGXS0_STATUS_LINK10G + port*0x68),
++	   REG_RD(bp, NIG_REG_XGXS0_STATUS_LINK_STATUS + port*0x68)
++	);
++}
++
++static void bnx2x_bcm8072_external_rom_boot(struct bnx2x *bp)
++{
++	u32 ext_phy_addr = ((bp->ext_phy_config &
++			     PORT_HW_CFG_XGXS_EXT_PHY_ADDR_MASK) >>
++			    PORT_HW_CFG_XGXS_EXT_PHY_ADDR_SHIFT);
++	u32 fw_ver1, fw_ver2;
++
++	/* Need to wait 200ms after reset */
++	msleep(200);
++	/* Boot port from external ROM
++	 * Set ser_boot_ctl bit in the MISC_CTRL1 register
++	 */
++	bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0, ext_phy_addr,
++				EXT_PHY_KR_PMA_PMD_DEVAD,
++				EXT_PHY_KR_MISC_CTRL1, 0x0001);
++
++	/* Reset internal microprocessor */
++	bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0, ext_phy_addr,
++				EXT_PHY_KR_PMA_PMD_DEVAD, EXT_PHY_KR_GEN_CTRL,
++				EXT_PHY_KR_ROM_RESET_INTERNAL_MP);
++	/* set micro reset = 0 */
++	bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0, ext_phy_addr,
++				EXT_PHY_KR_PMA_PMD_DEVAD, EXT_PHY_KR_GEN_CTRL,
++				EXT_PHY_KR_ROM_MICRO_RESET);
++	/* Reset internal microprocessor */
++	bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0, ext_phy_addr,
++				EXT_PHY_KR_PMA_PMD_DEVAD, EXT_PHY_KR_GEN_CTRL,
++				EXT_PHY_KR_ROM_RESET_INTERNAL_MP);
++	/* wait for 100ms for code download via SPI port */
++	msleep(100);
++
++	/* Clear ser_boot_ctl bit */
++	bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0, ext_phy_addr,
++				EXT_PHY_KR_PMA_PMD_DEVAD,
++				EXT_PHY_KR_MISC_CTRL1, 0x0000);
++	/* Wait 100ms */
++	msleep(100);
++
++	/* Print the PHY FW version */
++	bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0, ext_phy_addr,
++			       EXT_PHY_KR_PMA_PMD_DEVAD,
++			       0xca19, &fw_ver1);
++	bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0, ext_phy_addr,
++			       EXT_PHY_KR_PMA_PMD_DEVAD,
++			       0xca1a, &fw_ver2);
++	DP(NETIF_MSG_LINK,
++	   "8072 FW version 0x%x:0x%x\n", fw_ver1, fw_ver2);
++}
++
++static void bnx2x_bcm8072_force_10G(struct bnx2x *bp)
++{
++	u32 ext_phy_addr = ((bp->ext_phy_config &
++			     PORT_HW_CFG_XGXS_EXT_PHY_ADDR_MASK) >>
++			    PORT_HW_CFG_XGXS_EXT_PHY_ADDR_SHIFT);
++
++	/* Force KR or KX */
++	bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0, ext_phy_addr,
++				EXT_PHY_KR_PMA_PMD_DEVAD, EXT_PHY_KR_CTRL,
++				0x2040);
++	bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0, ext_phy_addr,
++				EXT_PHY_KR_PMA_PMD_DEVAD, EXT_PHY_KR_CTRL2,
++				0x000b);
++	bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0, ext_phy_addr,
++				EXT_PHY_KR_PMA_PMD_DEVAD, EXT_PHY_KR_PMD_CTRL,
++				0x0000);
++	bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0, ext_phy_addr,
++				EXT_PHY_KR_AUTO_NEG_DEVAD, EXT_PHY_KR_CTRL,
++				0x0000);
+ }
+ 
+ static void bnx2x_ext_phy_init(struct bnx2x *bp)
+ {
+-	int port = bp->port;
+ 	u32 ext_phy_type;
+ 	u32 ext_phy_addr;
+-	u32 local_phy;
++	u32 cnt;
++	u32 ctrl;
++	u32 val = 0;
+ 
+ 	if (bp->phy_flags & PHY_XGXS_FLAG) {
+-		local_phy = bp->phy_addr;
+ 		ext_phy_addr = ((bp->ext_phy_config &
+ 				 PORT_HW_CFG_XGXS_EXT_PHY_ADDR_MASK) >>
+ 				PORT_HW_CFG_XGXS_EXT_PHY_ADDR_SHIFT);
+ 
+ 		ext_phy_type = XGXS_EXT_PHY_TYPE(bp);
++		/* Make sure that the soft reset is off (expect for the 8072:
++		 * due to the lock, it will be done inside the specific
++		 * handling)
++		 */
++		if ((ext_phy_type != PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT) &&
++		    (ext_phy_type != PORT_HW_CFG_XGXS_EXT_PHY_TYPE_FAILURE) &&
++		   (ext_phy_type != PORT_HW_CFG_XGXS_EXT_PHY_TYPE_NOT_CONN) &&
++		    (ext_phy_type != PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8072)) {
++			/* Wait for soft reset to get cleared up to 1 sec */
++			for (cnt = 0; cnt < 1000; cnt++) {
++				bnx2x_mdio45_read(bp, ext_phy_addr,
++						  EXT_PHY_OPT_PMA_PMD_DEVAD,
++						  EXT_PHY_OPT_CNTL, &ctrl);
++				if (!(ctrl & (1<<15)))
++					break;
++				msleep(1);
++			}
++			DP(NETIF_MSG_LINK,
++			   "control reg 0x%x (after %d ms)\n", ctrl, cnt);
++		}
++
+ 		switch (ext_phy_type) {
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT:
+ 			DP(NETIF_MSG_LINK, "XGXS Direct\n");
+@@ -2800,49 +3370,235 @@ static void bnx2x_ext_phy_init(struct bnx2x *bp)
+ 
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8705:
+ 			DP(NETIF_MSG_LINK, "XGXS 8705\n");
+-			bnx2x_bits_en(bp,
+-				      NIG_REG_MASK_INTERRUPT_PORT0 + port*4,
+-				      NIG_MASK_MI_INT);
+-			DP(NETIF_MSG_LINK, "enabled extenal phy int\n");
+ 
+-			bp->phy_addr = ext_phy_type;
+-			bnx2x_mdio45_vwrite(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
++			bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++					    EXT_PHY_OPT_PMA_PMD_DEVAD,
+ 					    EXT_PHY_OPT_PMD_MISC_CNTL,
+ 					    0x8288);
+-			bnx2x_mdio45_vwrite(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
++			bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++					    EXT_PHY_OPT_PMA_PMD_DEVAD,
+ 					    EXT_PHY_OPT_PHY_IDENTIFIER,
+ 					    0x7fbf);
+-			bnx2x_mdio45_vwrite(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
++			bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++					    EXT_PHY_OPT_PMA_PMD_DEVAD,
+ 					    EXT_PHY_OPT_CMU_PLL_BYPASS,
+ 					    0x0100);
+-			bnx2x_mdio45_vwrite(bp, EXT_PHY_OPT_WIS_DEVAD,
++			bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++					    EXT_PHY_OPT_WIS_DEVAD,
+ 					    EXT_PHY_OPT_LASI_CNTL, 0x1);
+ 			break;
+ 
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8706:
+ 			DP(NETIF_MSG_LINK, "XGXS 8706\n");
+-			bnx2x_bits_en(bp,
+-				      NIG_REG_MASK_INTERRUPT_PORT0 + port*4,
+-				      NIG_MASK_MI_INT);
+-			DP(NETIF_MSG_LINK, "enabled extenal phy int\n");
+-
+-			bp->phy_addr = ext_phy_type;
+-			bnx2x_mdio45_vwrite(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
+-					    EXT_PHY_OPT_PMD_DIGITAL_CNT,
+-					    0x400);
+-			bnx2x_mdio45_vwrite(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
++
++			if (!(bp->req_autoneg & AUTONEG_SPEED)) {
++				/* Force speed */
++				if (bp->req_line_speed == SPEED_10000) {
++					DP(NETIF_MSG_LINK,
++					   "XGXS 8706 force 10Gbps\n");
++					bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++						EXT_PHY_OPT_PMA_PMD_DEVAD,
++						EXT_PHY_OPT_PMD_DIGITAL_CNT,
++						0x400);
++				} else {
++					/* Force 1Gbps */
++					DP(NETIF_MSG_LINK,
++					   "XGXS 8706 force 1Gbps\n");
++
++					bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++						EXT_PHY_OPT_PMA_PMD_DEVAD,
++						EXT_PHY_OPT_CNTL,
++						0x0040);
++
++					bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++						EXT_PHY_OPT_PMA_PMD_DEVAD,
++						EXT_PHY_OPT_CNTL2,
++						0x000D);
++				}
++
++				/* Enable LASI */
++				bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++						    EXT_PHY_OPT_PMA_PMD_DEVAD,
++						    EXT_PHY_OPT_LASI_CNTL,
++						    0x1);
++			} else {
++				/* AUTONEG */
++				/* Allow CL37 through CL73 */
++				DP(NETIF_MSG_LINK, "XGXS 8706 AutoNeg\n");
++				bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++						    EXT_PHY_AUTO_NEG_DEVAD,
++						    EXT_PHY_OPT_AN_CL37_CL73,
++						    0x040c);
++
++				/* Enable Full-Duplex advertisement on CL37 */
++				bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++						    EXT_PHY_AUTO_NEG_DEVAD,
++						    EXT_PHY_OPT_AN_CL37_FD,
++						    0x0020);
++				/* Enable CL37 AN */
++				bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++						    EXT_PHY_AUTO_NEG_DEVAD,
++						    EXT_PHY_OPT_AN_CL37_AN,
++						    0x1000);
++				/* Advertise 10G/1G support */
++				if (bp->advertising &
++				    ADVERTISED_1000baseT_Full)
++					val = (1<<5);
++				if (bp->advertising &
++				    ADVERTISED_10000baseT_Full)
++					val |= (1<<7);
++
++				bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++						    EXT_PHY_AUTO_NEG_DEVAD,
++						    EXT_PHY_OPT_AN_ADV, val);
++				/* Enable LASI */
++				bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++						    EXT_PHY_OPT_PMA_PMD_DEVAD,
++						    EXT_PHY_OPT_LASI_CNTL,
++						    0x1);
++
++				/* Enable clause 73 AN */
++				bnx2x_mdio45_write(bp, ext_phy_addr,
++						   EXT_PHY_AUTO_NEG_DEVAD,
++						   EXT_PHY_OPT_CNTL,
++						   0x1200);
++			}
++			break;
++
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8072:
++			bnx2x_hw_lock(bp, HW_LOCK_RESOURCE_8072_MDIO);
++			/* Wait for soft reset to get cleared up to 1 sec */
++			for (cnt = 0; cnt < 1000; cnt++) {
++				bnx2x_mdio45_ctrl_read(bp, GRCBASE_EMAC0,
++						ext_phy_addr,
++						EXT_PHY_OPT_PMA_PMD_DEVAD,
++						EXT_PHY_OPT_CNTL, &ctrl);
++				if (!(ctrl & (1<<15)))
++					break;
++				msleep(1);
++			}
++			DP(NETIF_MSG_LINK,
++			   "8072 control reg 0x%x (after %d ms)\n",
++			   ctrl, cnt);
++
++			bnx2x_bcm8072_external_rom_boot(bp);
++			DP(NETIF_MSG_LINK, "Finished loading 8072 KR ROM\n");
++
++			/* enable LASI */
++			bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0,
++						ext_phy_addr,
++						EXT_PHY_KR_PMA_PMD_DEVAD,
++						0x9000, 0x0400);
++			bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0,
++						ext_phy_addr,
++						EXT_PHY_KR_PMA_PMD_DEVAD,
++						EXT_PHY_KR_LASI_CNTL, 0x0004);
++
++			/* If this is forced speed, set to KR or KX
++			 * (all others are not supported)
++			 */
++			if (!(bp->req_autoneg & AUTONEG_SPEED)) {
++				if (bp->req_line_speed == SPEED_10000) {
++					bnx2x_bcm8072_force_10G(bp);
++					DP(NETIF_MSG_LINK,
++					   "Forced speed 10G on 8072\n");
++					/* unlock */
++					bnx2x_hw_unlock(bp,
++						HW_LOCK_RESOURCE_8072_MDIO);
++					break;
++				} else
++					val = (1<<5);
++			} else {
++
++				/* Advertise 10G/1G support */
++				if (bp->advertising &
++						ADVERTISED_1000baseT_Full)
++					val = (1<<5);
++				if (bp->advertising &
++						ADVERTISED_10000baseT_Full)
++					val |= (1<<7);
++			}
++			bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0,
++					ext_phy_addr,
++					EXT_PHY_KR_AUTO_NEG_DEVAD,
++					0x11, val);
++			/* Add support for CL37 ( passive mode ) I */
++			bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0,
++						ext_phy_addr,
++						EXT_PHY_KR_AUTO_NEG_DEVAD,
++						0x8370, 0x040c);
++			/* Add support for CL37 ( passive mode ) II */
++			bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0,
++						ext_phy_addr,
++						EXT_PHY_KR_AUTO_NEG_DEVAD,
++						0xffe4, 0x20);
++			/* Add support for CL37 ( passive mode ) III */
++			bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0,
++						ext_phy_addr,
++						EXT_PHY_KR_AUTO_NEG_DEVAD,
++						0xffe0, 0x1000);
++			/* Restart autoneg */
++			msleep(500);
++			bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0,
++					ext_phy_addr,
++					EXT_PHY_KR_AUTO_NEG_DEVAD,
++					EXT_PHY_KR_CTRL, 0x1200);
++			DP(NETIF_MSG_LINK, "8072 Autoneg Restart: "
++			   "1G %ssupported  10G %ssupported\n",
++			   (val & (1<<5)) ? "" : "not ",
++			   (val & (1<<7)) ? "" : "not ");
++
++			/* unlock */
++			bnx2x_hw_unlock(bp, HW_LOCK_RESOURCE_8072_MDIO);
++			break;
++
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_SFX7101:
++			DP(NETIF_MSG_LINK,
++			   "Setting the SFX7101 LASI indication\n");
++			bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++					    EXT_PHY_OPT_PMA_PMD_DEVAD,
+ 					    EXT_PHY_OPT_LASI_CNTL, 0x1);
++			DP(NETIF_MSG_LINK,
++			   "Setting the SFX7101 LED to blink on traffic\n");
++			bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++					    EXT_PHY_OPT_PMA_PMD_DEVAD,
++					    0xC007, (1<<3));
++
++			/* read modify write pause advertising */
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_KR_AUTO_NEG_DEVAD,
++					  EXT_PHY_KR_AUTO_NEG_ADVERT, &val);
++			val &= ~EXT_PHY_KR_AUTO_NEG_ADVERT_PAUSE_BOTH;
++			/* Please refer to Table 28B-3 of 802.3ab-1999 spec. */
++			if (bp->advertising & ADVERTISED_Pause)
++				val |= EXT_PHY_KR_AUTO_NEG_ADVERT_PAUSE;
++
++			if (bp->advertising & ADVERTISED_Asym_Pause) {
++				val |=
++				 EXT_PHY_KR_AUTO_NEG_ADVERT_PAUSE_ASYMMETRIC;
++			}
++			DP(NETIF_MSG_LINK, "SFX7101 AN advertise 0x%x\n", val);
++			bnx2x_mdio45_vwrite(bp, ext_phy_addr,
++					    EXT_PHY_KR_AUTO_NEG_DEVAD,
++					    EXT_PHY_KR_AUTO_NEG_ADVERT, val);
++			/* Restart autoneg */
++			bnx2x_mdio45_read(bp, ext_phy_addr,
++					  EXT_PHY_KR_AUTO_NEG_DEVAD,
++					  EXT_PHY_KR_CTRL, &val);
++			val |= 0x200;
++			bnx2x_mdio45_write(bp, ext_phy_addr,
++					    EXT_PHY_KR_AUTO_NEG_DEVAD,
++					    EXT_PHY_KR_CTRL, val);
+ 			break;
+ 
+ 		default:
+-			DP(NETIF_MSG_LINK, "BAD XGXS ext_phy_config 0x%x\n",
+-			   bp->ext_phy_config);
++			BNX2X_ERR("BAD XGXS ext_phy_config 0x%x\n",
++				  bp->ext_phy_config);
+ 			break;
+ 		}
+-		bp->phy_addr = local_phy;
+ 
+ 	} else { /* SerDes */
+-/*      	ext_phy_addr = ((bp->ext_phy_config &
++/*		ext_phy_addr = ((bp->ext_phy_config &
+ 				 PORT_HW_CFG_SERDES_EXT_PHY_ADDR_MASK) >>
+ 				PORT_HW_CFG_SERDES_EXT_PHY_ADDR_SHIFT);
+ */
+@@ -2854,10 +3610,6 @@ static void bnx2x_ext_phy_init(struct bnx2x *bp)
+ 
+ 		case PORT_HW_CFG_SERDES_EXT_PHY_TYPE_BCM5482:
+ 			DP(NETIF_MSG_LINK, "SerDes 5482\n");
+-			bnx2x_bits_en(bp,
+-				      NIG_REG_MASK_INTERRUPT_PORT0 + port*4,
+-				      NIG_MASK_MI_INT);
+-			DP(NETIF_MSG_LINK, "enabled extenal phy int\n");
+ 			break;
+ 
+ 		default:
+@@ -2871,8 +3623,22 @@ static void bnx2x_ext_phy_init(struct bnx2x *bp)
+ static void bnx2x_ext_phy_reset(struct bnx2x *bp)
+ {
+ 	u32 ext_phy_type;
+-	u32 ext_phy_addr;
+-	u32 local_phy;
++	u32 ext_phy_addr = ((bp->ext_phy_config &
++			     PORT_HW_CFG_XGXS_EXT_PHY_ADDR_MASK) >>
++			    PORT_HW_CFG_XGXS_EXT_PHY_ADDR_SHIFT);
++	u32 board = (bp->board & SHARED_HW_CFG_BOARD_TYPE_MASK);
++
++	/* The PHY reset is controlled by GPIO 1
++	 * Give it 1ms of reset pulse
++	 */
++	if ((board != SHARED_HW_CFG_BOARD_TYPE_BCM957710T1002G) &&
++	    (board != SHARED_HW_CFG_BOARD_TYPE_BCM957710T1003G)) {
++		bnx2x_set_gpio(bp, MISC_REGISTERS_GPIO_1,
++			       MISC_REGISTERS_GPIO_OUTPUT_LOW);
++		msleep(1);
++		bnx2x_set_gpio(bp, MISC_REGISTERS_GPIO_1,
++			       MISC_REGISTERS_GPIO_OUTPUT_HIGH);
++	}
+ 
+ 	if (bp->phy_flags & PHY_XGXS_FLAG) {
+ 		ext_phy_type = XGXS_EXT_PHY_TYPE(bp);
+@@ -2883,15 +3649,24 @@ static void bnx2x_ext_phy_reset(struct bnx2x *bp)
+ 
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8705:
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8706:
+-			DP(NETIF_MSG_LINK, "XGXS 8705/6\n");
+-			local_phy = bp->phy_addr;
+-			ext_phy_addr = ((bp->ext_phy_config &
+-					PORT_HW_CFG_XGXS_EXT_PHY_ADDR_MASK) >>
+-					PORT_HW_CFG_XGXS_EXT_PHY_ADDR_SHIFT);
+-			bp->phy_addr = (u8)ext_phy_addr;
+-			bnx2x_mdio45_write(bp, EXT_PHY_OPT_PMA_PMD_DEVAD,
++			DP(NETIF_MSG_LINK, "XGXS 8705/8706\n");
++			bnx2x_mdio45_write(bp, ext_phy_addr,
++					   EXT_PHY_OPT_PMA_PMD_DEVAD,
+ 					   EXT_PHY_OPT_CNTL, 0xa040);
+-			bp->phy_addr = local_phy;
++			break;
++
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8072:
++			DP(NETIF_MSG_LINK, "XGXS 8072\n");
++			bnx2x_hw_lock(bp, HW_LOCK_RESOURCE_8072_MDIO);
++			bnx2x_mdio45_ctrl_write(bp, GRCBASE_EMAC0,
++						ext_phy_addr,
++						EXT_PHY_KR_PMA_PMD_DEVAD,
++						0, 1<<15);
++			bnx2x_hw_unlock(bp, HW_LOCK_RESOURCE_8072_MDIO);
++			break;
++
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_SFX7101:
++			DP(NETIF_MSG_LINK, "XGXS SFX7101\n");
+ 			break;
+ 
+ 		default:
+@@ -2930,6 +3705,7 @@ static void bnx2x_link_initialize(struct bnx2x *bp)
+ 			NIG_MASK_SERDES0_LINK_STATUS |
+ 			NIG_MASK_MI_INT));
+ 
++	/* Activate the external PHY */
+ 	bnx2x_ext_phy_reset(bp);
+ 
+ 	bnx2x_set_aer_mmd(bp);
+@@ -2994,13 +3770,13 @@ static void bnx2x_link_initialize(struct bnx2x *bp)
+ 			/* AN enabled */
+ 			bnx2x_set_brcm_cl37_advertisment(bp);
+ 
+-			/* program duplex & pause advertisment (for aneg) */
++			/* program duplex & pause advertisement (for aneg) */
+ 			bnx2x_set_ieee_aneg_advertisment(bp);
+ 
+ 			/* enable autoneg */
+ 			bnx2x_set_autoneg(bp);
+ 
+-			/* enalbe and restart AN */
++			/* enable and restart AN */
+ 			bnx2x_restart_autoneg(bp);
+ 		}
+ 
+@@ -3010,11 +3786,11 @@ static void bnx2x_link_initialize(struct bnx2x *bp)
+ 		bnx2x_initialize_sgmii_process(bp);
+ 	}
+ 
+-	/* enable the interrupt */
+-	bnx2x_link_int_enable(bp);
+-
+ 	/* init ext phy and enable link state int */
+ 	bnx2x_ext_phy_init(bp);
++
++	/* enable the interrupt */
++	bnx2x_link_int_enable(bp);
+ }
+ 
+ static void bnx2x_phy_deassert(struct bnx2x *bp)
+@@ -3073,6 +3849,11 @@ static int bnx2x_phy_init(struct bnx2x *bp)
+ static void bnx2x_link_reset(struct bnx2x *bp)
+ {
+ 	int port = bp->port;
++	u32 board = (bp->board & SHARED_HW_CFG_BOARD_TYPE_MASK);
++
++	/* update shared memory */
++	bp->link_status = 0;
++	bnx2x_update_mng(bp);
+ 
+ 	/* disable attentions */
+ 	bnx2x_bits_dis(bp, NIG_REG_MASK_INTERRUPT_PORT0 + port*4,
+@@ -3081,21 +3862,45 @@ static void bnx2x_link_reset(struct bnx2x *bp)
+ 			NIG_MASK_SERDES0_LINK_STATUS |
+ 			NIG_MASK_MI_INT));
+ 
+-	bnx2x_ext_phy_reset(bp);
++	/* activate nig drain */
++	NIG_WR(NIG_REG_EGRESS_DRAIN0_MODE + port*4, 1);
++
++	/* disable nig egress interface */
++	NIG_WR(NIG_REG_BMAC0_OUT_EN + port*4, 0);
++	NIG_WR(NIG_REG_EGRESS_EMAC0_OUT_EN + port*4, 0);
++
++	/* Stop BigMac rx */
++	bnx2x_bmac_rx_disable(bp);
++
++	/* disable emac */
++	NIG_WR(NIG_REG_NIG_EMAC0_EN + port*4, 0);
++
++	msleep(10);
++
++	/* The PHY reset is controlled by GPIO 1
++	 * Hold it as output low
++	 */
++	if ((board != SHARED_HW_CFG_BOARD_TYPE_BCM957710T1002G) &&
++	    (board != SHARED_HW_CFG_BOARD_TYPE_BCM957710T1003G)) {
++		bnx2x_set_gpio(bp, MISC_REGISTERS_GPIO_1,
++			       MISC_REGISTERS_GPIO_OUTPUT_LOW);
++		DP(NETIF_MSG_LINK, "reset external PHY\n");
++	}
+ 
+ 	/* reset the SerDes/XGXS */
+ 	REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_3_CLEAR,
+ 	       (0x1ff << (port*16)));
+ 
+-	/* reset EMAC / BMAC and disable NIG interfaces */
+-	NIG_WR(NIG_REG_BMAC0_IN_EN + port*4, 0);
+-	NIG_WR(NIG_REG_BMAC0_OUT_EN + port*4, 0);
++	/* reset BigMac */
++	REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_2_CLEAR,
++	       (MISC_REGISTERS_RESET_REG_2_RST_BMAC0 << port));
+ 
+-	NIG_WR(NIG_REG_NIG_EMAC0_EN + port*4, 0);
++	/* disable nig ingress interface */
++	NIG_WR(NIG_REG_BMAC0_IN_EN + port*4, 0);
+ 	NIG_WR(NIG_REG_EMAC0_IN_EN + port*4, 0);
+-	NIG_WR(NIG_REG_EGRESS_EMAC0_OUT_EN + port*4, 0);
+ 
+-	NIG_WR(NIG_REG_EGRESS_DRAIN0_MODE + port*4, 1);
++	/* set link down */
++	bp->link_up = 0;
+ }
+ 
+ #ifdef BNX2X_XGXS_LB
+@@ -3158,7 +3963,7 @@ static int bnx2x_sp_post(struct bnx2x *bp, int command, int cid,
+ 	int port = bp->port;
+ 
+ 	DP(NETIF_MSG_TIMER,
+-	   "spe (%x:%x)  command %x  hw_cid %x  data (%x:%x)  left %x\n",
++	   "spe (%x:%x)  command %d  hw_cid %x  data (%x:%x)  left %x\n",
+ 	   (u32)U64_HI(bp->spq_mapping), (u32)(U64_LO(bp->spq_mapping) +
+ 	   (void *)bp->spq_prod_bd - (void *)bp->spq), command,
+ 	   HW_CID(bp, cid), data_hi, data_lo, bp->spq_left);
+@@ -3176,6 +3981,7 @@ static int bnx2x_sp_post(struct bnx2x *bp, int command, int cid,
+ 		bnx2x_panic();
+ 		return -EBUSY;
+ 	}
++
+ 	/* CID needs port number to be encoded int it */
+ 	bp->spq_prod_bd->hdr.conn_and_cmd_data =
+ 			cpu_to_le32(((command << SPE_HDR_CMD_ID_SHIFT) |
+@@ -3282,8 +4088,8 @@ static void bnx2x_attn_int_asserted(struct bnx2x *bp, u32 asserted)
+ 	u32 igu_addr = (IGU_ADDR_ATTN_BITS_SET + IGU_PORT_BASE * port) * 8;
+ 	u32 aeu_addr = port ? MISC_REG_AEU_MASK_ATTN_FUNC_1 :
+ 			      MISC_REG_AEU_MASK_ATTN_FUNC_0;
+-	u32 nig_mask_addr = port ? NIG_REG_MASK_INTERRUPT_PORT1 :
+-				   NIG_REG_MASK_INTERRUPT_PORT0;
++	u32 nig_int_mask_addr = port ? NIG_REG_MASK_INTERRUPT_PORT1 :
++				       NIG_REG_MASK_INTERRUPT_PORT0;
+ 
+ 	if (~bp->aeu_mask & (asserted & 0xff))
+ 		BNX2X_ERR("IGU ERROR\n");
+@@ -3301,15 +4107,11 @@ static void bnx2x_attn_int_asserted(struct bnx2x *bp, u32 asserted)
+ 
+ 	if (asserted & ATTN_HARD_WIRED_MASK) {
+ 		if (asserted & ATTN_NIG_FOR_FUNC) {
+-			u32 nig_status_port;
+-			u32 nig_int_addr = port ?
+-					NIG_REG_STATUS_INTERRUPT_PORT1 :
+-					NIG_REG_STATUS_INTERRUPT_PORT0;
+ 
+-			bp->nig_mask = REG_RD(bp, nig_mask_addr);
+-			REG_WR(bp, nig_mask_addr, 0);
++			/* save nig interrupt mask */
++			bp->nig_mask = REG_RD(bp, nig_int_mask_addr);
++			REG_WR(bp, nig_int_mask_addr, 0);
+ 
+-			nig_status_port = REG_RD(bp, nig_int_addr);
+ 			bnx2x_link_update(bp);
+ 
+ 			/* handle unicore attn? */
+@@ -3362,15 +4164,132 @@ static void bnx2x_attn_int_asserted(struct bnx2x *bp, u32 asserted)
+ 
+ 	/* now set back the mask */
+ 	if (asserted & ATTN_NIG_FOR_FUNC)
+-		REG_WR(bp, nig_mask_addr, bp->nig_mask);
++		REG_WR(bp, nig_int_mask_addr, bp->nig_mask);
+ }
+ 
+-static void bnx2x_attn_int_deasserted(struct bnx2x *bp, u32 deasserted)
++static inline void bnx2x_attn_int_deasserted0(struct bnx2x *bp, u32 attn)
+ {
+ 	int port = bp->port;
+-	int index;
++	int reg_offset;
++	u32 val;
++
++	if (attn & AEU_INPUTS_ATTN_BITS_SPIO5) {
++
++		reg_offset = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_0 :
++				     MISC_REG_AEU_ENABLE1_FUNC_0_OUT_0);
++
++		val = REG_RD(bp, reg_offset);
++		val &= ~AEU_INPUTS_ATTN_BITS_SPIO5;
++		REG_WR(bp, reg_offset, val);
++
++		BNX2X_ERR("SPIO5 hw attention\n");
++
++		switch (bp->board & SHARED_HW_CFG_BOARD_TYPE_MASK) {
++		case SHARED_HW_CFG_BOARD_TYPE_BCM957710A1022G:
++			/* Fan failure attention */
++
++			/* The PHY reset is controlled by GPIO 1 */
++			bnx2x_set_gpio(bp, MISC_REGISTERS_GPIO_1,
++				       MISC_REGISTERS_GPIO_OUTPUT_LOW);
++			/* Low power mode is controlled by GPIO 2 */
++			bnx2x_set_gpio(bp, MISC_REGISTERS_GPIO_2,
++				       MISC_REGISTERS_GPIO_OUTPUT_LOW);
++			/* mark the failure */
++			bp->ext_phy_config &=
++					~PORT_HW_CFG_XGXS_EXT_PHY_TYPE_MASK;
++			bp->ext_phy_config |=
++					PORT_HW_CFG_XGXS_EXT_PHY_TYPE_FAILURE;
++			SHMEM_WR(bp,
++				 dev_info.port_hw_config[port].
++							external_phy_config,
++				 bp->ext_phy_config);
++			/* log the failure */
++			printk(KERN_ERR PFX "Fan Failure on Network"
++			       " Controller %s has caused the driver to"
++			       " shutdown the card to prevent permanent"
++			       " damage.  Please contact Dell Support for"
++			       " assistance\n", bp->dev->name);
++			break;
++
++		default:
++			break;
++		}
++	}
++}
++
++static inline void bnx2x_attn_int_deasserted1(struct bnx2x *bp, u32 attn)
++{
++	u32 val;
++
++	if (attn & BNX2X_DOORQ_ASSERT) {
++
++		val = REG_RD(bp, DORQ_REG_DORQ_INT_STS_CLR);
++		BNX2X_ERR("DB hw attention 0x%x\n", val);
++		/* DORQ discard attention */
++		if (val & 0x2)
++			BNX2X_ERR("FATAL error from DORQ\n");
++	}
++}
++
++static inline void bnx2x_attn_int_deasserted2(struct bnx2x *bp, u32 attn)
++{
++	u32 val;
++
++	if (attn & AEU_INPUTS_ATTN_BITS_CFC_HW_INTERRUPT) {
++
++		val = REG_RD(bp, CFC_REG_CFC_INT_STS_CLR);
++		BNX2X_ERR("CFC hw attention 0x%x\n", val);
++		/* CFC error attention */
++		if (val & 0x2)
++			BNX2X_ERR("FATAL error from CFC\n");
++	}
++
++	if (attn & AEU_INPUTS_ATTN_BITS_PXP_HW_INTERRUPT) {
++
++		val = REG_RD(bp, PXP_REG_PXP_INT_STS_CLR_0);
++		BNX2X_ERR("PXP hw attention 0x%x\n", val);
++		/* RQ_USDMDP_FIFO_OVERFLOW */
++		if (val & 0x18000)
++			BNX2X_ERR("FATAL error from PXP\n");
++	}
++}
++
++static inline void bnx2x_attn_int_deasserted3(struct bnx2x *bp, u32 attn)
++{
++	if (attn & EVEREST_GEN_ATTN_IN_USE_MASK) {
++
++		if (attn & BNX2X_MC_ASSERT_BITS) {
++
++			BNX2X_ERR("MC assert!\n");
++			REG_WR(bp, MISC_REG_AEU_GENERAL_ATTN_10, 0);
++			REG_WR(bp, MISC_REG_AEU_GENERAL_ATTN_9, 0);
++			REG_WR(bp, MISC_REG_AEU_GENERAL_ATTN_8, 0);
++			REG_WR(bp, MISC_REG_AEU_GENERAL_ATTN_7, 0);
++			bnx2x_panic();
++
++		} else if (attn & BNX2X_MCP_ASSERT) {
++
++			BNX2X_ERR("MCP assert!\n");
++			REG_WR(bp, MISC_REG_AEU_GENERAL_ATTN_11, 0);
++			bnx2x_mc_assert(bp);
++
++		} else
++			BNX2X_ERR("Unknown HW assert! (attn 0x%x)\n", attn);
++	}
++
++	if (attn & EVEREST_LATCHED_ATTN_IN_USE_MASK) {
++
++		REG_WR(bp, MISC_REG_AEU_CLR_LATCH_SIGNAL, 0x7ff);
++		BNX2X_ERR("LATCHED attention 0x%x (masked)\n", attn);
++	}
++}
++
++static void bnx2x_attn_int_deasserted(struct bnx2x *bp, u32 deasserted)
++{
+ 	struct attn_route attn;
+ 	struct attn_route group_mask;
++	int port = bp->port;
++	int index;
+ 	u32 reg_addr;
+ 	u32 val;
+ 
+@@ -3391,64 +4310,14 @@ static void bnx2x_attn_int_deasserted(struct bnx2x *bp, u32 deasserted)
+ 			DP(NETIF_MSG_HW, "group[%d]: %llx\n", index,
+ 			   (unsigned long long)group_mask.sig[0]);
+ 
+-			if (attn.sig[3] & group_mask.sig[3] &
+-			    EVEREST_GEN_ATTN_IN_USE_MASK) {
+-
+-				if (attn.sig[3] & BNX2X_MC_ASSERT_BITS) {
+-
+-					BNX2X_ERR("MC assert!\n");
+-					bnx2x_panic();
+-
+-				} else if (attn.sig[3] & BNX2X_MCP_ASSERT) {
+-
+-					BNX2X_ERR("MCP assert!\n");
+-					REG_WR(bp,
+-					     MISC_REG_AEU_GENERAL_ATTN_11, 0);
+-					bnx2x_mc_assert(bp);
+-
+-				} else {
+-					BNX2X_ERR("UNKOWEN HW ASSERT!\n");
+-				}
+-			}
+-
+-			if (attn.sig[1] & group_mask.sig[1] &
+-			    BNX2X_DOORQ_ASSERT) {
+-
+-				val = REG_RD(bp, DORQ_REG_DORQ_INT_STS_CLR);
+-				BNX2X_ERR("DB hw attention 0x%x\n", val);
+-				/* DORQ discard attention */
+-				if (val & 0x2)
+-					BNX2X_ERR("FATAL error from DORQ\n");
+-			}
+-
+-			if (attn.sig[2] & group_mask.sig[2] &
+-			    AEU_INPUTS_ATTN_BITS_CFC_HW_INTERRUPT) {
+-
+-				val = REG_RD(bp, CFC_REG_CFC_INT_STS_CLR);
+-				BNX2X_ERR("CFC hw attention 0x%x\n", val);
+-				/* CFC error attention */
+-				if (val & 0x2)
+-					BNX2X_ERR("FATAL error from CFC\n");
+-			}
+-
+-			if (attn.sig[2] & group_mask.sig[2] &
+-			    AEU_INPUTS_ATTN_BITS_PXP_HW_INTERRUPT) {
+-
+-				val = REG_RD(bp, PXP_REG_PXP_INT_STS_CLR_0);
+-				BNX2X_ERR("PXP hw attention 0x%x\n", val);
+-				/* RQ_USDMDP_FIFO_OVERFLOW */
+-				if (val & 0x18000)
+-					BNX2X_ERR("FATAL error from PXP\n");
+-			}
+-
+-			if (attn.sig[3] & group_mask.sig[3] &
+-			    EVEREST_LATCHED_ATTN_IN_USE_MASK) {
+-
+-				REG_WR(bp, MISC_REG_AEU_CLR_LATCH_SIGNAL,
+-				       0x7ff);
+-				DP(NETIF_MSG_HW, "got latched bits 0x%x\n",
+-				   attn.sig[3]);
+-			}
++			bnx2x_attn_int_deasserted3(bp,
++					attn.sig[3] & group_mask.sig[3]);
++			bnx2x_attn_int_deasserted1(bp,
++					attn.sig[1] & group_mask.sig[1]);
++			bnx2x_attn_int_deasserted2(bp,
++					attn.sig[2] & group_mask.sig[2]);
++			bnx2x_attn_int_deasserted0(bp,
++					attn.sig[0] & group_mask.sig[0]);
+ 
+ 			if ((attn.sig[0] & group_mask.sig[0] &
+ 						HW_INTERRUT_ASSERT_SET_0) ||
+@@ -3456,7 +4325,15 @@ static void bnx2x_attn_int_deasserted(struct bnx2x *bp, u32 deasserted)
+ 						HW_INTERRUT_ASSERT_SET_1) ||
+ 			    (attn.sig[2] & group_mask.sig[2] &
+ 						HW_INTERRUT_ASSERT_SET_2))
+-				BNX2X_ERR("FATAL HW block attention\n");
++				BNX2X_ERR("FATAL HW block attention"
++					  "  set0 0x%x  set1 0x%x"
++					  "  set2 0x%x\n",
++					  (attn.sig[0] & group_mask.sig[0] &
++					   HW_INTERRUT_ASSERT_SET_0),
++					  (attn.sig[1] & group_mask.sig[1] &
++					   HW_INTERRUT_ASSERT_SET_1),
++					  (attn.sig[2] & group_mask.sig[2] &
++					   HW_INTERRUT_ASSERT_SET_2));
+ 
+ 			if ((attn.sig[0] & group_mask.sig[0] &
+ 						HW_PRTY_ASSERT_SET_0) ||
+@@ -3464,7 +4341,7 @@ static void bnx2x_attn_int_deasserted(struct bnx2x *bp, u32 deasserted)
+ 						HW_PRTY_ASSERT_SET_1) ||
+ 			    (attn.sig[2] & group_mask.sig[2] &
+ 						HW_PRTY_ASSERT_SET_2))
+-				BNX2X_ERR("FATAL HW block parity atention\n");
++			       BNX2X_ERR("FATAL HW block parity attention\n");
+ 		}
+ 	}
+ 
+@@ -3529,7 +4406,7 @@ static void bnx2x_sp_task(struct work_struct *work)
+ 
+ 	/* Return here if interrupt is disabled */
+ 	if (unlikely(atomic_read(&bp->intr_sem) != 0)) {
+-		DP(NETIF_MSG_INTR, "called but intr_sem not 0, returning\n");
++		DP(BNX2X_MSG_SP, "called but intr_sem not 0, returning\n");
+ 		return;
+ 	}
+ 
+@@ -3539,12 +4416,11 @@ static void bnx2x_sp_task(struct work_struct *work)
+ 
+ 	DP(NETIF_MSG_INTR, "got a slowpath interrupt (updated %x)\n", status);
+ 
+-	if (status & 0x1) {
+-		/* HW attentions */
++	/* HW attentions */
++	if (status & 0x1)
+ 		bnx2x_attn_int(bp);
+-	}
+ 
+-	/* CStorm events: query_stats, cfc delete ramrods */
++	/* CStorm events: query_stats, port delete ramrod */
+ 	if (status & 0x2)
+ 		bp->stat_pending = 0;
+ 
+@@ -3558,6 +4434,7 @@ static void bnx2x_sp_task(struct work_struct *work)
+ 		     IGU_INT_NOP, 1);
+ 	bnx2x_ack_sb(bp, DEF_SB_ID, TSTORM_ID, le16_to_cpu(bp->def_t_idx),
+ 		     IGU_INT_ENABLE, 1);
++
+ }
+ 
+ static irqreturn_t bnx2x_msix_sp_int(int irq, void *dev_instance)
+@@ -3567,11 +4444,11 @@ static irqreturn_t bnx2x_msix_sp_int(int irq, void *dev_instance)
+ 
+ 	/* Return here if interrupt is disabled */
+ 	if (unlikely(atomic_read(&bp->intr_sem) != 0)) {
+-		DP(NETIF_MSG_INTR, "called but intr_sem not 0, returning\n");
++		DP(BNX2X_MSG_SP, "called but intr_sem not 0, returning\n");
+ 		return IRQ_HANDLED;
+ 	}
+ 
+-	bnx2x_ack_sb(bp, 16, XSTORM_ID, 0, IGU_INT_DISABLE, 0);
++	bnx2x_ack_sb(bp, DEF_SB_ID, XSTORM_ID, 0, IGU_INT_DISABLE, 0);
+ 
+ #ifdef BNX2X_STOP_ON_ERROR
+ 	if (unlikely(bp->panic))
+@@ -3906,7 +4783,7 @@ static void bnx2x_stop_stats(struct bnx2x *bp)
+ 
+ 		while (bp->stats_state != STATS_STATE_DISABLE) {
+ 			if (!timeout) {
+-				BNX2X_ERR("timeout wating for stats stop\n");
++				BNX2X_ERR("timeout waiting for stats stop\n");
+ 				break;
+ 			}
+ 			timeout--;
+@@ -4173,39 +5050,37 @@ static void bnx2x_update_net_stats(struct bnx2x *bp)
+ 
+ 	nstats->rx_bytes = bnx2x_hilo(&estats->total_bytes_received_hi);
+ 
+-	nstats->tx_bytes =
+-		bnx2x_hilo(&estats->total_bytes_transmitted_hi);
++	nstats->tx_bytes = bnx2x_hilo(&estats->total_bytes_transmitted_hi);
+ 
+-	nstats->rx_dropped = estats->checksum_discard +
+-				   estats->mac_discard;
++	nstats->rx_dropped = estats->checksum_discard + estats->mac_discard;
+ 	nstats->tx_dropped = 0;
+ 
+ 	nstats->multicast =
+ 		bnx2x_hilo(&estats->total_multicast_packets_transmitted_hi);
+ 
+-	nstats->collisions =
+-		estats->single_collision_transmit_frames +
+-		estats->multiple_collision_transmit_frames +
+-		estats->late_collision_frames +
+-		estats->excessive_collision_frames;
++	nstats->collisions = estats->single_collision_transmit_frames +
++			     estats->multiple_collision_transmit_frames +
++			     estats->late_collision_frames +
++			     estats->excessive_collision_frames;
+ 
+ 	nstats->rx_length_errors = estats->runt_packets_received +
+ 				   estats->jabber_packets_received;
+-	nstats->rx_over_errors = estats->no_buff_discard;
++	nstats->rx_over_errors = estats->brb_discard +
++				 estats->brb_truncate_discard;
+ 	nstats->rx_crc_errors = estats->crc_receive_errors;
+ 	nstats->rx_frame_errors = estats->alignment_errors;
+-	nstats->rx_fifo_errors = estats->brb_discard +
+-				       estats->brb_truncate_discard;
++	nstats->rx_fifo_errors = estats->no_buff_discard;
+ 	nstats->rx_missed_errors = estats->xxoverflow_discard;
+ 
+ 	nstats->rx_errors = nstats->rx_length_errors +
+ 			    nstats->rx_over_errors +
+ 			    nstats->rx_crc_errors +
+ 			    nstats->rx_frame_errors +
+-			    nstats->rx_fifo_errors;
++			    nstats->rx_fifo_errors +
++			    nstats->rx_missed_errors;
+ 
+ 	nstats->tx_aborted_errors = estats->late_collision_frames +
+-					  estats->excessive_collision_frames;
++				    estats->excessive_collision_frames;
+ 	nstats->tx_carrier_errors = estats->false_carrier_detections;
+ 	nstats->tx_fifo_errors = 0;
+ 	nstats->tx_heartbeat_errors = 0;
+@@ -4334,7 +5209,7 @@ static void bnx2x_timer(unsigned long data)
+ 		return;
+ 
+ 	if (atomic_read(&bp->intr_sem) != 0)
+-		goto bnx2x_restart_timer;
++		goto timer_restart;
+ 
+ 	if (poll) {
+ 		struct bnx2x_fastpath *fp = &bp->fp[0];
+@@ -4344,7 +5219,7 @@ static void bnx2x_timer(unsigned long data)
+ 		rc = bnx2x_rx_int(fp, 1000);
+ 	}
+ 
+-	if (!nomcp && (bp->bc_ver >= 0x040003)) {
++	if (!nomcp) {
+ 		int port = bp->port;
+ 		u32 drv_pulse;
+ 		u32 mcp_pulse;
+@@ -4353,9 +5228,9 @@ static void bnx2x_timer(unsigned long data)
+ 		bp->fw_drv_pulse_wr_seq &= DRV_PULSE_SEQ_MASK;
+ 		/* TBD - add SYSTEM_TIME */
+ 		drv_pulse = bp->fw_drv_pulse_wr_seq;
+-		SHMEM_WR(bp, drv_fw_mb[port].drv_pulse_mb, drv_pulse);
++		SHMEM_WR(bp, func_mb[port].drv_pulse_mb, drv_pulse);
+ 
+-		mcp_pulse = (SHMEM_RD(bp, drv_fw_mb[port].mcp_pulse_mb) &
++		mcp_pulse = (SHMEM_RD(bp, func_mb[port].mcp_pulse_mb) &
+ 			     MCP_PULSE_SEQ_MASK);
+ 		/* The delta between driver pulse and mcp response
+ 		 * should be 1 (before mcp response) or 0 (after mcp response)
+@@ -4369,11 +5244,11 @@ static void bnx2x_timer(unsigned long data)
+ 	}
+ 
+ 	if (bp->stats_state == STATS_STATE_DISABLE)
+-		goto bnx2x_restart_timer;
++		goto timer_restart;
+ 
+ 	bnx2x_update_stats(bp);
+ 
+-bnx2x_restart_timer:
++timer_restart:
+ 	mod_timer(&bp->timer, jiffies + bp->current_interval);
+ }
+ 
+@@ -4438,6 +5313,9 @@ static void bnx2x_init_def_sb(struct bnx2x *bp,
+ 					    atten_status_block);
+ 	def_sb->atten_status_block.status_block_id = id;
+ 
++	bp->def_att_idx = 0;
++	bp->attn_state = 0;
++
+ 	reg_offset = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_0 :
+ 			     MISC_REG_AEU_ENABLE1_FUNC_0_OUT_0);
+ 
+@@ -4472,6 +5350,8 @@ static void bnx2x_init_def_sb(struct bnx2x *bp,
+ 					    u_def_status_block);
+ 	def_sb->u_def_status_block.status_block_id = id;
+ 
++	bp->def_u_idx = 0;
++
+ 	REG_WR(bp, BAR_USTRORM_INTMEM +
+ 	       USTORM_DEF_SB_HOST_SB_ADDR_OFFSET(port), U64_LO(section));
+ 	REG_WR(bp, BAR_USTRORM_INTMEM +
+@@ -4489,6 +5369,8 @@ static void bnx2x_init_def_sb(struct bnx2x *bp,
+ 					    c_def_status_block);
+ 	def_sb->c_def_status_block.status_block_id = id;
+ 
++	bp->def_c_idx = 0;
++
+ 	REG_WR(bp, BAR_CSTRORM_INTMEM +
+ 	       CSTORM_DEF_SB_HOST_SB_ADDR_OFFSET(port), U64_LO(section));
+ 	REG_WR(bp, BAR_CSTRORM_INTMEM +
+@@ -4506,6 +5388,8 @@ static void bnx2x_init_def_sb(struct bnx2x *bp,
+ 					    t_def_status_block);
+ 	def_sb->t_def_status_block.status_block_id = id;
+ 
++	bp->def_t_idx = 0;
++
+ 	REG_WR(bp, BAR_TSTRORM_INTMEM +
+ 	       TSTORM_DEF_SB_HOST_SB_ADDR_OFFSET(port), U64_LO(section));
+ 	REG_WR(bp, BAR_TSTRORM_INTMEM +
+@@ -4523,6 +5407,8 @@ static void bnx2x_init_def_sb(struct bnx2x *bp,
+ 					    x_def_status_block);
+ 	def_sb->x_def_status_block.status_block_id = id;
+ 
++	bp->def_x_idx = 0;
++
+ 	REG_WR(bp, BAR_XSTRORM_INTMEM +
+ 	       XSTORM_DEF_SB_HOST_SB_ADDR_OFFSET(port), U64_LO(section));
+ 	REG_WR(bp, BAR_XSTRORM_INTMEM +
+@@ -4535,6 +5421,8 @@ static void bnx2x_init_def_sb(struct bnx2x *bp,
+ 		REG_WR16(bp, BAR_XSTRORM_INTMEM +
+ 			 XSTORM_DEF_SB_HC_DISABLE_OFFSET(port, index), 0x1);
+ 
++	bp->stat_pending = 0;
++
+ 	bnx2x_ack_sb(bp, id, CSTORM_ID, 0, IGU_INT_ENABLE, 0);
+ }
+ 
+@@ -4626,7 +5514,7 @@ static void bnx2x_init_rx_rings(struct bnx2x *bp)
+ 		fp->rx_bd_prod = fp->rx_comp_prod = ring_prod;
+ 		fp->rx_pkt = fp->rx_calls = 0;
+ 
+-		/* Warning! this will genrate an interrupt (to the TSTORM) */
++		/* Warning! this will generate an interrupt (to the TSTORM) */
+ 		/* must only be done when chip is initialized */
+ 		REG_WR(bp, BAR_TSTRORM_INTMEM +
+ 		       TSTORM_RCQ_PROD_OFFSET(port, j), ring_prod);
+@@ -4678,7 +5566,6 @@ static void bnx2x_init_sp_ring(struct bnx2x *bp)
+ 
+ 	bp->spq_left = MAX_SPQ_PENDING;
+ 	bp->spq_prod_idx = 0;
+-	bp->dsb_sp_prod_idx = 0;
+ 	bp->dsb_sp_prod = BNX2X_SP_DSB_INDEX;
+ 	bp->spq_prod_bd = bp->spq;
+ 	bp->spq_last_bd = bp->spq_prod_bd + MAX_SP_DESC_CNT;
+@@ -4755,6 +5642,42 @@ static void bnx2x_init_ind_table(struct bnx2x *bp)
+ 	REG_WR(bp, PRS_REG_A_PRSU_20, 0xf);
+ }
+ 
++static void bnx2x_set_client_config(struct bnx2x *bp)
++{
++#ifdef BCM_VLAN
++	int mode = bp->rx_mode;
++#endif
++	int i, port = bp->port;
++	struct tstorm_eth_client_config tstorm_client = {0};
++
++	tstorm_client.mtu = bp->dev->mtu;
++	tstorm_client.statistics_counter_id = 0;
++	tstorm_client.config_flags =
++				TSTORM_ETH_CLIENT_CONFIG_STATSITICS_ENABLE;
++#ifdef BCM_VLAN
++	if (mode && bp->vlgrp) {
++		tstorm_client.config_flags |=
++				TSTORM_ETH_CLIENT_CONFIG_VLAN_REMOVAL_ENABLE;
++		DP(NETIF_MSG_IFUP, "vlan removal enabled\n");
++	}
++#endif
++	if (mode != BNX2X_RX_MODE_PROMISC)
++		tstorm_client.drop_flags =
++				TSTORM_ETH_CLIENT_CONFIG_DROP_MAC_ERR;
++
++	for_each_queue(bp, i) {
++		REG_WR(bp, BAR_TSTRORM_INTMEM +
++		       TSTORM_CLIENT_CONFIG_OFFSET(port, i),
++		       ((u32 *)&tstorm_client)[0]);
++		REG_WR(bp, BAR_TSTRORM_INTMEM +
++		       TSTORM_CLIENT_CONFIG_OFFSET(port, i) + 4,
++		       ((u32 *)&tstorm_client)[1]);
++	}
++
++/*	DP(NETIF_MSG_IFUP, "tstorm_client: 0x%08x 0x%08x\n",
++	   ((u32 *)&tstorm_client)[0], ((u32 *)&tstorm_client)[1]); */
++}
++
+ static void bnx2x_set_storm_rx_mode(struct bnx2x *bp)
+ {
+ 	int mode = bp->rx_mode;
+@@ -4794,41 +5717,9 @@ static void bnx2x_set_storm_rx_mode(struct bnx2x *bp)
+ /*      	DP(NETIF_MSG_IFUP, "tstorm_mac_filter[%d]: 0x%08x\n", i,
+ 		   ((u32 *)&tstorm_mac_filter)[i]); */
+ 	}
+-}
+ 
+-static void bnx2x_set_client_config(struct bnx2x *bp, int client_id)
+-{
+-#ifdef BCM_VLAN
+-	int mode = bp->rx_mode;
+-#endif
+-	int port = bp->port;
+-	struct tstorm_eth_client_config tstorm_client = {0};
+-
+-	tstorm_client.mtu = bp->dev->mtu;
+-	tstorm_client.statistics_counter_id = 0;
+-	tstorm_client.config_flags =
+-		TSTORM_ETH_CLIENT_CONFIG_STATSITICS_ENABLE;
+-#ifdef BCM_VLAN
+-	if (mode && bp->vlgrp) {
+-		tstorm_client.config_flags |=
+-				TSTORM_ETH_CLIENT_CONFIG_VLAN_REMOVAL_ENABLE;
+-		DP(NETIF_MSG_IFUP, "vlan removal enabled\n");
+-	}
+-#endif
+-	tstorm_client.drop_flags = (TSTORM_ETH_CLIENT_CONFIG_DROP_IP_CS_ERR |
+-				    TSTORM_ETH_CLIENT_CONFIG_DROP_TCP_CS_ERR |
+-				    TSTORM_ETH_CLIENT_CONFIG_DROP_UDP_CS_ERR |
+-				    TSTORM_ETH_CLIENT_CONFIG_DROP_MAC_ERR);
+-
+-	REG_WR(bp, BAR_TSTRORM_INTMEM +
+-	       TSTORM_CLIENT_CONFIG_OFFSET(port, client_id),
+-	       ((u32 *)&tstorm_client)[0]);
+-	REG_WR(bp, BAR_TSTRORM_INTMEM +
+-	       TSTORM_CLIENT_CONFIG_OFFSET(port, client_id) + 4,
+-	       ((u32 *)&tstorm_client)[1]);
+-
+-/*      DP(NETIF_MSG_IFUP, "tstorm_client: 0x%08x 0x%08x\n",
+-	   ((u32 *)&tstorm_client)[0], ((u32 *)&tstorm_client)[1]); */
++	if (mode != BNX2X_RX_MODE_NONE)
++		bnx2x_set_client_config(bp);
+ }
+ 
+ static void bnx2x_init_internal(struct bnx2x *bp)
+@@ -4836,7 +5727,6 @@ static void bnx2x_init_internal(struct bnx2x *bp)
+ 	int port = bp->port;
+ 	struct tstorm_eth_function_common_config tstorm_config = {0};
+ 	struct stats_indication_flags stats_flags = {0};
+-	int i;
+ 
+ 	if (is_multi(bp)) {
+ 		tstorm_config.config_flags = MULTI_FLAGS;
+@@ -4850,13 +5740,9 @@ static void bnx2x_init_internal(struct bnx2x *bp)
+ /*      DP(NETIF_MSG_IFUP, "tstorm_config: 0x%08x\n",
+ 	   (*(u32 *)&tstorm_config)); */
+ 
+-	bp->rx_mode = BNX2X_RX_MODE_NONE; /* no rx untill link is up */
++	bp->rx_mode = BNX2X_RX_MODE_NONE; /* no rx until link is up */
+ 	bnx2x_set_storm_rx_mode(bp);
+ 
+-	for_each_queue(bp, i)
+-		bnx2x_set_client_config(bp, i);
+-
+-
+ 	stats_flags.collect_eth = cpu_to_le32(1);
+ 
+ 	REG_WR(bp, BAR_XSTRORM_INTMEM + XSTORM_STATS_FLAGS_OFFSET(port),
+@@ -4902,7 +5788,7 @@ static void bnx2x_nic_init(struct bnx2x *bp)
+ 	bnx2x_init_internal(bp);
+ 	bnx2x_init_stats(bp);
+ 	bnx2x_init_ind_table(bp);
+-	bnx2x_enable_int(bp);
++	bnx2x_int_enable(bp);
+ 
+ }
+ 
+@@ -5265,8 +6151,10 @@ static int bnx2x_function_init(struct bnx2x *bp, int mode)
+ 	if (mode & 0x1) {       /* init common */
+ 		DP(BNX2X_MSG_MCP, "starting common init  func %d  mode %x\n",
+ 		   func, mode);
+-		REG_WR(bp, MISC_REG_RESET_REG_1, 0xffffffff);
+-		REG_WR(bp, MISC_REG_RESET_REG_2, 0xfffc);
++		REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_1_SET,
++		       0xffffffff);
++		REG_WR(bp, GRCBASE_MISC + MISC_REGISTERS_RESET_REG_1_SET,
++		       0xfffc);
+ 		bnx2x_init_block(bp, MISC_COMMON_START, MISC_COMMON_END);
+ 
+ 		REG_WR(bp, MISC_REG_LCPLL_CTRL_REG_2, 0x100);
+@@ -5359,7 +6247,7 @@ static int bnx2x_function_init(struct bnx2x *bp, int mode)
+ 		REG_RD(bp, USEM_REG_PASSIVE_BUFFER + 8);
+ #endif
+ 		bnx2x_init_block(bp, QM_COMMON_START, QM_COMMON_END);
+-		/* softrest pulse */
++		/* soft reset pulse */
+ 		REG_WR(bp, QM_REG_SOFT_RESET, 1);
+ 		REG_WR(bp, QM_REG_SOFT_RESET, 0);
+ 
+@@ -5413,7 +6301,7 @@ static int bnx2x_function_init(struct bnx2x *bp, int mode)
+ 		REG_WR(bp, SRC_REG_SOFT_RST, 1);
+ 		for (i = SRC_REG_KEYRSS0_0; i <= SRC_REG_KEYRSS1_9; i += 4) {
+ 			REG_WR(bp, i, 0xc0cac01a);
+-			/* TODO: repleace with something meaningfull */
++			/* TODO: replace with something meaningful */
+ 		}
+ 		/* SRCH COMMON comes here */
+ 		REG_WR(bp, SRC_REG_SOFT_RST, 0);
+@@ -5486,6 +6374,28 @@ static int bnx2x_function_init(struct bnx2x *bp, int mode)
+ 		enable_blocks_attention(bp);
+ 		/* enable_blocks_parity(bp); */
+ 
++		switch (bp->board & SHARED_HW_CFG_BOARD_TYPE_MASK) {
++		case SHARED_HW_CFG_BOARD_TYPE_BCM957710A1022G:
++			/* Fan failure is indicated by SPIO 5 */
++			bnx2x_set_spio(bp, MISC_REGISTERS_SPIO_5,
++				       MISC_REGISTERS_SPIO_INPUT_HI_Z);
++
++			/* set to active low mode */
++			val = REG_RD(bp, MISC_REG_SPIO_INT);
++			val |= ((1 << MISC_REGISTERS_SPIO_5) <<
++					MISC_REGISTERS_SPIO_INT_OLD_SET_POS);
++			REG_WR(bp, MISC_REG_SPIO_INT, val);
++
++			/* enable interrupt to signal the IGU */
++			val = REG_RD(bp, MISC_REG_SPIO_EVENT_EN);
++			val |= (1 << MISC_REGISTERS_SPIO_5);
++			REG_WR(bp, MISC_REG_SPIO_EVENT_EN, val);
++			break;
++
++		default:
++			break;
++		}
++
+ 	} /* end of common init */
+ 
+ 	/* per port init */
+@@ -5645,9 +6555,21 @@ static int bnx2x_function_init(struct bnx2x *bp, int mode)
+ 	/* Port MCP comes here */
+ 	/* Port DMAE comes here */
+ 
++	switch (bp->board & SHARED_HW_CFG_BOARD_TYPE_MASK) {
++	case SHARED_HW_CFG_BOARD_TYPE_BCM957710A1022G:
++		/* add SPIO 5 to group 0 */
++		val = REG_RD(bp, MISC_REG_AEU_ENABLE1_FUNC_0_OUT_0);
++		val |= AEU_INPUTS_ATTN_BITS_SPIO5;
++		REG_WR(bp, MISC_REG_AEU_ENABLE1_FUNC_0_OUT_0, val);
++		break;
++
++	default:
++		break;
++	}
++
+ 	bnx2x_link_reset(bp);
+ 
+-	/* Reset pciex errors for debug */
++	/* Reset PCIE errors for debug */
+ 	REG_WR(bp, 0x2114, 0xffffffff);
+ 	REG_WR(bp, 0x2120, 0xffffffff);
+ 	REG_WR(bp, 0x2814, 0xffffffff);
+@@ -5669,9 +6591,9 @@ static int bnx2x_function_init(struct bnx2x *bp, int mode)
+ 		port = bp->port;
+ 
+ 		bp->fw_drv_pulse_wr_seq =
+-				(SHMEM_RD(bp, drv_fw_mb[port].drv_pulse_mb) &
++				(SHMEM_RD(bp, func_mb[port].drv_pulse_mb) &
+ 				 DRV_PULSE_SEQ_MASK);
+-		bp->fw_mb = SHMEM_RD(bp, drv_fw_mb[port].fw_mb_param);
++		bp->fw_mb = SHMEM_RD(bp, func_mb[port].fw_mb_param);
+ 		DP(BNX2X_MSG_MCP, "drv_pulse 0x%x  fw_mb 0x%x\n",
+ 		   bp->fw_drv_pulse_wr_seq, bp->fw_mb);
+ 	} else {
+@@ -5681,16 +6603,15 @@ static int bnx2x_function_init(struct bnx2x *bp, int mode)
+ 	return 0;
+ }
+ 
+-
+-/* send the MCP a request, block untill there is a reply */
++/* send the MCP a request, block until there is a reply */
+ static u32 bnx2x_fw_command(struct bnx2x *bp, u32 command)
+ {
+-	u32 rc = 0;
+-	u32 seq = ++bp->fw_seq;
+ 	int port = bp->port;
++	u32 seq = ++bp->fw_seq;
++	u32 rc = 0;
+ 
+-	SHMEM_WR(bp, drv_fw_mb[port].drv_mb_header, command|seq);
+-	DP(BNX2X_MSG_MCP, "wrote command (%x) to FW MB\n", command|seq);
++	SHMEM_WR(bp, func_mb[port].drv_mb_header, (command | seq));
++	DP(BNX2X_MSG_MCP, "wrote command (%x) to FW MB\n", (command | seq));
+ 
+ 	/* let the FW do it's magic ... */
+ 	msleep(100); /* TBD */
+@@ -5698,19 +6619,20 @@ static u32 bnx2x_fw_command(struct bnx2x *bp, u32 command)
+ 	if (CHIP_REV_IS_SLOW(bp))
+ 		msleep(900);
+ 
+-	rc = SHMEM_RD(bp, drv_fw_mb[port].fw_mb_header);
+-
++	rc = SHMEM_RD(bp, func_mb[port].fw_mb_header);
+ 	DP(BNX2X_MSG_MCP, "read (%x) seq is (%x) from FW MB\n", rc, seq);
+ 
+ 	/* is this a reply to our command? */
+ 	if (seq == (rc & FW_MSG_SEQ_NUMBER_MASK)) {
+ 		rc &= FW_MSG_CODE_MASK;
++
+ 	} else {
+ 		/* FW BUG! */
+ 		BNX2X_ERR("FW failed to respond!\n");
+ 		bnx2x_fw_dump(bp);
+ 		rc = 0;
+ 	}
++
+ 	return rc;
+ }
+ 
+@@ -5869,7 +6791,7 @@ static int bnx2x_alloc_mem(struct bnx2x *bp)
+ 	for (i = 0; i < 16*1024; i += 64)
+ 		* (u64 *)((char *)bp->t2 + i + 56) = bp->t2_mapping + i + 64;
+ 
+-	/* now sixup the last line in the block to point to the next block */
++	/* now fixup the last line in the block to point to the next block */
+ 	*(u64 *)((char *)bp->t2 + 1024*16-8) = bp->t2_mapping;
+ 
+ 	/* Timer block array (MAX_CONN*8) phys uncached for now 1024 conns */
+@@ -5950,22 +6872,19 @@ static void bnx2x_free_msix_irqs(struct bnx2x *bp)
+ 	int i;
+ 
+ 	free_irq(bp->msix_table[0].vector, bp->dev);
+-	DP(NETIF_MSG_IFDOWN, "rleased sp irq (%d)\n",
++	DP(NETIF_MSG_IFDOWN, "released sp irq (%d)\n",
+ 	   bp->msix_table[0].vector);
+ 
+ 	for_each_queue(bp, i) {
+-		DP(NETIF_MSG_IFDOWN, "about to rlease fp #%d->%d irq  "
++		DP(NETIF_MSG_IFDOWN, "about to release fp #%d->%d irq  "
+ 		   "state(%x)\n", i, bp->msix_table[i + 1].vector,
+ 		   bnx2x_fp(bp, i, state));
+ 
+-		if (bnx2x_fp(bp, i, state) != BNX2X_FP_STATE_CLOSED) {
+-
+-			free_irq(bp->msix_table[i + 1].vector, &bp->fp[i]);
+-			bnx2x_fp(bp, i, state) = BNX2X_FP_STATE_CLOSED;
+-
+-		} else
+-			DP(NETIF_MSG_IFDOWN, "irq not freed\n");
++		if (bnx2x_fp(bp, i, state) != BNX2X_FP_STATE_CLOSED)
++			BNX2X_ERR("IRQ of fp #%d being freed while "
++				  "state != closed\n", i);
+ 
++		free_irq(bp->msix_table[i + 1].vector, &bp->fp[i]);
+ 	}
+ 
+ }
+@@ -5995,7 +6914,7 @@ static int bnx2x_enable_msix(struct bnx2x *bp)
+ 
+ 	if (pci_enable_msix(bp->pdev, &bp->msix_table[0],
+ 				     bp->num_queues + 1)){
+-		BNX2X_ERR("failed to enable msix\n");
++		BNX2X_LOG("failed to enable MSI-X\n");
+ 		return -1;
+ 
+ 	}
+@@ -6010,11 +6929,8 @@ static int bnx2x_enable_msix(struct bnx2x *bp)
+ static int bnx2x_req_msix_irqs(struct bnx2x *bp)
+ {
+ 
+-
+ 	int i, rc;
+ 
+-	DP(NETIF_MSG_IFUP, "about to request sp irq\n");
+-
+ 	rc = request_irq(bp->msix_table[0].vector, bnx2x_msix_sp_int, 0,
+ 			 bp->dev->name, bp->dev);
+ 
+@@ -6029,7 +6945,8 @@ static int bnx2x_req_msix_irqs(struct bnx2x *bp)
+ 				 bp->dev->name, &bp->fp[i]);
+ 
+ 		if (rc) {
+-			BNX2X_ERR("request fp #%d irq failed\n", i);
++			BNX2X_ERR("request fp #%d irq failed  "
++				  "rc %d\n", i, rc);
+ 			bnx2x_free_msix_irqs(bp);
+ 			return -EBUSY;
+ 		}
+@@ -6109,8 +7026,8 @@ static int bnx2x_wait_ramrod(struct bnx2x *bp, int state, int idx,
+ 	/* can take a while if any port is running */
+ 	int timeout = 500;
+ 
+-	/* DP("waiting for state to become %d on IDX [%d]\n",
+-	state, sb_idx); */
++	DP(NETIF_MSG_IFUP, "%s for state to become %x on IDX [%d]\n",
++	   poll ? "polling" : "waiting", state, idx);
+ 
+ 	might_sleep();
+ 
+@@ -6128,7 +7045,7 @@ static int bnx2x_wait_ramrod(struct bnx2x *bp, int state, int idx,
+ 
+ 		mb(); /* state is changed by bnx2x_sp_event()*/
+ 
+-		if (*state_p != state)
++		if (*state_p == state)
+ 			return 0;
+ 
+ 		timeout--;
+@@ -6136,17 +7053,17 @@ static int bnx2x_wait_ramrod(struct bnx2x *bp, int state, int idx,
+ 
+ 	}
+ 
+-
+ 	/* timeout! */
+-	BNX2X_ERR("timeout waiting for ramrod %d on %d\n", state, idx);
+-	return -EBUSY;
++	BNX2X_ERR("timeout %s for state %x on IDX [%d]\n",
++		  poll ? "polling" : "waiting", state, idx);
+ 
++	return -EBUSY;
+ }
+ 
+ static int bnx2x_setup_leading(struct bnx2x *bp)
+ {
+ 
+-	/* reset IGU staae */
++	/* reset IGU state */
+ 	bnx2x_ack_sb(bp, DEF_SB_ID, CSTORM_ID, 0, IGU_INT_ENABLE, 0);
+ 
+ 	/* SETUP ramrod */
+@@ -6162,12 +7079,13 @@ static int bnx2x_setup_multi(struct bnx2x *bp, int index)
+ 	/* reset IGU state */
+ 	bnx2x_ack_sb(bp, index, CSTORM_ID, 0, IGU_INT_ENABLE, 0);
+ 
++	/* SETUP ramrod */
+ 	bp->fp[index].state = BNX2X_FP_STATE_OPENING;
+ 	bnx2x_sp_post(bp, RAMROD_CMD_ID_ETH_CLIENT_SETUP, index, 0, index, 0);
+ 
+ 	/* Wait for completion */
+ 	return bnx2x_wait_ramrod(bp, BNX2X_FP_STATE_OPEN, index,
+-				 &(bp->fp[index].state), 1);
++				 &(bp->fp[index].state), 0);
+ 
+ }
+ 
+@@ -6177,8 +7095,8 @@ static void bnx2x_set_rx_mode(struct net_device *dev);
+ 
+ static int bnx2x_nic_load(struct bnx2x *bp, int req_irq)
+ {
+-	int rc;
+-	int i = 0;
++	u32 load_code;
++	int i;
+ 
+ 	bp->state = BNX2X_STATE_OPENING_WAIT4_LOAD;
+ 
+@@ -6188,26 +7106,28 @@ static int bnx2x_nic_load(struct bnx2x *bp, int req_irq)
+ 	   initialized, otherwise - not.
+ 	*/
+ 	if (!nomcp) {
+-		rc = bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_REQ);
+-		if (rc == FW_MSG_CODE_DRV_LOAD_REFUSED) {
++		load_code = bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_REQ);
++		if (!load_code) {
++			BNX2X_ERR("MCP response failure, unloading\n");
++			return -EBUSY;
++		}
++		if (load_code == FW_MSG_CODE_DRV_LOAD_REFUSED) {
++			BNX2X_ERR("MCP refused load request, unloading\n");
+ 			return -EBUSY; /* other port in diagnostic mode */
+ 		}
+ 	} else {
+-		rc = FW_MSG_CODE_DRV_LOAD_COMMON;
++		load_code = FW_MSG_CODE_DRV_LOAD_COMMON;
+ 	}
+ 
+-	DP(NETIF_MSG_IFUP, "set number of queues to %d\n", bp->num_queues);
+-
+ 	/* if we can't use msix we only need one fp,
+ 	 * so try to enable msix with the requested number of fp's
+ 	 * and fallback to inta with one fp
+ 	 */
+ 	if (req_irq) {
+-
+ 		if (use_inta) {
+ 			bp->num_queues = 1;
+ 		} else {
+-			if (use_multi > 1 && use_multi <= 16)
++			if ((use_multi > 1) && (use_multi <= 16))
+ 				/* user requested number */
+ 				bp->num_queues = use_multi;
+ 			else if (use_multi == 1)
+@@ -6216,15 +7136,17 @@ static int bnx2x_nic_load(struct bnx2x *bp, int req_irq)
+ 				bp->num_queues = 1;
+ 
+ 			if (bnx2x_enable_msix(bp)) {
+-				/* faild to enable msix */
++				/* failed to enable msix */
+ 				bp->num_queues = 1;
+ 				if (use_multi)
+-					BNX2X_ERR("Muti requested but failed"
++					BNX2X_ERR("Multi requested but failed"
+ 						  " to enable MSI-X\n");
+ 			}
+ 		}
+ 	}
+ 
++	DP(NETIF_MSG_IFUP, "set number of queues to %d\n", bp->num_queues);
++
+ 	if (bnx2x_alloc_mem(bp))
+ 		return -ENOMEM;
+ 
+@@ -6232,13 +7154,13 @@ static int bnx2x_nic_load(struct bnx2x *bp, int req_irq)
+ 		if (bp->flags & USING_MSIX_FLAG) {
+ 			if (bnx2x_req_msix_irqs(bp)) {
+ 				pci_disable_msix(bp->pdev);
+-				goto out_error;
++				goto load_error;
+ 			}
+ 
+ 		} else {
+ 			if (bnx2x_req_irq(bp)) {
+ 				BNX2X_ERR("IRQ request failed, aborting\n");
+-				goto out_error;
++				goto load_error;
+ 			}
+ 		}
+ 	}
+@@ -6249,31 +7171,25 @@ static int bnx2x_nic_load(struct bnx2x *bp, int req_irq)
+ 
+ 
+ 	/* Initialize HW */
+-	if (bnx2x_function_init(bp, (rc == FW_MSG_CODE_DRV_LOAD_COMMON))) {
++	if (bnx2x_function_init(bp,
++				(load_code == FW_MSG_CODE_DRV_LOAD_COMMON))) {
+ 		BNX2X_ERR("HW init failed, aborting\n");
+-		goto out_error;
++		goto load_error;
+ 	}
+ 
+ 
+ 	atomic_set(&bp->intr_sem, 0);
+ 
+-	/* Reenable SP tasklet */
+-	/*if (bp->sp_task_en) { 	       */
+-	/*        tasklet_enable(&bp->sp_task);*/
+-	/*} else {      		       */
+-	/*        bp->sp_task_en = 1;          */
+-	/*}     			       */
+ 
+ 	/* Setup NIC internals and enable interrupts */
+ 	bnx2x_nic_init(bp);
+ 
+ 	/* Send LOAD_DONE command to MCP */
+ 	if (!nomcp) {
+-		rc = bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE);
+-		DP(NETIF_MSG_IFUP, "rc = 0x%x\n", rc);
+-		if (!rc) {
++		load_code = bnx2x_fw_command(bp, DRV_MSG_CODE_LOAD_DONE);
++		if (!load_code) {
+ 			BNX2X_ERR("MCP response failure, unloading\n");
+-			goto int_disable;
++			goto load_int_disable;
+ 		}
+ 	}
+ 
+@@ -6285,11 +7201,11 @@ static int bnx2x_nic_load(struct bnx2x *bp, int req_irq)
+ 		napi_enable(&bnx2x_fp(bp, i, napi));
+ 
+ 	if (bnx2x_setup_leading(bp))
+-		goto stop_netif;
++		goto load_stop_netif;
+ 
+ 	for_each_nondefault_queue(bp, i)
+ 		if (bnx2x_setup_multi(bp, i))
+-			goto stop_netif;
++			goto load_stop_netif;
+ 
+ 	bnx2x_set_mac_addr(bp);
+ 
+@@ -6313,42 +7229,24 @@ static int bnx2x_nic_load(struct bnx2x *bp, int req_irq)
+ 
+ 	return 0;
+ 
+-stop_netif:
++load_stop_netif:
+ 	for_each_queue(bp, i)
+ 		napi_disable(&bnx2x_fp(bp, i, napi));
+ 
+-int_disable:
+-	bnx2x_disable_int_sync(bp);
++load_int_disable:
++	bnx2x_int_disable_sync(bp);
+ 
+ 	bnx2x_free_skbs(bp);
+ 	bnx2x_free_irq(bp);
+ 
+-out_error:
++load_error:
+ 	bnx2x_free_mem(bp);
+ 
+ 	/* TBD we really need to reset the chip
+ 	   if we want to recover from this */
+-	return rc;
++	return -EBUSY;
+ }
+ 
+-static void bnx2x_netif_stop(struct bnx2x *bp)
+-{
+-	int i;
+-
+-	bp->rx_mode = BNX2X_RX_MODE_NONE;
+-	bnx2x_set_storm_rx_mode(bp);
+-
+-	bnx2x_disable_int_sync(bp);
+-	bnx2x_link_reset(bp);
+-
+-	for_each_queue(bp, i)
+-		napi_disable(&bnx2x_fp(bp, i, napi));
+-
+-	if (netif_running(bp->dev)) {
+-		netif_tx_disable(bp->dev);
+-		bp->dev->trans_start = jiffies; /* prevent tx timeout */
+-	}
+-}
+ 
+ static void bnx2x_reset_chip(struct bnx2x *bp, u32 reset_code)
+ {
+@@ -6401,20 +7299,20 @@ static int bnx2x_stop_multi(struct bnx2x *bp, int index)
+ 
+ 	int rc;
+ 
+-	/* halt the connnection */
++	/* halt the connection */
+ 	bp->fp[index].state = BNX2X_FP_STATE_HALTING;
+ 	bnx2x_sp_post(bp, RAMROD_CMD_ID_ETH_HALT, index, 0, 0, 0);
+ 
+ 
+ 	rc = bnx2x_wait_ramrod(bp, BNX2X_FP_STATE_HALTED, index,
+ 				       &(bp->fp[index].state), 1);
+-	if (rc) /* timout */
++	if (rc) /* timeout */
+ 		return rc;
+ 
+ 	/* delete cfc entry */
+ 	bnx2x_sp_post(bp, RAMROD_CMD_ID_ETH_CFC_DEL, index, 0, 0, 1);
+ 
+-	return bnx2x_wait_ramrod(bp, BNX2X_FP_STATE_DELETED, index,
++	return bnx2x_wait_ramrod(bp, BNX2X_FP_STATE_CLOSED, index,
+ 				 &(bp->fp[index].state), 1);
+ 
+ }
+@@ -6422,8 +7320,8 @@ static int bnx2x_stop_multi(struct bnx2x *bp, int index)
+ 
+ static void bnx2x_stop_leading(struct bnx2x *bp)
+ {
+-
+-	/* if the other port is hadling traffic,
++	u16 dsb_sp_prod_idx;
++	/* if the other port is handling traffic,
+ 	   this can take a lot of time */
+ 	int timeout = 500;
+ 
+@@ -6437,52 +7335,71 @@ static void bnx2x_stop_leading(struct bnx2x *bp)
+ 			       &(bp->fp[0].state), 1))
+ 		return;
+ 
+-	bp->dsb_sp_prod_idx = *bp->dsb_sp_prod;
++	dsb_sp_prod_idx = *bp->dsb_sp_prod;
+ 
+-	/* Send CFC_DELETE ramrod */
++	/* Send PORT_DELETE ramrod */
+ 	bnx2x_sp_post(bp, RAMROD_CMD_ID_ETH_PORT_DEL, 0, 0, 0, 1);
+ 
+-	/*
+-	   Wait for completion.
++	/* Wait for completion to arrive on default status block
+ 	   we are going to reset the chip anyway
+ 	   so there is not much to do if this times out
+ 	 */
+-	while (bp->dsb_sp_prod_idx == *bp->dsb_sp_prod && timeout) {
+-			timeout--;
+-			msleep(1);
++	while ((dsb_sp_prod_idx == *bp->dsb_sp_prod) && timeout) {
++		timeout--;
++		msleep(1);
+ 	}
+-
++	if (!timeout) {
++		DP(NETIF_MSG_IFDOWN, "timeout polling for completion "
++		   "dsb_sp_prod 0x%x != dsb_sp_prod_idx 0x%x\n",
++		   *bp->dsb_sp_prod, dsb_sp_prod_idx);
++	}
++	bp->state = BNX2X_STATE_CLOSING_WAIT4_UNLOAD;
++	bp->fp[0].state = BNX2X_FP_STATE_CLOSED;
+ }
+ 
+-static int bnx2x_nic_unload(struct bnx2x *bp, int fre_irq)
++
++static int bnx2x_nic_unload(struct bnx2x *bp, int free_irq)
+ {
+ 	u32 reset_code = 0;
+-	int rc;
+-	int i;
++	int i, timeout;
+ 
+ 	bp->state = BNX2X_STATE_CLOSING_WAIT4_HALT;
+ 
+-	/* Calling flush_scheduled_work() may deadlock because
+-	 * linkwatch_event() may be on the workqueue and it will try to get
+-	 * the rtnl_lock which we are holding.
+-	 */
++	del_timer_sync(&bp->timer);
+ 
+-	while (bp->in_reset_task)
+-		msleep(1);
++	bp->rx_mode = BNX2X_RX_MODE_NONE;
++	bnx2x_set_storm_rx_mode(bp);
+ 
+-	/* Delete the timer: do it before disabling interrupts, as it
+-	   may be stil STAT_QUERY ramrod pending after stopping the timer */
+-	del_timer_sync(&bp->timer);
++	if (netif_running(bp->dev)) {
++		netif_tx_disable(bp->dev);
++		bp->dev->trans_start = jiffies;	/* prevent tx timeout */
++	}
++
++	/* Wait until all fast path tasks complete */
++	for_each_queue(bp, i) {
++		struct bnx2x_fastpath *fp = &bp->fp[i];
++
++		timeout = 1000;
++		while (bnx2x_has_work(fp) && (timeout--))
++			msleep(1);
++		if (!timeout)
++			BNX2X_ERR("timeout waiting for queue[%d]\n", i);
++	}
+ 
+ 	/* Wait until stat ramrod returns and all SP tasks complete */
+-	while (bp->stat_pending && (bp->spq_left != MAX_SPQ_PENDING))
++	timeout = 1000;
++	while ((bp->stat_pending || (bp->spq_left != MAX_SPQ_PENDING)) &&
++	       (timeout--))
+ 		msleep(1);
+ 
+-	/* Stop fast path, disable MAC, disable interrupts, disable napi */
+-	bnx2x_netif_stop(bp);
++	for_each_queue(bp, i)
++		napi_disable(&bnx2x_fp(bp, i, napi));
++	/* Disable interrupts after Tx and Rx are disabled on stack level */
++	bnx2x_int_disable_sync(bp);
+ 
+ 	if (bp->flags & NO_WOL_FLAG)
+ 		reset_code = DRV_MSG_CODE_UNLOAD_REQ_WOL_MCP;
++
+ 	else if (bp->wol) {
+ 		u32 emac_base = bp->port ? GRCBASE_EMAC0 : GRCBASE_EMAC1;
+ 		u8 *mac_addr = bp->dev->dev_addr;
+@@ -6499,28 +7416,37 @@ static int bnx2x_nic_unload(struct bnx2x *bp, int fre_irq)
+ 		EMAC_WR(EMAC_REG_EMAC_MAC_MATCH + 4, val);
+ 
+ 		reset_code = DRV_MSG_CODE_UNLOAD_REQ_WOL_EN;
++
+ 	} else
+ 		reset_code = DRV_MSG_CODE_UNLOAD_REQ_WOL_DIS;
+ 
++	/* Close multi and leading connections */
+ 	for_each_nondefault_queue(bp, i)
+ 		if (bnx2x_stop_multi(bp, i))
+-			goto error;
+-
++			goto unload_error;
+ 
+ 	bnx2x_stop_leading(bp);
++	if ((bp->state != BNX2X_STATE_CLOSING_WAIT4_UNLOAD) ||
++	    (bp->fp[0].state != BNX2X_FP_STATE_CLOSED)) {
++		DP(NETIF_MSG_IFDOWN, "failed to close leading properly!"
++		   "state 0x%x  fp[0].state 0x%x",
++		   bp->state, bp->fp[0].state);
++	}
++
++unload_error:
++	bnx2x_link_reset(bp);
+ 
+-error:
+ 	if (!nomcp)
+-		rc = bnx2x_fw_command(bp, reset_code);
++		reset_code = bnx2x_fw_command(bp, reset_code);
+ 	else
+-		rc = FW_MSG_CODE_DRV_UNLOAD_COMMON;
++		reset_code = FW_MSG_CODE_DRV_UNLOAD_COMMON;
+ 
+ 	/* Release IRQs */
+-	if (fre_irq)
++	if (free_irq)
+ 		bnx2x_free_irq(bp);
+ 
+ 	/* Reset the chip */
+-	bnx2x_reset_chip(bp, rc);
++	bnx2x_reset_chip(bp, reset_code);
+ 
+ 	/* Report UNLOAD_DONE to MCP */
+ 	if (!nomcp)
+@@ -6531,8 +7457,7 @@ error:
+ 	bnx2x_free_mem(bp);
+ 
+ 	bp->state = BNX2X_STATE_CLOSED;
+-	/* Set link down */
+-	bp->link_up = 0;
++
+ 	netif_carrier_off(bp->dev);
+ 
+ 	return 0;
+@@ -6568,7 +7493,7 @@ static void bnx2x_link_settings_supported(struct bnx2x *bp, u32 switch_cfg)
+ 					  SUPPORTED_100baseT_Half |
+ 					  SUPPORTED_100baseT_Full |
+ 					  SUPPORTED_1000baseT_Full |
+-					  SUPPORTED_2500baseT_Full |
++					  SUPPORTED_2500baseX_Full |
+ 					  SUPPORTED_TP | SUPPORTED_FIBRE |
+ 					  SUPPORTED_Autoneg |
+ 					  SUPPORTED_Pause |
+@@ -6581,10 +7506,10 @@ static void bnx2x_link_settings_supported(struct bnx2x *bp, u32 switch_cfg)
+ 
+ 			bp->phy_flags |= PHY_SGMII_FLAG;
+ 
+-			bp->supported |= (/* SUPPORTED_10baseT_Half |
+-					     SUPPORTED_10baseT_Full |
+-					     SUPPORTED_100baseT_Half |
+-					     SUPPORTED_100baseT_Full |*/
++			bp->supported |= (SUPPORTED_10baseT_Half |
++					  SUPPORTED_10baseT_Full |
++					  SUPPORTED_100baseT_Half |
++					  SUPPORTED_100baseT_Full |
+ 					  SUPPORTED_1000baseT_Full |
+ 					  SUPPORTED_TP | SUPPORTED_FIBRE |
+ 					  SUPPORTED_Autoneg |
+@@ -6620,7 +7545,7 @@ static void bnx2x_link_settings_supported(struct bnx2x *bp, u32 switch_cfg)
+ 					  SUPPORTED_100baseT_Half |
+ 					  SUPPORTED_100baseT_Full |
+ 					  SUPPORTED_1000baseT_Full |
+-					  SUPPORTED_2500baseT_Full |
++					  SUPPORTED_2500baseX_Full |
+ 					  SUPPORTED_10000baseT_Full |
+ 					  SUPPORTED_TP | SUPPORTED_FIBRE |
+ 					  SUPPORTED_Autoneg |
+@@ -6629,12 +7554,46 @@ static void bnx2x_link_settings_supported(struct bnx2x *bp, u32 switch_cfg)
+ 			break;
+ 
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8705:
++			BNX2X_DEV_INFO("ext_phy_type 0x%x (8705)\n",
++					ext_phy_type);
++
++			bp->supported |= (SUPPORTED_10000baseT_Full |
++					  SUPPORTED_FIBRE |
++					  SUPPORTED_Pause |
++					  SUPPORTED_Asym_Pause);
++			break;
++
+ 		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8706:
+-			BNX2X_DEV_INFO("ext_phy_type 0x%x (8705/6)\n",
++			BNX2X_DEV_INFO("ext_phy_type 0x%x (8706)\n",
++				       ext_phy_type);
++
++			bp->supported |= (SUPPORTED_10000baseT_Full |
++					  SUPPORTED_1000baseT_Full |
++					  SUPPORTED_Autoneg |
++					  SUPPORTED_FIBRE |
++					  SUPPORTED_Pause |
++					  SUPPORTED_Asym_Pause);
++			break;
++
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8072:
++			BNX2X_DEV_INFO("ext_phy_type 0x%x (8072)\n",
+ 				       ext_phy_type);
+ 
+ 			bp->supported |= (SUPPORTED_10000baseT_Full |
++					  SUPPORTED_1000baseT_Full |
+ 					  SUPPORTED_FIBRE |
++					  SUPPORTED_Autoneg |
++					  SUPPORTED_Pause |
++					  SUPPORTED_Asym_Pause);
++			break;
++
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_SFX7101:
++			BNX2X_DEV_INFO("ext_phy_type 0x%x (SFX7101)\n",
++				       ext_phy_type);
++
++			bp->supported |= (SUPPORTED_10000baseT_Full |
++					  SUPPORTED_TP |
++					  SUPPORTED_Autoneg |
+ 					  SUPPORTED_Pause |
+ 					  SUPPORTED_Asym_Pause);
+ 			break;
+@@ -6691,7 +7650,7 @@ static void bnx2x_link_settings_supported(struct bnx2x *bp, u32 switch_cfg)
+ 				   SUPPORTED_1000baseT_Full);
+ 
+ 	if (!(bp->speed_cap_mask & PORT_HW_CFG_SPEED_CAPABILITY_D0_2_5G))
+-		bp->supported &= ~SUPPORTED_2500baseT_Full;
++		bp->supported &= ~SUPPORTED_2500baseX_Full;
+ 
+ 	if (!(bp->speed_cap_mask & PORT_HW_CFG_SPEED_CAPABILITY_D0_10G))
+ 		bp->supported &= ~SUPPORTED_10000baseT_Full;
+@@ -6711,13 +7670,8 @@ static void bnx2x_link_settings_requested(struct bnx2x *bp)
+ 			bp->req_line_speed = 0;
+ 			bp->advertising = bp->supported;
+ 		} else {
+-			u32 ext_phy_type;
+-
+-			ext_phy_type = XGXS_EXT_PHY_TYPE(bp);
+-			if ((ext_phy_type ==
+-				PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8705) ||
+-			    (ext_phy_type ==
+-				PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8706)) {
++			if (XGXS_EXT_PHY_TYPE(bp) ==
++				PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8705) {
+ 				/* force 10G, no AN */
+ 				bp->req_line_speed = SPEED_10000;
+ 				bp->advertising =
+@@ -6734,8 +7688,7 @@ static void bnx2x_link_settings_requested(struct bnx2x *bp)
+ 		break;
+ 
+ 	case PORT_FEATURE_LINK_SPEED_10M_FULL:
+-		if (bp->speed_cap_mask &
+-		    PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL) {
++		if (bp->supported & SUPPORTED_10baseT_Full) {
+ 			bp->req_line_speed = SPEED_10;
+ 			bp->advertising = (ADVERTISED_10baseT_Full |
+ 					   ADVERTISED_TP);
+@@ -6749,8 +7702,7 @@ static void bnx2x_link_settings_requested(struct bnx2x *bp)
+ 		break;
+ 
+ 	case PORT_FEATURE_LINK_SPEED_10M_HALF:
+-		if (bp->speed_cap_mask &
+-		    PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF) {
++		if (bp->supported & SUPPORTED_10baseT_Half) {
+ 			bp->req_line_speed = SPEED_10;
+ 			bp->req_duplex = DUPLEX_HALF;
+ 			bp->advertising = (ADVERTISED_10baseT_Half |
+@@ -6765,8 +7717,7 @@ static void bnx2x_link_settings_requested(struct bnx2x *bp)
+ 		break;
+ 
+ 	case PORT_FEATURE_LINK_SPEED_100M_FULL:
+-		if (bp->speed_cap_mask &
+-		    PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL) {
++		if (bp->supported & SUPPORTED_100baseT_Full) {
+ 			bp->req_line_speed = SPEED_100;
+ 			bp->advertising = (ADVERTISED_100baseT_Full |
+ 					   ADVERTISED_TP);
+@@ -6780,8 +7731,7 @@ static void bnx2x_link_settings_requested(struct bnx2x *bp)
+ 		break;
+ 
+ 	case PORT_FEATURE_LINK_SPEED_100M_HALF:
+-		if (bp->speed_cap_mask &
+-		    PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF) {
++		if (bp->supported & SUPPORTED_100baseT_Half) {
+ 			bp->req_line_speed = SPEED_100;
+ 			bp->req_duplex = DUPLEX_HALF;
+ 			bp->advertising = (ADVERTISED_100baseT_Half |
+@@ -6796,8 +7746,7 @@ static void bnx2x_link_settings_requested(struct bnx2x *bp)
+ 		break;
+ 
+ 	case PORT_FEATURE_LINK_SPEED_1G:
+-		if (bp->speed_cap_mask &
+-		    PORT_HW_CFG_SPEED_CAPABILITY_D0_1G) {
++		if (bp->supported & SUPPORTED_1000baseT_Full) {
+ 			bp->req_line_speed = SPEED_1000;
+ 			bp->advertising = (ADVERTISED_1000baseT_Full |
+ 					   ADVERTISED_TP);
+@@ -6811,10 +7760,9 @@ static void bnx2x_link_settings_requested(struct bnx2x *bp)
+ 		break;
+ 
+ 	case PORT_FEATURE_LINK_SPEED_2_5G:
+-		if (bp->speed_cap_mask &
+-		    PORT_HW_CFG_SPEED_CAPABILITY_D0_2_5G) {
++		if (bp->supported & SUPPORTED_2500baseX_Full) {
+ 			bp->req_line_speed = SPEED_2500;
+-			bp->advertising = (ADVERTISED_2500baseT_Full |
++			bp->advertising = (ADVERTISED_2500baseX_Full |
+ 					   ADVERTISED_TP);
+ 		} else {
+ 			BNX2X_ERR("NVRAM config error. "
+@@ -6828,15 +7776,7 @@ static void bnx2x_link_settings_requested(struct bnx2x *bp)
+ 	case PORT_FEATURE_LINK_SPEED_10G_CX4:
+ 	case PORT_FEATURE_LINK_SPEED_10G_KX4:
+ 	case PORT_FEATURE_LINK_SPEED_10G_KR:
+-		if (!(bp->phy_flags & PHY_XGXS_FLAG)) {
+-			BNX2X_ERR("NVRAM config error. "
+-				  "Invalid link_config 0x%x"
+-				  "  phy_flags 0x%x\n",
+-				  bp->link_config, bp->phy_flags);
+-			return;
+-		}
+-		if (bp->speed_cap_mask &
+-		    PORT_HW_CFG_SPEED_CAPABILITY_D0_10G) {
++		if (bp->supported & SUPPORTED_10000baseT_Full) {
+ 			bp->req_line_speed = SPEED_10000;
+ 			bp->advertising = (ADVERTISED_10000baseT_Full |
+ 					   ADVERTISED_FIBRE);
+@@ -6863,43 +7803,13 @@ static void bnx2x_link_settings_requested(struct bnx2x *bp)
+ 
+ 	bp->req_flow_ctrl = (bp->link_config &
+ 			     PORT_FEATURE_FLOW_CONTROL_MASK);
+-	/* Please refer to Table 28B-3 of the 802.3ab-1999 spec */
+-	switch (bp->req_flow_ctrl) {
+-	case FLOW_CTRL_AUTO:
++	if ((bp->req_flow_ctrl == FLOW_CTRL_AUTO) &&
++	    (bp->supported & SUPPORTED_Autoneg))
+ 		bp->req_autoneg |= AUTONEG_FLOW_CTRL;
+-		if (bp->dev->mtu <= 4500) {
+-			bp->pause_mode = PAUSE_BOTH;
+-			bp->advertising |= (ADVERTISED_Pause |
+-					    ADVERTISED_Asym_Pause);
+-		} else {
+-			bp->pause_mode = PAUSE_ASYMMETRIC;
+-			bp->advertising |= ADVERTISED_Asym_Pause;
+-		}
+-		break;
+-
+-	case FLOW_CTRL_TX:
+-		bp->pause_mode = PAUSE_ASYMMETRIC;
+-		bp->advertising |= ADVERTISED_Asym_Pause;
+-		break;
+-
+-	case FLOW_CTRL_RX:
+-	case FLOW_CTRL_BOTH:
+-		bp->pause_mode = PAUSE_BOTH;
+-		bp->advertising |= (ADVERTISED_Pause |
+-				    ADVERTISED_Asym_Pause);
+-		break;
+ 
+-	case FLOW_CTRL_NONE:
+-	default:
+-		bp->pause_mode = PAUSE_NONE;
+-		bp->advertising &= ~(ADVERTISED_Pause |
+-				     ADVERTISED_Asym_Pause);
+-		break;
+-	}
+-	BNX2X_DEV_INFO("req_autoneg 0x%x  req_flow_ctrl 0x%x\n"
+-	     KERN_INFO "  pause_mode %d  advertising 0x%x\n",
+-		       bp->req_autoneg, bp->req_flow_ctrl,
+-		       bp->pause_mode, bp->advertising);
++	BNX2X_DEV_INFO("req_autoneg 0x%x  req_flow_ctrl 0x%x"
++		       "  advertising 0x%x\n",
++		       bp->req_autoneg, bp->req_flow_ctrl, bp->advertising);
+ }
+ 
+ static void bnx2x_get_hwinfo(struct bnx2x *bp)
+@@ -6933,15 +7843,15 @@ static void bnx2x_get_hwinfo(struct bnx2x *bp)
+ 	val = SHMEM_RD(bp, validity_map[port]);
+ 	if ((val & (SHR_MEM_VALIDITY_DEV_INFO | SHR_MEM_VALIDITY_MB))
+ 		!= (SHR_MEM_VALIDITY_DEV_INFO | SHR_MEM_VALIDITY_MB))
+-		BNX2X_ERR("MCP validity signature bad\n");
++		BNX2X_ERR("BAD MCP validity signature\n");
+ 
+-	bp->fw_seq = (SHMEM_RD(bp, drv_fw_mb[port].drv_mb_header) &
++	bp->fw_seq = (SHMEM_RD(bp, func_mb[port].drv_mb_header) &
+ 		      DRV_MSG_SEQ_NUMBER_MASK);
+ 
+ 	bp->hw_config = SHMEM_RD(bp, dev_info.shared_hw_config.config);
+-
++	bp->board = SHMEM_RD(bp, dev_info.shared_hw_config.board);
+ 	bp->serdes_config =
+-		SHMEM_RD(bp, dev_info.port_hw_config[bp->port].serdes_config);
++		SHMEM_RD(bp, dev_info.port_hw_config[port].serdes_config);
+ 	bp->lane_config =
+ 		SHMEM_RD(bp, dev_info.port_hw_config[port].lane_config);
+ 	bp->ext_phy_config =
+@@ -6954,13 +7864,13 @@ static void bnx2x_get_hwinfo(struct bnx2x *bp)
+ 	bp->link_config =
+ 		SHMEM_RD(bp, dev_info.port_feature_config[port].link_config);
+ 
+-	BNX2X_DEV_INFO("hw_config (%08x)  serdes_config (%08x)\n"
++	BNX2X_DEV_INFO("hw_config (%08x) board (%08x)  serdes_config (%08x)\n"
+ 	     KERN_INFO "  lane_config (%08x)  ext_phy_config (%08x)\n"
+ 	     KERN_INFO "  speed_cap_mask (%08x)  link_config (%08x)"
+ 		       "  fw_seq (%08x)\n",
+-		       bp->hw_config, bp->serdes_config, bp->lane_config,
+-		       bp->ext_phy_config, bp->speed_cap_mask,
+-		       bp->link_config, bp->fw_seq);
++		       bp->hw_config, bp->board, bp->serdes_config,
++		       bp->lane_config, bp->ext_phy_config,
++		       bp->speed_cap_mask, bp->link_config, bp->fw_seq);
+ 
+ 	switch_cfg = (bp->link_config & PORT_FEATURE_CONNECTED_SWITCH_MASK);
+ 	bnx2x_link_settings_supported(bp, switch_cfg);
+@@ -7014,14 +7924,8 @@ static void bnx2x_get_hwinfo(struct bnx2x *bp)
+ 	return;
+ 
+ set_mac: /* only supposed to happen on emulation/FPGA */
+-	BNX2X_ERR("warning constant MAC workaround active\n");
+-	bp->dev->dev_addr[0] = 0;
+-	bp->dev->dev_addr[1] = 0x50;
+-	bp->dev->dev_addr[2] = 0xc2;
+-	bp->dev->dev_addr[3] = 0x2c;
+-	bp->dev->dev_addr[4] = 0x71;
+-	bp->dev->dev_addr[5] = port ? 0x0d : 0x0e;
+-
++	BNX2X_ERR("warning random MAC workaround active\n");
++	random_ether_addr(bp->dev->dev_addr);
+ 	memcpy(bp->dev->perm_addr, bp->dev->dev_addr, 6);
+ 
+ }
+@@ -7048,19 +7952,34 @@ static int bnx2x_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+ 	}
+ 
+ 	if (bp->phy_flags & PHY_XGXS_FLAG) {
+-		cmd->port = PORT_FIBRE;
+-	} else {
++		u32 ext_phy_type = XGXS_EXT_PHY_TYPE(bp);
++
++		switch (ext_phy_type) {
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT:
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8705:
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8706:
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8072:
++			cmd->port = PORT_FIBRE;
++			break;
++
++		case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_SFX7101:
++			cmd->port = PORT_TP;
++			break;
++
++		default:
++			DP(NETIF_MSG_LINK, "BAD XGXS ext_phy_config 0x%x\n",
++			   bp->ext_phy_config);
++		}
++	} else
+ 		cmd->port = PORT_TP;
+-	}
+ 
+ 	cmd->phy_address = bp->phy_addr;
+ 	cmd->transceiver = XCVR_INTERNAL;
+ 
+-	if (bp->req_autoneg & AUTONEG_SPEED) {
++	if (bp->req_autoneg & AUTONEG_SPEED)
+ 		cmd->autoneg = AUTONEG_ENABLE;
+-	} else {
++	else
+ 		cmd->autoneg = AUTONEG_DISABLE;
+-	}
+ 
+ 	cmd->maxtxpkt = 0;
+ 	cmd->maxrxpkt = 0;
+@@ -7091,8 +8010,10 @@ static int bnx2x_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+ 
+ 	switch (cmd->port) {
+ 	case PORT_TP:
+-		if (!(bp->supported & SUPPORTED_TP))
++		if (!(bp->supported & SUPPORTED_TP)) {
++			DP(NETIF_MSG_LINK, "TP not supported\n");
+ 			return -EINVAL;
++		}
+ 
+ 		if (bp->phy_flags & PHY_XGXS_FLAG) {
+ 			bnx2x_link_reset(bp);
+@@ -7102,8 +8023,10 @@ static int bnx2x_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+ 		break;
+ 
+ 	case PORT_FIBRE:
+-		if (!(bp->supported & SUPPORTED_FIBRE))
++		if (!(bp->supported & SUPPORTED_FIBRE)) {
++			DP(NETIF_MSG_LINK, "FIBRE not supported\n");
+ 			return -EINVAL;
++		}
+ 
+ 		if (!(bp->phy_flags & PHY_XGXS_FLAG)) {
+ 			bnx2x_link_reset(bp);
+@@ -7113,12 +8036,15 @@ static int bnx2x_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+ 		break;
+ 
+ 	default:
++		DP(NETIF_MSG_LINK, "Unknown port type\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (cmd->autoneg == AUTONEG_ENABLE) {
+-		if (!(bp->supported & SUPPORTED_Autoneg))
++		if (!(bp->supported & SUPPORTED_Autoneg)) {
++			DP(NETIF_MSG_LINK, "Autoneg not supported\n");
+ 			return -EINVAL;
++		}
+ 
+ 		/* advertise the requested speed and duplex if supported */
+ 		cmd->advertising &= bp->supported;
+@@ -7133,14 +8059,22 @@ static int bnx2x_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+ 		switch (cmd->speed) {
+ 		case SPEED_10:
+ 			if (cmd->duplex == DUPLEX_FULL) {
+-				if (!(bp->supported & SUPPORTED_10baseT_Full))
++				if (!(bp->supported &
++				      SUPPORTED_10baseT_Full)) {
++					DP(NETIF_MSG_LINK,
++					   "10M full not supported\n");
+ 					return -EINVAL;
++				}
+ 
+ 				advertising = (ADVERTISED_10baseT_Full |
+ 					       ADVERTISED_TP);
+ 			} else {
+-				if (!(bp->supported & SUPPORTED_10baseT_Half))
++				if (!(bp->supported &
++				      SUPPORTED_10baseT_Half)) {
++					DP(NETIF_MSG_LINK,
++					   "10M half not supported\n");
+ 					return -EINVAL;
++				}
+ 
+ 				advertising = (ADVERTISED_10baseT_Half |
+ 					       ADVERTISED_TP);
+@@ -7150,15 +8084,21 @@ static int bnx2x_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+ 		case SPEED_100:
+ 			if (cmd->duplex == DUPLEX_FULL) {
+ 				if (!(bp->supported &
+-						SUPPORTED_100baseT_Full))
++						SUPPORTED_100baseT_Full)) {
++					DP(NETIF_MSG_LINK,
++					   "100M full not supported\n");
+ 					return -EINVAL;
++				}
+ 
+ 				advertising = (ADVERTISED_100baseT_Full |
+ 					       ADVERTISED_TP);
+ 			} else {
+ 				if (!(bp->supported &
+-						SUPPORTED_100baseT_Half))
++						SUPPORTED_100baseT_Half)) {
++					DP(NETIF_MSG_LINK,
++					   "100M half not supported\n");
+ 					return -EINVAL;
++				}
+ 
+ 				advertising = (ADVERTISED_100baseT_Half |
+ 					       ADVERTISED_TP);
+@@ -7166,39 +8106,54 @@ static int bnx2x_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+ 			break;
+ 
+ 		case SPEED_1000:
+-			if (cmd->duplex != DUPLEX_FULL)
++			if (cmd->duplex != DUPLEX_FULL) {
++				DP(NETIF_MSG_LINK, "1G half not supported\n");
+ 				return -EINVAL;
++			}
+ 
+-			if (!(bp->supported & SUPPORTED_1000baseT_Full))
++			if (!(bp->supported & SUPPORTED_1000baseT_Full)) {
++				DP(NETIF_MSG_LINK, "1G full not supported\n");
+ 				return -EINVAL;
++			}
+ 
+ 			advertising = (ADVERTISED_1000baseT_Full |
+ 				       ADVERTISED_TP);
+ 			break;
+ 
+ 		case SPEED_2500:
+-			if (cmd->duplex != DUPLEX_FULL)
++			if (cmd->duplex != DUPLEX_FULL) {
++				DP(NETIF_MSG_LINK,
++				   "2.5G half not supported\n");
+ 				return -EINVAL;
++			}
+ 
+-			if (!(bp->supported & SUPPORTED_2500baseT_Full))
++			if (!(bp->supported & SUPPORTED_2500baseX_Full)) {
++				DP(NETIF_MSG_LINK,
++				   "2.5G full not supported\n");
+ 				return -EINVAL;
++			}
+ 
+-			advertising = (ADVERTISED_2500baseT_Full |
++			advertising = (ADVERTISED_2500baseX_Full |
+ 				       ADVERTISED_TP);
+ 			break;
+ 
+ 		case SPEED_10000:
+-			if (cmd->duplex != DUPLEX_FULL)
++			if (cmd->duplex != DUPLEX_FULL) {
++				DP(NETIF_MSG_LINK, "10G half not supported\n");
+ 				return -EINVAL;
++			}
+ 
+-			if (!(bp->supported & SUPPORTED_10000baseT_Full))
++			if (!(bp->supported & SUPPORTED_10000baseT_Full)) {
++				DP(NETIF_MSG_LINK, "10G full not supported\n");
+ 				return -EINVAL;
++			}
+ 
+ 			advertising = (ADVERTISED_10000baseT_Full |
+ 				       ADVERTISED_FIBRE);
+ 			break;
+ 
+ 		default:
++			DP(NETIF_MSG_LINK, "Unsupported speed\n");
+ 			return -EINVAL;
+ 		}
+ 
+@@ -7398,8 +8353,7 @@ static void bnx2x_disable_nvram_access(struct bnx2x *bp)
+ static int bnx2x_nvram_read_dword(struct bnx2x *bp, u32 offset, u32 *ret_val,
+ 				  u32 cmd_flags)
+ {
+-	int rc;
+-	int count, i;
++	int count, i, rc;
+ 	u32 val;
+ 
+ 	/* build the command word */
+@@ -7452,13 +8406,13 @@ static int bnx2x_nvram_read(struct bnx2x *bp, u32 offset, u8 *ret_buf,
+ 
+ 	if ((offset & 0x03) || (buf_size & 0x03) || (buf_size == 0)) {
+ 		DP(NETIF_MSG_NVM,
+-		   "Invalid paramter: offset 0x%x  buf_size 0x%x\n",
++		   "Invalid parameter: offset 0x%x  buf_size 0x%x\n",
+ 		   offset, buf_size);
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (offset + buf_size > bp->flash_size) {
+-		DP(NETIF_MSG_NVM, "Invalid paramter: offset (0x%x) +"
++		DP(NETIF_MSG_NVM, "Invalid parameter: offset (0x%x) +"
+ 				  " buf_size (0x%x) > flash_size (0x%x)\n",
+ 		   offset, buf_size, bp->flash_size);
+ 		return -EINVAL;
+@@ -7519,8 +8473,7 @@ static int bnx2x_get_eeprom(struct net_device *dev,
+ static int bnx2x_nvram_write_dword(struct bnx2x *bp, u32 offset, u32 val,
+ 				   u32 cmd_flags)
+ {
+-	int rc;
+-	int count, i;
++	int count, i, rc;
+ 
+ 	/* build the command word */
+ 	cmd_flags |= MCPR_NVM_COMMAND_DOIT | MCPR_NVM_COMMAND_WR;
+@@ -7557,7 +8510,7 @@ static int bnx2x_nvram_write_dword(struct bnx2x *bp, u32 offset, u32 val,
+ 	return rc;
+ }
+ 
+-#define BYTE_OFFSET(offset)     	(8 * (offset & 0x03))
++#define BYTE_OFFSET(offset)		(8 * (offset & 0x03))
+ 
+ static int bnx2x_nvram_write1(struct bnx2x *bp, u32 offset, u8 *data_buf,
+ 			      int buf_size)
+@@ -7568,7 +8521,7 @@ static int bnx2x_nvram_write1(struct bnx2x *bp, u32 offset, u8 *data_buf,
+ 	u32 val;
+ 
+ 	if (offset + buf_size > bp->flash_size) {
+-		DP(NETIF_MSG_NVM, "Invalid paramter: offset (0x%x) +"
++		DP(NETIF_MSG_NVM, "Invalid parameter: offset (0x%x) +"
+ 				  " buf_size (0x%x) > flash_size (0x%x)\n",
+ 		   offset, buf_size, bp->flash_size);
+ 		return -EINVAL;
+@@ -7621,13 +8574,13 @@ static int bnx2x_nvram_write(struct bnx2x *bp, u32 offset, u8 *data_buf,
+ 
+ 	if ((offset & 0x03) || (buf_size & 0x03) || (buf_size == 0)) {
+ 		DP(NETIF_MSG_NVM,
+-		   "Invalid paramter: offset 0x%x  buf_size 0x%x\n",
++		   "Invalid parameter: offset 0x%x  buf_size 0x%x\n",
+ 		   offset, buf_size);
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (offset + buf_size > bp->flash_size) {
+-		DP(NETIF_MSG_NVM, "Invalid paramter: offset (0x%x) +"
++		DP(NETIF_MSG_NVM, "Invalid parameter: offset (0x%x) +"
+ 				  " buf_size (0x%x) > flash_size (0x%x)\n",
+ 		   offset, buf_size, bp->flash_size);
+ 		return -EINVAL;
+@@ -7788,52 +8741,29 @@ static int bnx2x_set_pauseparam(struct net_device *dev,
+ 	   DP_LEVEL "  autoneg %d  rx_pause %d  tx_pause %d\n",
+ 	   epause->cmd, epause->autoneg, epause->rx_pause, epause->tx_pause);
+ 
+-	bp->req_flow_ctrl = FLOW_CTRL_AUTO;
+ 	if (epause->autoneg) {
+-		bp->req_autoneg |= AUTONEG_FLOW_CTRL;
+-		if (bp->dev->mtu <= 4500) {
+-			bp->pause_mode = PAUSE_BOTH;
+-			bp->advertising |= (ADVERTISED_Pause |
+-					    ADVERTISED_Asym_Pause);
+-		} else {
+-			bp->pause_mode = PAUSE_ASYMMETRIC;
+-			bp->advertising |= ADVERTISED_Asym_Pause;
++		if (!(bp->supported & SUPPORTED_Autoneg)) {
++			DP(NETIF_MSG_LINK, "Autoneg not supported\n");
++			return -EINVAL;
+ 		}
+ 
+-	} else {
++		bp->req_autoneg |= AUTONEG_FLOW_CTRL;
++	} else
+ 		bp->req_autoneg &= ~AUTONEG_FLOW_CTRL;
+ 
+-		if (epause->rx_pause)
+-			bp->req_flow_ctrl |= FLOW_CTRL_RX;
+-		if (epause->tx_pause)
+-			bp->req_flow_ctrl |= FLOW_CTRL_TX;
+-
+-		switch (bp->req_flow_ctrl) {
+-		case FLOW_CTRL_AUTO:
+-			bp->req_flow_ctrl = FLOW_CTRL_NONE;
+-			bp->pause_mode = PAUSE_NONE;
+-			bp->advertising &= ~(ADVERTISED_Pause |
+-					     ADVERTISED_Asym_Pause);
+-			break;
++	bp->req_flow_ctrl = FLOW_CTRL_AUTO;
+ 
+-		case FLOW_CTRL_TX:
+-			bp->pause_mode = PAUSE_ASYMMETRIC;
+-			bp->advertising |= ADVERTISED_Asym_Pause;
+-			break;
++	if (epause->rx_pause)
++		bp->req_flow_ctrl |= FLOW_CTRL_RX;
++	if (epause->tx_pause)
++		bp->req_flow_ctrl |= FLOW_CTRL_TX;
+ 
+-		case FLOW_CTRL_RX:
+-		case FLOW_CTRL_BOTH:
+-			bp->pause_mode = PAUSE_BOTH;
+-			bp->advertising |= (ADVERTISED_Pause |
+-					    ADVERTISED_Asym_Pause);
+-			break;
+-		}
+-	}
++	if (!(bp->req_autoneg & AUTONEG_FLOW_CTRL) &&
++	    (bp->req_flow_ctrl == FLOW_CTRL_AUTO))
++		bp->req_flow_ctrl = FLOW_CTRL_NONE;
+ 
+-	DP(NETIF_MSG_LINK, "req_autoneg 0x%x  req_flow_ctrl 0x%x\n"
+-	   DP_LEVEL "  pause_mode %d  advertising 0x%x\n",
+-	   bp->req_autoneg, bp->req_flow_ctrl, bp->pause_mode,
+-	   bp->advertising);
++	DP(NETIF_MSG_LINK, "req_autoneg 0x%x  req_flow_ctrl 0x%x\n",
++	   bp->req_autoneg, bp->req_flow_ctrl);
+ 
+ 	bnx2x_stop_stats(bp);
+ 	bnx2x_link_initialize(bp);
+@@ -7906,81 +8836,87 @@ static void bnx2x_self_test(struct net_device *dev,
+ static struct {
+ 	char string[ETH_GSTRING_LEN];
+ } bnx2x_stats_str_arr[BNX2X_NUM_STATS] = {
+-	{ "rx_bytes"},  			 /*  0 */
+-	{ "rx_error_bytes"},    		 /*  1 */
+-	{ "tx_bytes"},  			 /*  2 */
+-	{ "tx_error_bytes"},    		 /*  3 */
+-	{ "rx_ucast_packets"},  		 /*  4 */
+-	{ "rx_mcast_packets"},  		 /*  5 */
+-	{ "rx_bcast_packets"},  		 /*  6 */
+-	{ "tx_ucast_packets"},  		 /*  7 */
+-	{ "tx_mcast_packets"},  		 /*  8 */
+-	{ "tx_bcast_packets"},  		 /*  9 */
+-	{ "tx_mac_errors"},     		 /* 10 */
+-	{ "tx_carrier_errors"}, 		 /* 11 */
+-	{ "rx_crc_errors"},     		 /* 12 */
+-	{ "rx_align_errors"},   		 /* 13 */
+-	{ "tx_single_collisions"},      	 /* 14 */
+-	{ "tx_multi_collisions"},       	 /* 15 */
+-	{ "tx_deferred"},       		 /* 16 */
+-	{ "tx_excess_collisions"},      	 /* 17 */
+-	{ "tx_late_collisions"},		 /* 18 */
+-	{ "tx_total_collisions"},       	 /* 19 */
+-	{ "rx_fragments"},      		 /* 20 */
+-	{ "rx_jabbers"},			 /* 21 */
+-	{ "rx_undersize_packets"},      	 /* 22 */
+-	{ "rx_oversize_packets"},       	 /* 23 */
+-	{ "rx_xon_frames"},     		 /* 24 */
+-	{ "rx_xoff_frames"},    		 /* 25 */
+-	{ "tx_xon_frames"},     		 /* 26 */
+-	{ "tx_xoff_frames"},    		 /* 27 */
+-	{ "rx_mac_ctrl_frames"},		 /* 28 */
+-	{ "rx_filtered_packets"},       	 /* 29 */
+-	{ "rx_discards"},       		 /* 30 */
++	{ "rx_bytes"},
++	{ "rx_error_bytes"},
++	{ "tx_bytes"},
++	{ "tx_error_bytes"},
++	{ "rx_ucast_packets"},
++	{ "rx_mcast_packets"},
++	{ "rx_bcast_packets"},
++	{ "tx_ucast_packets"},
++	{ "tx_mcast_packets"},
++	{ "tx_bcast_packets"},
++	{ "tx_mac_errors"},	/* 10 */
++	{ "tx_carrier_errors"},
++	{ "rx_crc_errors"},
++	{ "rx_align_errors"},
++	{ "tx_single_collisions"},
++	{ "tx_multi_collisions"},
++	{ "tx_deferred"},
++	{ "tx_excess_collisions"},
++	{ "tx_late_collisions"},
++	{ "tx_total_collisions"},
++	{ "rx_fragments"},	/* 20 */
++	{ "rx_jabbers"},
++	{ "rx_undersize_packets"},
++	{ "rx_oversize_packets"},
++	{ "rx_xon_frames"},
++	{ "rx_xoff_frames"},
++	{ "tx_xon_frames"},
++	{ "tx_xoff_frames"},
++	{ "rx_mac_ctrl_frames"},
++	{ "rx_filtered_packets"},
++	{ "rx_discards"},	/* 30 */
++	{ "brb_discard"},
++	{ "brb_truncate"},
++	{ "xxoverflow"}
+ };
+ 
+ #define STATS_OFFSET32(offset_name) \
+ 	(offsetof(struct bnx2x_eth_stats, offset_name) / 4)
+ 
+ static unsigned long bnx2x_stats_offset_arr[BNX2X_NUM_STATS] = {
+-	STATS_OFFSET32(total_bytes_received_hi),		     /*  0 */
+-	STATS_OFFSET32(stat_IfHCInBadOctets_hi),		     /*  1 */
+-	STATS_OFFSET32(total_bytes_transmitted_hi),     	     /*  2 */
+-	STATS_OFFSET32(stat_IfHCOutBadOctets_hi),       	     /*  3 */
+-	STATS_OFFSET32(total_unicast_packets_received_hi),           /*  4 */
+-	STATS_OFFSET32(total_multicast_packets_received_hi),         /*  5 */
+-	STATS_OFFSET32(total_broadcast_packets_received_hi),         /*  6 */
+-	STATS_OFFSET32(total_unicast_packets_transmitted_hi),        /*  7 */
+-	STATS_OFFSET32(total_multicast_packets_transmitted_hi),      /*  8 */
+-	STATS_OFFSET32(total_broadcast_packets_transmitted_hi),      /*  9 */
+-	STATS_OFFSET32(stat_Dot3statsInternalMacTransmitErrors),     /* 10 */
+-	STATS_OFFSET32(stat_Dot3StatsCarrierSenseErrors),            /* 11 */
+-	STATS_OFFSET32(crc_receive_errors),     		     /* 12 */
+-	STATS_OFFSET32(alignment_errors),       		     /* 13 */
+-	STATS_OFFSET32(single_collision_transmit_frames),            /* 14 */
+-	STATS_OFFSET32(multiple_collision_transmit_frames),          /* 15 */
+-	STATS_OFFSET32(stat_Dot3StatsDeferredTransmissions),         /* 16 */
+-	STATS_OFFSET32(excessive_collision_frames),     	     /* 17 */
+-	STATS_OFFSET32(late_collision_frames),  		     /* 18 */
+-	STATS_OFFSET32(number_of_bugs_found_in_stats_spec),          /* 19 */
+-	STATS_OFFSET32(runt_packets_received),  		     /* 20 */
+-	STATS_OFFSET32(jabber_packets_received),		     /* 21 */
+-	STATS_OFFSET32(error_runt_packets_received),    	     /* 22 */
+-	STATS_OFFSET32(error_jabber_packets_received),  	     /* 23 */
+-	STATS_OFFSET32(pause_xon_frames_received),      	     /* 24 */
+-	STATS_OFFSET32(pause_xoff_frames_received),     	     /* 25 */
+-	STATS_OFFSET32(pause_xon_frames_transmitted),   	     /* 26 */
+-	STATS_OFFSET32(pause_xoff_frames_transmitted),  	     /* 27 */
+-	STATS_OFFSET32(control_frames_received),		     /* 28 */
+-	STATS_OFFSET32(mac_filter_discard),     		     /* 29 */
+-	STATS_OFFSET32(no_buff_discard),			     /* 30 */
++	STATS_OFFSET32(total_bytes_received_hi),
++	STATS_OFFSET32(stat_IfHCInBadOctets_hi),
++	STATS_OFFSET32(total_bytes_transmitted_hi),
++	STATS_OFFSET32(stat_IfHCOutBadOctets_hi),
++	STATS_OFFSET32(total_unicast_packets_received_hi),
++	STATS_OFFSET32(total_multicast_packets_received_hi),
++	STATS_OFFSET32(total_broadcast_packets_received_hi),
++	STATS_OFFSET32(total_unicast_packets_transmitted_hi),
++	STATS_OFFSET32(total_multicast_packets_transmitted_hi),
++	STATS_OFFSET32(total_broadcast_packets_transmitted_hi),
++	STATS_OFFSET32(stat_Dot3statsInternalMacTransmitErrors), /* 10 */
++	STATS_OFFSET32(stat_Dot3StatsCarrierSenseErrors),
++	STATS_OFFSET32(crc_receive_errors),
++	STATS_OFFSET32(alignment_errors),
++	STATS_OFFSET32(single_collision_transmit_frames),
++	STATS_OFFSET32(multiple_collision_transmit_frames),
++	STATS_OFFSET32(stat_Dot3StatsDeferredTransmissions),
++	STATS_OFFSET32(excessive_collision_frames),
++	STATS_OFFSET32(late_collision_frames),
++	STATS_OFFSET32(number_of_bugs_found_in_stats_spec),
++	STATS_OFFSET32(runt_packets_received),			/* 20 */
++	STATS_OFFSET32(jabber_packets_received),
++	STATS_OFFSET32(error_runt_packets_received),
++	STATS_OFFSET32(error_jabber_packets_received),
++	STATS_OFFSET32(pause_xon_frames_received),
++	STATS_OFFSET32(pause_xoff_frames_received),
++	STATS_OFFSET32(pause_xon_frames_transmitted),
++	STATS_OFFSET32(pause_xoff_frames_transmitted),
++	STATS_OFFSET32(control_frames_received),
++	STATS_OFFSET32(mac_filter_discard),
++	STATS_OFFSET32(no_buff_discard),			/* 30 */
++	STATS_OFFSET32(brb_discard),
++	STATS_OFFSET32(brb_truncate_discard),
++	STATS_OFFSET32(xxoverflow_discard)
+ };
+ 
+ static u8 bnx2x_stats_len_arr[BNX2X_NUM_STATS] = {
+ 	8, 0, 8, 0, 8, 8, 8, 8, 8, 8,
+ 	4, 0, 4, 4, 4, 4, 4, 4, 4, 4,
+ 	4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
+-	4,
++	4, 4, 4, 4
+ };
+ 
+ static void bnx2x_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
+@@ -8138,9 +9074,7 @@ static int bnx2x_set_power_state(struct bnx2x *bp, pci_power_t state)
+  * net_device service functions
+  */
+ 
+-/* Called with rtnl_lock from vlan functions and also netif_tx_lock
+- * from set_multicast.
+- */
++/* called with netif_tx_lock from set_multicast */
+ static void bnx2x_set_rx_mode(struct net_device *dev)
+ {
+ 	struct bnx2x *bp = netdev_priv(dev);
+@@ -8314,7 +9248,7 @@ static int bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			       ETH_TX_BD_ETH_ADDR_TYPE_SHIFT);
+ 	tx_bd->general_data |= 1; /* header nbd */
+ 
+-	/* remeber the first bd of the packet */
++	/* remember the first bd of the packet */
+ 	tx_buf->first_bd = bd_prod;
+ 
+ 	DP(NETIF_MSG_TX_QUEUED,
+@@ -8334,7 +9268,7 @@ static int bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 		/* for now NS flag is not used in Linux */
+ 		pbd->global_data = (len |
+-				    ((skb->protocol == ETH_P_8021Q) <<
++				    ((skb->protocol == ntohs(ETH_P_8021Q)) <<
+ 				     ETH_TX_PARSE_BD_LLC_SNAP_EN_SHIFT));
+ 		pbd->ip_hlen = ip_hdrlen(skb) / 2;
+ 		pbd->total_hlen = cpu_to_le16(len + pbd->ip_hlen);
+@@ -8343,7 +9277,7 @@ static int bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 			tx_bd->bd_flags.as_bitfield |=
+ 						ETH_TX_BD_FLAGS_TCP_CSUM;
+-			pbd->tcp_flags = htonl(tcp_flag_word(skb)) & 0xFFFF;
++			pbd->tcp_flags = pbd_tcp_flags(skb);
+ 			pbd->total_hlen += cpu_to_le16(tcp_hdrlen(skb) / 2);
+ 			pbd->tcp_pseudo_csum = swab16(th->check);
+ 
+@@ -8387,7 +9321,7 @@ static int bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	if (skb_shinfo(skb)->gso_size &&
+ 	    (skb->len > (bp->dev->mtu + ETH_HLEN))) {
+-		int hlen = 2 * le32_to_cpu(pbd->total_hlen);
++		int hlen = 2 * le16_to_cpu(pbd->total_hlen);
+ 
+ 		DP(NETIF_MSG_TX_QUEUED,
+ 		   "TSO packet len %d  hlen %d  total len %d  tso size %d\n",
+@@ -8427,7 +9361,7 @@ static int bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 			tx_bd->vlan = cpu_to_le16(pkt_prod);
+ 			/* this marks the bd
+ 			 * as one that has no individual mapping
+-			 * the FW ignors this flag in a bd not maked start
++			 * the FW ignores this flag in a bd not marked start
+ 			 */
+ 			tx_bd->bd_flags.as_bitfield = ETH_TX_BD_FLAGS_SW_LSO;
+ 			DP(NETIF_MSG_TX_QUEUED,
+@@ -8504,9 +9438,11 @@ static int bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	DP(NETIF_MSG_TX_QUEUED, "doorbell: nbd %u  bd %d\n", nbd, bd_prod);
+ 
+-	fp->hw_tx_prods->bds_prod += cpu_to_le16(nbd);
++	fp->hw_tx_prods->bds_prod =
++		cpu_to_le16(le16_to_cpu(fp->hw_tx_prods->bds_prod) + nbd);
+ 	mb(); /* FW restriction: must not reorder writing nbd and packets */
+-	fp->hw_tx_prods->packets_prod += cpu_to_le32(1);
++	fp->hw_tx_prods->packets_prod =
++		cpu_to_le32(le32_to_cpu(fp->hw_tx_prods->packets_prod) + 1);
+ 	DOORBELL(bp, fp_index, 0);
+ 
+ 	mmiowb();
+@@ -8525,11 +9461,6 @@ static int bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	return NETDEV_TX_OK;
+ }
+ 
+-static struct net_device_stats *bnx2x_get_stats(struct net_device *dev)
+-{
+-	return &dev->stats;
+-}
+-
+ /* Called with rtnl_lock */
+ static int bnx2x_open(struct net_device *dev)
+ {
+@@ -8543,16 +9474,13 @@ static int bnx2x_open(struct net_device *dev)
+ /* Called with rtnl_lock */
+ static int bnx2x_close(struct net_device *dev)
+ {
+-	int rc;
+ 	struct bnx2x *bp = netdev_priv(dev);
+ 
+ 	/* Unload the driver, release IRQs */
+-	rc = bnx2x_nic_unload(bp, 1);
+-	if (rc) {
+-		BNX2X_ERR("bnx2x_nic_unload failed: %d\n", rc);
+-		return rc;
+-	}
+-	bnx2x_set_power_state(bp, PCI_D3hot);
++	bnx2x_nic_unload(bp, 1);
++
++	if (!CHIP_REV_IS_SLOW(bp))
++		bnx2x_set_power_state(bp, PCI_D3hot);
+ 
+ 	return 0;
+ }
+@@ -8584,7 +9512,7 @@ static int bnx2x_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 	case SIOCGMIIPHY:
+ 		data->phy_id = bp->phy_addr;
+ 
+-		/* fallthru */
++		/* fallthrough */
+ 	case SIOCGMIIREG: {
+ 		u32 mii_regval;
+ 
+@@ -8633,7 +9561,7 @@ static int bnx2x_change_mtu(struct net_device *dev, int new_mtu)
+ 		return -EINVAL;
+ 
+ 	/* This does not race with packet allocation
+-	 * because the actuall alloc size is
++	 * because the actual alloc size is
+ 	 * only updated as part of load
+ 	 */
+ 	dev->mtu = new_mtu;
+@@ -8666,7 +9594,7 @@ static void bnx2x_vlan_rx_register(struct net_device *dev,
+ 
+ 	bp->vlgrp = vlgrp;
+ 	if (netif_running(dev))
+-		bnx2x_set_rx_mode(dev);
++		bnx2x_set_client_config(bp);
+ }
+ #endif
+ 
+@@ -8695,14 +9623,18 @@ static void bnx2x_reset_task(struct work_struct *work)
+ 	if (!netif_running(bp->dev))
+ 		return;
+ 
+-	bp->in_reset_task = 1;
++	rtnl_lock();
+ 
+-	bnx2x_netif_stop(bp);
++	if (bp->state != BNX2X_STATE_OPEN) {
++		DP(NETIF_MSG_TX_ERR, "state is %x, returning\n", bp->state);
++		goto reset_task_exit;
++	}
+ 
+ 	bnx2x_nic_unload(bp, 0);
+ 	bnx2x_nic_load(bp, 0);
+ 
+-	bp->in_reset_task = 0;
++reset_task_exit:
++	rtnl_unlock();
+ }
+ 
+ static int __devinit bnx2x_init_board(struct pci_dev *pdev,
+@@ -8783,8 +9715,6 @@ static int __devinit bnx2x_init_board(struct pci_dev *pdev,
+ 
+ 	spin_lock_init(&bp->phy_lock);
+ 
+-	bp->in_reset_task = 0;
+-
+ 	INIT_WORK(&bp->reset_task, bnx2x_reset_task);
+ 	INIT_WORK(&bp->sp_task, bnx2x_sp_task);
+ 
+@@ -8813,7 +9743,7 @@ static int __devinit bnx2x_init_board(struct pci_dev *pdev,
+ 	bnx2x_get_hwinfo(bp);
+ 
+ 	if (CHIP_REV(bp) == CHIP_REV_FPGA) {
+-		printk(KERN_ERR PFX "FPGA detacted. MCP disabled,"
++		printk(KERN_ERR PFX "FPGA detected. MCP disabled,"
+ 		       " will only init first device\n");
+ 		onefunc = 1;
+ 		nomcp = 1;
+@@ -8882,14 +9812,32 @@ err_out:
+ 	return rc;
+ }
+ 
++static int __devinit bnx2x_get_pcie_width(struct bnx2x *bp)
++{
++	u32 val = REG_RD(bp, PCICFG_OFFSET + PCICFG_LINK_CONTROL);
++
++	val = (val & PCICFG_LINK_WIDTH) >> PCICFG_LINK_WIDTH_SHIFT;
++	return val;
++}
++
++/* return value of 1=2.5GHz 2=5GHz */
++static int __devinit bnx2x_get_pcie_speed(struct bnx2x *bp)
++{
++	u32 val = REG_RD(bp, PCICFG_OFFSET + PCICFG_LINK_CONTROL);
++
++	val = (val & PCICFG_LINK_SPEED) >> PCICFG_LINK_SPEED_SHIFT;
++	return val;
++}
++
+ static int __devinit bnx2x_init_one(struct pci_dev *pdev,
+ 				    const struct pci_device_id *ent)
+ {
+ 	static int version_printed;
+ 	struct net_device *dev = NULL;
+ 	struct bnx2x *bp;
+-	int rc, i;
++	int rc;
+ 	int port = PCI_FUNC(pdev->devfn);
++	DECLARE_MAC_BUF(mac);
+ 
+ 	if (version_printed++ == 0)
+ 		printk(KERN_INFO "%s", version);
+@@ -8906,6 +9854,7 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev,
+ 
+ 	if (port && onefunc) {
+ 		printk(KERN_ERR PFX "second function disabled. exiting\n");
++		free_netdev(dev);
+ 		return 0;
+ 	}
+ 
+@@ -8918,7 +9867,6 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev,
+ 	dev->hard_start_xmit = bnx2x_start_xmit;
+ 	dev->watchdog_timeo = TX_TIMEOUT;
+ 
+-	dev->get_stats = bnx2x_get_stats;
+ 	dev->ethtool_ops = &bnx2x_ethtool_ops;
+ 	dev->open = bnx2x_open;
+ 	dev->stop = bnx2x_close;
+@@ -8944,7 +9892,7 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev,
+ 
+ 	rc = register_netdev(dev);
+ 	if (rc) {
+-		printk(KERN_ERR PFX "Cannot register net device\n");
++		dev_err(&pdev->dev, "Cannot register net device\n");
+ 		if (bp->regview)
+ 			iounmap(bp->regview);
+ 		if (bp->doorbells)
+@@ -8959,32 +9907,30 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev,
+ 	pci_set_drvdata(pdev, dev);
+ 
+ 	bp->name = board_info[ent->driver_data].name;
+-	printk(KERN_INFO "%s: %s (%c%d) PCI%s %s %dMHz "
+-	       "found at mem %lx, IRQ %d, ",
+-	       dev->name, bp->name,
++	printk(KERN_INFO "%s: %s (%c%d) PCI-E x%d %s found at mem %lx,"
++	       " IRQ %d, ", dev->name, bp->name,
+ 	       ((CHIP_ID(bp) & 0xf000) >> 12) + 'A',
+ 	       ((CHIP_ID(bp) & 0x0ff0) >> 4),
+-	       ((bp->flags & PCIX_FLAG) ? "-X" : ""),
+-	       ((bp->flags & PCI_32BIT_FLAG) ? "32-bit" : "64-bit"),
+-	       bp->bus_speed_mhz,
+-	       dev->base_addr,
+-	       bp->pdev->irq);
+-
+-	printk("node addr ");
+-	for (i = 0; i < 6; i++)
+-		printk("%2.2x", dev->dev_addr[i]);
+-	printk("\n");
+-
++	       bnx2x_get_pcie_width(bp),
++	       (bnx2x_get_pcie_speed(bp) == 2) ? "5GHz (Gen2)" : "2.5GHz",
++	       dev->base_addr, bp->pdev->irq);
++	printk(KERN_CONT "node addr %s\n", print_mac(mac, dev->dev_addr));
+ 	return 0;
+ }
+ 
+ static void __devexit bnx2x_remove_one(struct pci_dev *pdev)
+ {
+ 	struct net_device *dev = pci_get_drvdata(pdev);
+-	struct bnx2x *bp = netdev_priv(dev);
++	struct bnx2x *bp;
++
++	if (!dev) {
++		/* we get here if init_one() fails */
++		printk(KERN_ERR PFX "BAD net device from bnx2x_init_one\n");
++		return;
++	}
++
++	bp = netdev_priv(dev);
+ 
+-	flush_scheduled_work();
+-	/*tasklet_kill(&bp->sp_task);*/
+ 	unregister_netdev(dev);
+ 
+ 	if (bp->regview)
+@@ -9002,34 +9948,43 @@ static void __devexit bnx2x_remove_one(struct pci_dev *pdev)
+ static int bnx2x_suspend(struct pci_dev *pdev, pm_message_t state)
+ {
+ 	struct net_device *dev = pci_get_drvdata(pdev);
+-	struct bnx2x *bp = netdev_priv(dev);
+-	int rc;
++	struct bnx2x *bp;
++
++	if (!dev)
++		return 0;
+ 
+ 	if (!netif_running(dev))
+ 		return 0;
+ 
+-	rc = bnx2x_nic_unload(bp, 0);
+-	if (!rc)
+-		return rc;
++	bp = netdev_priv(dev);
++
++	bnx2x_nic_unload(bp, 0);
+ 
+ 	netif_device_detach(dev);
+-	pci_save_state(pdev);
+ 
++	pci_save_state(pdev);
+ 	bnx2x_set_power_state(bp, pci_choose_state(pdev, state));
++
+ 	return 0;
+ }
+ 
+ static int bnx2x_resume(struct pci_dev *pdev)
+ {
+ 	struct net_device *dev = pci_get_drvdata(pdev);
+-	struct bnx2x *bp = netdev_priv(dev);
++	struct bnx2x *bp;
+ 	int rc;
+ 
++	if (!dev) {
++		printk(KERN_ERR PFX "BAD net device from bnx2x_init_one\n");
++		return -ENODEV;
++	}
++
+ 	if (!netif_running(dev))
+ 		return 0;
+ 
+-	pci_restore_state(pdev);
++	bp = netdev_priv(dev);
+ 
++	pci_restore_state(pdev);
+ 	bnx2x_set_power_state(bp, PCI_D0);
+ 	netif_device_attach(dev);
+ 
+diff --git a/drivers/net/bnx2x.h b/drivers/net/bnx2x.h
+index 4f7ae6f..4f0c0d3 100644
+--- a/drivers/net/bnx2x.h
++++ b/drivers/net/bnx2x.h
+@@ -1,6 +1,6 @@
+ /* bnx2x.h: Broadcom Everest network driver.
+  *
+- * Copyright (c) 2007 Broadcom Corporation
++ * Copyright (c) 2007-2008 Broadcom Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License as published by
+@@ -24,6 +24,8 @@
+ #define BNX2X_MSG_STATS 		0x20000 /* was: NETIF_MSG_TIMER */
+ #define NETIF_MSG_NVM   		0x40000 /* was: NETIF_MSG_HW */
+ #define NETIF_MSG_DMAE  		0x80000 /* was: NETIF_MSG_HW */
++#define BNX2X_MSG_SP			0x100000 /* was: NETIF_MSG_INTR */
++#define BNX2X_MSG_FP			0x200000 /* was: NETIF_MSG_INTR */
+ 
+ #define DP_LEVEL			KERN_NOTICE     /* was: KERN_DEBUG */
+ 
+@@ -40,6 +42,12 @@
+ 		__LINE__, bp->dev?(bp->dev->name):"?", ##__args); \
+ 	} while (0)
+ 
++/* for logging (never masked) */
++#define BNX2X_LOG(__fmt, __args...) do { \
++	printk(KERN_NOTICE "[%s:%d(%s)]" __fmt, __FUNCTION__, \
++		__LINE__, bp->dev?(bp->dev->name):"?", ##__args); \
++	} while (0)
++
+ /* before we have a dev->name use dev_info() */
+ #define BNX2X_DEV_INFO(__fmt, __args...) do { \
+ 	if (bp->msglevel & NETIF_MSG_PROBE) \
+@@ -423,8 +431,6 @@ struct bnx2x_fastpath {
+ #define BNX2X_FP_STATE_OPEN     	0xa0000
+ #define BNX2X_FP_STATE_HALTING  	0xb0000
+ #define BNX2X_FP_STATE_HALTED   	0xc0000
+-#define BNX2X_FP_STATE_DELETED  	0xd0000
+-#define BNX2X_FP_STATE_CLOSE_IRQ	0xe0000
+ 
+ 	int     		index;
+ 
+@@ -505,7 +511,6 @@ struct bnx2x {
+ 	struct eth_spe  	*spq;
+ 	dma_addr_t      	spq_mapping;
+ 	u16     		spq_prod_idx;
+-	u16     		dsb_sp_prod_idx;
+ 	struct eth_spe  	*spq_prod_bd;
+ 	struct eth_spe  	*spq_last_bd;
+ 	u16     		*dsb_sp_prod;
+@@ -517,7 +522,7 @@ struct bnx2x {
+ 	 */
+ 	u8      		stat_pending;
+ 
+-	/* End of fileds used in the performance code paths */
++	/* End of fields used in the performance code paths */
+ 
+ 	int     		panic;
+ 	int     		msglevel;
+@@ -540,8 +545,6 @@ struct bnx2x {
+ 	spinlock_t      	phy_lock;
+ 
+ 	struct work_struct      reset_task;
+-	u16     		in_reset_task;
+-
+ 	struct work_struct      sp_task;
+ 
+ 	struct timer_list       timer;
+@@ -555,7 +558,6 @@ struct bnx2x {
+ #define CHIP_ID(bp)     		(((bp)->chip_id) & 0xfffffff0)
+ 
+ #define CHIP_NUM(bp)    		(((bp)->chip_id) & 0xffff0000)
+-#define CHIP_NUM_5710   		0x57100000
+ 
+ #define CHIP_REV(bp)    		(((bp)->chip_id) & 0x0000f000)
+ #define CHIP_REV_Ax     		0x00000000
+@@ -574,7 +576,8 @@ struct bnx2x {
+ 	u32     		fw_mb;
+ 
+ 	u32     		hw_config;
+-	u32     		serdes_config;
++	u32			board;
++	u32			serdes_config;
+ 	u32     		lane_config;
+ 	u32     		ext_phy_config;
+ #define XGXS_EXT_PHY_TYPE(bp)   	(bp->ext_phy_config & \
+@@ -595,11 +598,11 @@ struct bnx2x {
+ 	u8      		tx_lane_swap;
+ 
+ 	u8      		link_up;
++	u8			phy_link_up;
+ 
+ 	u32     		supported;
+ /* link settings - missing defines */
+ #define SUPPORTED_2500baseT_Full	(1 << 15)
+-#define SUPPORTED_CX4   		(1 << 16)
+ 
+ 	u32     		phy_flags;
+ /*#define PHY_SERDES_FLAG       		0x1*/
+@@ -644,16 +647,9 @@ struct bnx2x {
+ #define FLOW_CTRL_BOTH  		PORT_FEATURE_FLOW_CONTROL_BOTH
+ #define FLOW_CTRL_NONE  		PORT_FEATURE_FLOW_CONTROL_NONE
+ 
+-	u32     		pause_mode;
+-#define PAUSE_NONE      		0
+-#define PAUSE_SYMMETRIC 		1
+-#define PAUSE_ASYMMETRIC		2
+-#define PAUSE_BOTH      		3
+-
+ 	u32     		advertising;
+ /* link settings - missing defines */
+ #define ADVERTISED_2500baseT_Full       (1 << 15)
+-#define ADVERTISED_CX4  		(1 << 16)
+ 
+ 	u32     		link_status;
+ 	u32     		line_speed;
+@@ -667,6 +663,8 @@ struct bnx2x {
+ #define NVRAM_TIMEOUT_COUNT     	30000
+ #define NVRAM_PAGE_SIZE 		256
+ 
++	u8			wol;
++
+ 	int     		rx_ring_size;
+ 
+ 	u16     		tx_quick_cons_trip_int;
+@@ -718,9 +716,6 @@ struct bnx2x {
+ #endif
+ 
+ 	char    		*name;
+-	u16     		bus_speed_mhz;
+-	u8      		wol;
+-	u8      		pad;
+ 
+ 	/* used to synchronize stats collecting */
+ 	int     		stats_state;
+@@ -856,8 +851,8 @@ struct bnx2x {
+ #define MAX_SPQ_PENDING 		8
+ 
+ 
+-#define BNX2X_NUM_STATS 		31
+-#define BNX2X_NUM_TESTS 		2
++#define BNX2X_NUM_STATS			34
++#define BNX2X_NUM_TESTS			1
+ 
+ 
+ #define DPM_TRIGER_TYPE 		0x40
+@@ -867,6 +862,15 @@ struct bnx2x {
+ 		       DPM_TRIGER_TYPE); \
+ 	} while (0)
+ 
++/* PCIE link and speed */
++#define PCICFG_LINK_WIDTH		0x1f00000
++#define PCICFG_LINK_WIDTH_SHIFT		20
++#define PCICFG_LINK_SPEED		0xf0000
++#define PCICFG_LINK_SPEED_SHIFT		16
++
++#define BMAC_CONTROL_RX_ENABLE		2
++
++#define pbd_tcp_flags(skb)  	(ntohl(tcp_flag_word(tcp_hdr(skb)))>>16 & 0xff)
+ 
+ /* stuff added to make the code fit 80Col */
+ 
+@@ -939,13 +943,13 @@ struct bnx2x {
+ #define LINK_16GTFD     		LINK_STATUS_SPEED_AND_DUPLEX_16GTFD
+ #define LINK_16GXFD     		LINK_STATUS_SPEED_AND_DUPLEX_16GXFD
+ 
+-#define NIG_STATUS_INTERRUPT_XGXS0_LINK10G \
++#define NIG_STATUS_XGXS0_LINK10G \
+ 		NIG_STATUS_INTERRUPT_PORT0_REG_STATUS_XGXS0_LINK10G
+-#define NIG_XGXS0_LINK_STATUS \
++#define NIG_STATUS_XGXS0_LINK_STATUS \
+ 		NIG_STATUS_INTERRUPT_PORT0_REG_STATUS_XGXS0_LINK_STATUS
+-#define NIG_XGXS0_LINK_STATUS_SIZE \
++#define NIG_STATUS_XGXS0_LINK_STATUS_SIZE \
+ 		NIG_STATUS_INTERRUPT_PORT0_REG_STATUS_XGXS0_LINK_STATUS_SIZE
+-#define NIG_SERDES0_LINK_STATUS \
++#define NIG_STATUS_SERDES0_LINK_STATUS \
+ 		NIG_STATUS_INTERRUPT_PORT0_REG_STATUS_SERDES0_LINK_STATUS
+ #define NIG_MASK_MI_INT \
+ 		NIG_MASK_INTERRUPT_PORT0_REG_MASK_EMAC0_MISC_MI_INT
+diff --git a/drivers/net/bnx2x_fw_defs.h b/drivers/net/bnx2x_fw_defs.h
+index 62a6eb8..3b96890 100644
+--- a/drivers/net/bnx2x_fw_defs.h
++++ b/drivers/net/bnx2x_fw_defs.h
+@@ -1,6 +1,6 @@
+ /* bnx2x_fw_defs.h: Broadcom Everest network driver.
+  *
+- * Copyright (c) 2007 Broadcom Corporation
++ * Copyright (c) 2007-2008 Broadcom Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License as published by
+diff --git a/drivers/net/bnx2x_hsi.h b/drivers/net/bnx2x_hsi.h
+index 6fd959c..b21075c 100644
+--- a/drivers/net/bnx2x_hsi.h
++++ b/drivers/net/bnx2x_hsi.h
+@@ -1,6 +1,6 @@
+ /* bnx2x_hsi.h: Broadcom Everest network driver.
+  *
+- * Copyright (c) 2007 Broadcom Corporation
++ * Copyright (c) 2007-2008 Broadcom Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License as published by
+@@ -8,169 +8,9 @@
+  */
+ 
+ 
+-#define FUNC_0				0
+-#define FUNC_1				1
+-#define FUNC_MAX			2
+-
+-
+-/* This value (in milliseconds) determines the frequency of the driver
+- * issuing the PULSE message code.  The firmware monitors this periodic
+- * pulse to determine when to switch to an OS-absent mode. */
+-#define DRV_PULSE_PERIOD_MS		250
+-
+-/* This value (in milliseconds) determines how long the driver should
+- * wait for an acknowledgement from the firmware before timing out.  Once
+- * the firmware has timed out, the driver will assume there is no firmware
+- * running and there won't be any firmware-driver synchronization during a
+- * driver reset. */
+-#define FW_ACK_TIME_OUT_MS		5000
+-
+-#define FW_ACK_POLL_TIME_MS		1
+-
+-#define FW_ACK_NUM_OF_POLL	(FW_ACK_TIME_OUT_MS/FW_ACK_POLL_TIME_MS)
+-
+-/* LED Blink rate that will achieve ~15.9Hz */
+-#define LED_BLINK_RATE_VAL		480
+-
+-/****************************************************************************
+- * Driver <-> FW Mailbox						    *
+- ****************************************************************************/
+-struct drv_fw_mb {
+-	u32 drv_mb_header;
+-#define DRV_MSG_CODE_MASK			0xffff0000
+-#define DRV_MSG_CODE_LOAD_REQ			0x10000000
+-#define DRV_MSG_CODE_LOAD_DONE			0x11000000
+-#define DRV_MSG_CODE_UNLOAD_REQ_WOL_EN		0x20000000
+-#define DRV_MSG_CODE_UNLOAD_REQ_WOL_DIS 	0x20010000
+-#define DRV_MSG_CODE_UNLOAD_REQ_WOL_MCP 	0x20020000
+-#define DRV_MSG_CODE_UNLOAD_DONE		0x21000000
+-#define DRV_MSG_CODE_DIAG_ENTER_REQ		0x50000000
+-#define DRV_MSG_CODE_DIAG_EXIT_REQ		0x60000000
+-#define DRV_MSG_CODE_VALIDATE_KEY		0x70000000
+-#define DRV_MSG_CODE_GET_CURR_KEY		0x80000000
+-#define DRV_MSG_CODE_GET_UPGRADE_KEY		0x81000000
+-#define DRV_MSG_CODE_GET_MANUF_KEY		0x82000000
+-#define DRV_MSG_CODE_LOAD_L2B_PRAM		0x90000000
+-
+-#define DRV_MSG_SEQ_NUMBER_MASK 		0x0000ffff
+-
+-	u32 drv_mb_param;
+-
+-	u32 fw_mb_header;
+-#define FW_MSG_CODE_MASK			0xffff0000
+-#define FW_MSG_CODE_DRV_LOAD_COMMON		0x11000000
+-#define FW_MSG_CODE_DRV_LOAD_PORT		0x12000000
+-#define FW_MSG_CODE_DRV_LOAD_REFUSED		0x13000000
+-#define FW_MSG_CODE_DRV_LOAD_DONE		0x14000000
+-#define FW_MSG_CODE_DRV_UNLOAD_COMMON		0x21000000
+-#define FW_MSG_CODE_DRV_UNLOAD_PORT		0x22000000
+-#define FW_MSG_CODE_DRV_UNLOAD_DONE		0x23000000
+-#define FW_MSG_CODE_DIAG_ENTER_DONE		0x50000000
+-#define FW_MSG_CODE_DIAG_REFUSE 		0x51000000
+-#define FW_MSG_CODE_VALIDATE_KEY_SUCCESS	0x70000000
+-#define FW_MSG_CODE_VALIDATE_KEY_FAILURE	0x71000000
+-#define FW_MSG_CODE_GET_KEY_DONE		0x80000000
+-#define FW_MSG_CODE_NO_KEY			0x8f000000
+-#define FW_MSG_CODE_LIC_INFO_NOT_READY		0x8f800000
+-#define FW_MSG_CODE_L2B_PRAM_LOADED		0x90000000
+-#define FW_MSG_CODE_L2B_PRAM_T_LOAD_FAILURE	0x91000000
+-#define FW_MSG_CODE_L2B_PRAM_C_LOAD_FAILURE	0x92000000
+-#define FW_MSG_CODE_L2B_PRAM_X_LOAD_FAILURE	0x93000000
+-#define FW_MSG_CODE_L2B_PRAM_U_LOAD_FAILURE	0x94000000
+-
+-#define FW_MSG_SEQ_NUMBER_MASK			0x0000ffff
+-
+-	u32 fw_mb_param;
+-
+-	u32 link_status;
+-	/* Driver should update this field on any link change event */
+-
+-#define LINK_STATUS_LINK_FLAG_MASK		0x00000001
+-#define LINK_STATUS_LINK_UP			0x00000001
+-#define LINK_STATUS_SPEED_AND_DUPLEX_MASK	0x0000001E
+-#define LINK_STATUS_SPEED_AND_DUPLEX_AN_NOT_COMPLETE	(0<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_10THD		(1<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_10TFD		(2<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_100TXHD		(3<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_100T4		(4<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_100TXFD		(5<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD		(6<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD		(7<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_1000XFD		(7<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_2500THD		(8<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_2500TFD		(9<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_2500XFD		(9<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_10GTFD		(10<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_10GXFD		(10<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_12GTFD		(11<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_12GXFD		(11<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_12_5GTFD		(12<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_12_5GXFD		(12<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_13GTFD		(13<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_13GXFD		(13<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_15GTFD		(14<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_15GXFD		(14<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_16GTFD		(15<<1)
+-#define LINK_STATUS_SPEED_AND_DUPLEX_16GXFD		(15<<1)
+-
+-#define LINK_STATUS_AUTO_NEGOTIATE_FLAG_MASK		0x00000020
+-#define LINK_STATUS_AUTO_NEGOTIATE_ENABLED		0x00000020
+-
+-#define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE		0x00000040
+-#define LINK_STATUS_PARALLEL_DETECTION_FLAG_MASK	0x00000080
+-#define LINK_STATUS_PARALLEL_DETECTION_USED		0x00000080
+-
+-#define LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE	0x00000200
+-#define LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE	0x00000400
+-#define LINK_STATUS_LINK_PARTNER_100T4_CAPABLE		0x00000800
+-#define LINK_STATUS_LINK_PARTNER_100TXFD_CAPABLE	0x00001000
+-#define LINK_STATUS_LINK_PARTNER_100TXHD_CAPABLE	0x00002000
+-#define LINK_STATUS_LINK_PARTNER_10TFD_CAPABLE		0x00004000
+-#define LINK_STATUS_LINK_PARTNER_10THD_CAPABLE		0x00008000
+-
+-#define LINK_STATUS_TX_FLOW_CONTROL_FLAG_MASK		0x00010000
+-#define LINK_STATUS_TX_FLOW_CONTROL_ENABLED		0x00010000
+-
+-#define LINK_STATUS_RX_FLOW_CONTROL_FLAG_MASK		0x00020000
+-#define LINK_STATUS_RX_FLOW_CONTROL_ENABLED		0x00020000
+-
+-#define LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK	0x000C0000
+-#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE	(0<<18)
+-#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE	(1<<18)
+-#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE	(2<<18)
+-#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE		(3<<18)
+-
+-#define LINK_STATUS_SERDES_LINK 			0x00100000
+-
+-#define LINK_STATUS_LINK_PARTNER_2500XFD_CAPABLE	0x00200000
+-#define LINK_STATUS_LINK_PARTNER_2500XHD_CAPABLE	0x00400000
+-#define LINK_STATUS_LINK_PARTNER_10GXFD_CAPABLE 	0x00800000
+-#define LINK_STATUS_LINK_PARTNER_12GXFD_CAPABLE 	0x01000000
+-#define LINK_STATUS_LINK_PARTNER_12_5GXFD_CAPABLE	0x02000000
+-#define LINK_STATUS_LINK_PARTNER_13GXFD_CAPABLE 	0x04000000
+-#define LINK_STATUS_LINK_PARTNER_15GXFD_CAPABLE 	0x08000000
+-#define LINK_STATUS_LINK_PARTNER_16GXFD_CAPABLE 	0x10000000
+-
+-	u32 drv_pulse_mb;
+-#define DRV_PULSE_SEQ_MASK				0x00007fff
+-#define DRV_PULSE_SYSTEM_TIME_MASK			0xffff0000
+-	/* The system time is in the format of
+-	 * (year-2001)*12*32 + month*32 + day. */
+-#define DRV_PULSE_ALWAYS_ALIVE				0x00008000
+-	/* Indicate to the firmware not to go into the
+-	 * OS-absent when it is not getting driver pulse.
+-	 * This is used for debugging as well for PXE(MBA). */
+-
+-	u32 mcp_pulse_mb;
+-#define MCP_PULSE_SEQ_MASK				0x00007fff
+-#define MCP_PULSE_ALWAYS_ALIVE				0x00008000
+-	/* Indicates to the driver not to assert due to lack
+-	 * of MCP response */
+-#define MCP_EVENT_MASK					0xffff0000
+-#define MCP_EVENT_OTHER_DRIVER_RESET_REQ		0x00010000
+-
+-};
+-
++#define PORT_0				0
++#define PORT_1				1
++#define PORT_MAX			2
+ 
+ /****************************************************************************
+  * Shared HW configuration						    *
+@@ -249,7 +89,7 @@ struct shared_hw_cfg {					 /* NVRAM Offset */
+ #define SHARED_HW_CFG_SMBUS_TIMING_100KHZ	    0x00000000
+ #define SHARED_HW_CFG_SMBUS_TIMING_400KHZ	    0x00001000
+ 
+-#define SHARED_HW_CFG_HIDE_FUNC1		    0x00002000
++#define SHARED_HW_CFG_HIDE_PORT1		    0x00002000
+ 
+ 	u32 power_dissipated;					/* 0x11c */
+ #define SHARED_HW_CFG_POWER_DIS_CMN_MASK	    0xff000000
+@@ -290,6 +130,8 @@ struct shared_hw_cfg {					 /* NVRAM Offset */
+ #define SHARED_HW_CFG_BOARD_TYPE_BCM957710T1015G    0x00000006
+ #define SHARED_HW_CFG_BOARD_TYPE_BCM957710A1020G    0x00000007
+ #define SHARED_HW_CFG_BOARD_TYPE_BCM957710T1003G    0x00000008
++#define SHARED_HW_CFG_BOARD_TYPE_BCM957710A1022G    0x00000009
++#define SHARED_HW_CFG_BOARD_TYPE_BCM957710A1021G    0x0000000a
+ 
+ #define SHARED_HW_CFG_BOARD_VER_MASK		    0xffff0000
+ #define SHARED_HW_CFG_BOARD_VER_SHIFT		    16
+@@ -304,13 +146,12 @@ struct shared_hw_cfg {					 /* NVRAM Offset */
+ 
+ };
+ 
++
+ /****************************************************************************
+  * Port HW configuration						    *
+  ****************************************************************************/
+-struct port_hw_cfg {	/* function 0: 0x12c-0x2bb, function 1: 0x2bc-0x44b */
++struct port_hw_cfg {			    /* port 0: 0x12c  port 1: 0x2bc */
+ 
+-	/* Fields below are port specific (in anticipation of dual port
+-	   devices */
+ 	u32 pci_id;
+ #define PORT_HW_CFG_PCI_VENDOR_ID_MASK		    0xffff0000
+ #define PORT_HW_CFG_PCI_DEVICE_ID_MASK		    0x0000ffff
+@@ -420,6 +261,8 @@ struct port_hw_cfg {	/* function 0: 0x12c-0x2bb, function 1: 0x2bc-0x44b */
+ #define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8706	    0x00000500
+ #define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8276	    0x00000600
+ #define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8481	    0x00000700
++#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_SFX7101	    0x00000800
++#define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_FAILURE	    0x0000fd00
+ #define PORT_HW_CFG_XGXS_EXT_PHY_TYPE_NOT_CONN	    0x0000ff00
+ 
+ #define PORT_HW_CFG_XGXS_EXT_PHY_ADDR_MASK	    0x000000ff
+@@ -462,11 +305,13 @@ struct port_hw_cfg {	/* function 0: 0x12c-0x2bb, function 1: 0x2bc-0x44b */
+ 
+ };
+ 
++
+ /****************************************************************************
+  * Shared Feature configuration 					    *
+  ****************************************************************************/
+ struct shared_feat_cfg {				 /* NVRAM Offset */
+-	u32 bmc_common; 					/* 0x450 */
++
++	u32 config;						/* 0x450 */
+ #define SHARED_FEATURE_BMC_ECHO_MODE_EN 	    0x00000001
+ 
+ };
+@@ -475,7 +320,8 @@ struct shared_feat_cfg {				 /* NVRAM Offset */
+ /****************************************************************************
+  * Port Feature configuration						    *
+  ****************************************************************************/
+-struct port_feat_cfg {	/* function 0: 0x454-0x4c7, function 1: 0x4c8-0x53b */
++struct port_feat_cfg {			    /* port 0: 0x454  port 1: 0x4c8 */
++
+ 	u32 config;
+ #define PORT_FEATURE_BAR1_SIZE_MASK		    0x0000000f
+ #define PORT_FEATURE_BAR1_SIZE_SHIFT		    0
+@@ -609,8 +455,7 @@ struct port_feat_cfg {	/* function 0: 0x454-0x4c7, function 1: 0x4c8-0x53b */
+ #define PORT_FEATURE_SMBUS_ADDR_MASK		    0x000000fe
+ #define PORT_FEATURE_SMBUS_ADDR_SHIFT		    1
+ 
+-	u32 iscsib_boot_cfg;
+-#define PORT_FEATURE_ISCSIB_SKIP_TARGET_BOOT	    0x00000001
++	u32 reserved1;
+ 
+ 	u32 link_config;    /* Used as HW defaults for the driver */
+ #define PORT_FEATURE_CONNECTED_SWITCH_MASK	    0x03000000
+@@ -657,20 +502,201 @@ struct port_feat_cfg {	/* function 0: 0x454-0x4c7, function 1: 0x4c8-0x53b */
+ };
+ 
+ 
++/*****************************************************************************
++ * Device Information							     *
++ *****************************************************************************/
++struct dev_info {						     /* size */
++
++	u32    bc_rev; /* 8 bits each: major, minor, build */	        /* 4 */
++
++	struct shared_hw_cfg	 shared_hw_config;		       /* 40 */
++
++	struct port_hw_cfg	 port_hw_config[PORT_MAX];      /* 400*2=800 */
++
++	struct shared_feat_cfg	 shared_feature_config; 	        /* 4 */
++
++	struct port_feat_cfg	 port_feature_config[PORT_MAX]; /* 116*2=232 */
++
++};
++
++
++#define FUNC_0				0
++#define FUNC_1				1
++#define E1_FUNC_MAX			2
++#define FUNC_MAX			E1_FUNC_MAX
++
++
++/* This value (in milliseconds) determines the frequency of the driver
++ * issuing the PULSE message code.  The firmware monitors this periodic
++ * pulse to determine when to switch to an OS-absent mode. */
++#define DRV_PULSE_PERIOD_MS		250
++
++/* This value (in milliseconds) determines how long the driver should
++ * wait for an acknowledgement from the firmware before timing out.  Once
++ * the firmware has timed out, the driver will assume there is no firmware
++ * running and there won't be any firmware-driver synchronization during a
++ * driver reset. */
++#define FW_ACK_TIME_OUT_MS		5000
++
++#define FW_ACK_POLL_TIME_MS		1
++
++#define FW_ACK_NUM_OF_POLL	(FW_ACK_TIME_OUT_MS/FW_ACK_POLL_TIME_MS)
++
++/* LED Blink rate that will achieve ~15.9Hz */
++#define LED_BLINK_RATE_VAL		480
++
+ /****************************************************************************
+- * Device Information							    *
++ * Driver <-> FW Mailbox						    *
+  ****************************************************************************/
+-struct dev_info {						    /* size */
++struct drv_port_mb {
++
++	u32 link_status;
++	/* Driver should update this field on any link change event */
++
++#define LINK_STATUS_LINK_FLAG_MASK			0x00000001
++#define LINK_STATUS_LINK_UP				0x00000001
++#define LINK_STATUS_SPEED_AND_DUPLEX_MASK		0x0000001E
++#define LINK_STATUS_SPEED_AND_DUPLEX_AN_NOT_COMPLETE	(0<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_10THD		(1<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_10TFD		(2<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_100TXHD		(3<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_100T4		(4<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_100TXFD		(5<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD		(6<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD		(7<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_1000XFD		(7<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_2500THD		(8<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_2500TFD		(9<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_2500XFD		(9<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_10GTFD		(10<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_10GXFD		(10<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_12GTFD		(11<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_12GXFD		(11<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_12_5GTFD		(12<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_12_5GXFD		(12<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_13GTFD		(13<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_13GXFD		(13<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_15GTFD		(14<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_15GXFD		(14<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_16GTFD		(15<<1)
++#define LINK_STATUS_SPEED_AND_DUPLEX_16GXFD		(15<<1)
++
++#define LINK_STATUS_AUTO_NEGOTIATE_FLAG_MASK		0x00000020
++#define LINK_STATUS_AUTO_NEGOTIATE_ENABLED		0x00000020
++
++#define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE		0x00000040
++#define LINK_STATUS_PARALLEL_DETECTION_FLAG_MASK	0x00000080
++#define LINK_STATUS_PARALLEL_DETECTION_USED		0x00000080
++
++#define LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE	0x00000200
++#define LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE	0x00000400
++#define LINK_STATUS_LINK_PARTNER_100T4_CAPABLE		0x00000800
++#define LINK_STATUS_LINK_PARTNER_100TXFD_CAPABLE	0x00001000
++#define LINK_STATUS_LINK_PARTNER_100TXHD_CAPABLE	0x00002000
++#define LINK_STATUS_LINK_PARTNER_10TFD_CAPABLE		0x00004000
++#define LINK_STATUS_LINK_PARTNER_10THD_CAPABLE		0x00008000
++
++#define LINK_STATUS_TX_FLOW_CONTROL_FLAG_MASK		0x00010000
++#define LINK_STATUS_TX_FLOW_CONTROL_ENABLED		0x00010000
++
++#define LINK_STATUS_RX_FLOW_CONTROL_FLAG_MASK		0x00020000
++#define LINK_STATUS_RX_FLOW_CONTROL_ENABLED		0x00020000
++
++#define LINK_STATUS_LINK_PARTNER_FLOW_CONTROL_MASK	0x000C0000
++#define LINK_STATUS_LINK_PARTNER_NOT_PAUSE_CAPABLE	(0<<18)
++#define LINK_STATUS_LINK_PARTNER_SYMMETRIC_PAUSE	(1<<18)
++#define LINK_STATUS_LINK_PARTNER_ASYMMETRIC_PAUSE	(2<<18)
++#define LINK_STATUS_LINK_PARTNER_BOTH_PAUSE		(3<<18)
++
++#define LINK_STATUS_SERDES_LINK 			0x00100000
++
++#define LINK_STATUS_LINK_PARTNER_2500XFD_CAPABLE	0x00200000
++#define LINK_STATUS_LINK_PARTNER_2500XHD_CAPABLE	0x00400000
++#define LINK_STATUS_LINK_PARTNER_10GXFD_CAPABLE 	0x00800000
++#define LINK_STATUS_LINK_PARTNER_12GXFD_CAPABLE 	0x01000000
++#define LINK_STATUS_LINK_PARTNER_12_5GXFD_CAPABLE	0x02000000
++#define LINK_STATUS_LINK_PARTNER_13GXFD_CAPABLE 	0x04000000
++#define LINK_STATUS_LINK_PARTNER_15GXFD_CAPABLE 	0x08000000
++#define LINK_STATUS_LINK_PARTNER_16GXFD_CAPABLE 	0x10000000
+ 
+-	u32    bc_rev; /* 8 bits each: major, minor, build */	       /* 4 */
++	u32 reserved[3];
+ 
+-	struct shared_hw_cfg	 shared_hw_config;		      /* 40 */
++};
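
As a quick reference for the new port mailbox, a minimal sketch of how a driver might decode the link_status word is given below. The masks are copied from the definitions above; the helper name and the printout are illustrative only and are not part of the patch.

#include <stdint.h>
#include <stdio.h>

#define LINK_STATUS_LINK_UP                     0x00000001
#define LINK_STATUS_SPEED_AND_DUPLEX_MASK       0x0000001E
#define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE     0x00000040
#define LINK_STATUS_TX_FLOW_CONTROL_ENABLED     0x00010000
#define LINK_STATUS_RX_FLOW_CONTROL_ENABLED     0x00020000

/* Hypothetical decode helper, not driver code. */
void decode_link_status(uint32_t ls)
{
        if (!(ls & LINK_STATUS_LINK_UP)) {
                printf("link down\n");
                return;
        }
        /* the speed/duplex code sits in bits [4:1], e.g. 10 = 10GTFD/10GXFD */
        printf("link up, speed/duplex code %u, AN %scomplete, fc tx:%d rx:%d\n",
               (unsigned int)((ls & LINK_STATUS_SPEED_AND_DUPLEX_MASK) >> 1),
               (ls & LINK_STATUS_AUTO_NEGOTIATE_COMPLETE) ? "" : "not ",
               !!(ls & LINK_STATUS_TX_FLOW_CONTROL_ENABLED),
               !!(ls & LINK_STATUS_RX_FLOW_CONTROL_ENABLED));
}
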
++
++
++struct drv_func_mb {
++
++	u32 drv_mb_header;
++#define DRV_MSG_CODE_MASK				0xffff0000
++#define DRV_MSG_CODE_LOAD_REQ				0x10000000
++#define DRV_MSG_CODE_LOAD_DONE				0x11000000
++#define DRV_MSG_CODE_UNLOAD_REQ_WOL_EN			0x20000000
++#define DRV_MSG_CODE_UNLOAD_REQ_WOL_DIS 		0x20010000
++#define DRV_MSG_CODE_UNLOAD_REQ_WOL_MCP 		0x20020000
++#define DRV_MSG_CODE_UNLOAD_DONE			0x21000000
++#define DRV_MSG_CODE_DIAG_ENTER_REQ			0x50000000
++#define DRV_MSG_CODE_DIAG_EXIT_REQ			0x60000000
++#define DRV_MSG_CODE_VALIDATE_KEY			0x70000000
++#define DRV_MSG_CODE_GET_CURR_KEY			0x80000000
++#define DRV_MSG_CODE_GET_UPGRADE_KEY			0x81000000
++#define DRV_MSG_CODE_GET_MANUF_KEY			0x82000000
++#define DRV_MSG_CODE_LOAD_L2B_PRAM			0x90000000
++
++#define DRV_MSG_SEQ_NUMBER_MASK 			0x0000ffff
++
++	u32 drv_mb_param;
++
++	u32 fw_mb_header;
++#define FW_MSG_CODE_MASK				0xffff0000
++#define FW_MSG_CODE_DRV_LOAD_COMMON			0x10100000
++#define FW_MSG_CODE_DRV_LOAD_PORT			0x10110000
++#define FW_MSG_CODE_DRV_LOAD_FUNCTION			0x10120000
++#define FW_MSG_CODE_DRV_LOAD_REFUSED			0x10200000
++#define FW_MSG_CODE_DRV_LOAD_DONE			0x11100000
++#define FW_MSG_CODE_DRV_UNLOAD_COMMON			0x20100000
++#define FW_MSG_CODE_DRV_UNLOAD_PORT			0x20110000
++#define FW_MSG_CODE_DRV_UNLOAD_FUNCTION 		0x20120000
++#define FW_MSG_CODE_DRV_UNLOAD_DONE			0x21100000
++#define FW_MSG_CODE_DIAG_ENTER_DONE			0x50100000
++#define FW_MSG_CODE_DIAG_REFUSE 			0x50200000
++#define FW_MSG_CODE_DIAG_EXIT_DONE			0x60100000
++#define FW_MSG_CODE_VALIDATE_KEY_SUCCESS		0x70100000
++#define FW_MSG_CODE_VALIDATE_KEY_FAILURE		0x70200000
++#define FW_MSG_CODE_GET_KEY_DONE			0x80100000
++#define FW_MSG_CODE_NO_KEY				0x80f00000
++#define FW_MSG_CODE_LIC_INFO_NOT_READY			0x80f80000
++#define FW_MSG_CODE_L2B_PRAM_LOADED			0x90100000
++#define FW_MSG_CODE_L2B_PRAM_T_LOAD_FAILURE		0x90210000
++#define FW_MSG_CODE_L2B_PRAM_C_LOAD_FAILURE		0x90220000
++#define FW_MSG_CODE_L2B_PRAM_X_LOAD_FAILURE		0x90230000
++#define FW_MSG_CODE_L2B_PRAM_U_LOAD_FAILURE		0x90240000
++
++#define FW_MSG_SEQ_NUMBER_MASK				0x0000ffff
++
++	u32 fw_mb_param;
++
++	u32 drv_pulse_mb;
++#define DRV_PULSE_SEQ_MASK				0x00007fff
++#define DRV_PULSE_SYSTEM_TIME_MASK			0xffff0000
++	/* The system time is in the format of
++	 * (year-2001)*12*32 + month*32 + day. */
++#define DRV_PULSE_ALWAYS_ALIVE				0x00008000
++	/* Indicates to the firmware not to go into the
++	 * OS-absent mode when it is not getting a driver pulse.
++	 * This is used for debugging as well as for PXE (MBA). */
+ 
+-	struct port_hw_cfg	 port_hw_config[FUNC_MAX];     /* 400*2=800 */
++	u32 mcp_pulse_mb;
++#define MCP_PULSE_SEQ_MASK				0x00007fff
++#define MCP_PULSE_ALWAYS_ALIVE				0x00008000
++	/* Indicates to the driver not to assert due to lack
++	 * of MCP response */
++#define MCP_EVENT_MASK					0xffff0000
++#define MCP_EVENT_OTHER_DRIVER_RESET_REQ		0x00010000
+ 
+-	struct shared_feat_cfg	 shared_feature_config; 	       /* 4 */
++	u32 iscsi_boot_signature;
++	u32 iscsi_boot_block_offset;
+ 
+-	struct port_feat_cfg	 port_feature_config[FUNC_MAX];/* 116*2=232 */
++	u32 reserved[3];
+ 
+ };
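
A rough, self-contained sketch of the handshake implied by this layout follows: the driver writes a command code plus a rolling sequence number into drv_mb_header and waits for the firmware to echo the sequence number back in fw_mb_header. The mirrored struct and the busy-wait are stand-ins for illustration; only the codes and masks come from the definitions above.

#include <stdint.h>

#define DRV_MSG_CODE_LOAD_REQ           0x10000000
#define DRV_MSG_SEQ_NUMBER_MASK         0x0000ffff
#define FW_MSG_CODE_MASK                0xffff0000
#define FW_MSG_SEQ_NUMBER_MASK          0x0000ffff

/* Trimmed mirror of the mailbox; in practice this lives in the device's
 * shared memory region, not in host RAM. */
struct func_mb_image {
        uint32_t drv_mb_header;
        uint32_t drv_mb_param;
        uint32_t fw_mb_header;
        uint32_t fw_mb_param;
};

/* Post <command | sequence>, wait until the firmware echoes the same
 * sequence number, then return the firmware's response code. */
uint32_t mb_command(volatile struct func_mb_image *mb, uint32_t cmd,
                    uint16_t *seq)
{
        uint32_t expected = ++(*seq) & DRV_MSG_SEQ_NUMBER_MASK;

        mb->drv_mb_header = cmd | expected;
        while ((mb->fw_mb_header & FW_MSG_SEQ_NUMBER_MASK) != expected)
                ;       /* a real driver would bound this wait with a timeout */

        return mb->fw_mb_header & FW_MSG_CODE_MASK;
}

A load request would then be mb_command(mb, DRV_MSG_CODE_LOAD_REQ, &seq) and the caller would compare the result against the FW_MSG_CODE_DRV_LOAD_* values.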
+ 
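
The drv_pulse_mb comment above gives the system time encoding as (year-2001)*12*32 + month*32 + day; for example, 3 March 2008 works out to (2008-2001)*12*32 + 3*32 + 3 = 2787, which fits comfortably in the 16-bit DRV_PULSE_SYSTEM_TIME_MASK field. A small illustrative helper (not driver code) that packs a date that way:

#include <stdint.h>

#define DRV_PULSE_SYSTEM_TIME_MASK      0xffff0000

/* Pack a calendar date into the upper half of drv_pulse_mb using the
 * (year-2001)*12*32 + month*32 + day encoding quoted above. */
uint32_t drv_pulse_system_time(unsigned int year, unsigned int month,
                               unsigned int day)
{
        uint32_t t = (year - 2001) * 12 * 32 + month * 32 + day;

        return (t << 16) & DRV_PULSE_SYSTEM_TIME_MASK;
}
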
+@@ -678,9 +704,8 @@ struct dev_info {						    /* size */
+ /****************************************************************************
+  * Management firmware state						    *
+  ****************************************************************************/
+-/* Allocate 320 bytes for management firmware: still not known exactly
+- * how much IMD needs. */
+-#define MGMTFW_STATE_WORD_SIZE				    80
++/* Allocate 440 bytes for management firmware */
++#define MGMTFW_STATE_WORD_SIZE				    110
+ 
+ struct mgmtfw_state {
+ 	u32 opaque[MGMTFW_STATE_WORD_SIZE];
+@@ -691,31 +716,40 @@ struct mgmtfw_state {
+  * Shared Memory Region 						    *
+  ****************************************************************************/
+ struct shmem_region {			       /*   SharedMem Offset (size) */
+-	u32		    validity_map[FUNC_MAX];    /* 0x0 (4 * 2 = 0x8) */
+-#define SHR_MEM_VALIDITY_PCI_CFG		    0x00000001
+-#define SHR_MEM_VALIDITY_MB			    0x00000002
+-#define SHR_MEM_VALIDITY_DEV_INFO		    0x00000004
++
++	u32			validity_map[PORT_MAX];  /* 0x0 (4*2 = 0x8) */
++#define SHR_MEM_FORMAT_REV_ID			    ('A'<<24)
++#define SHR_MEM_FORMAT_REV_MASK 		    0xff000000
++	/* validity bits */
++#define SHR_MEM_VALIDITY_PCI_CFG		    0x00100000
++#define SHR_MEM_VALIDITY_MB			    0x00200000
++#define SHR_MEM_VALIDITY_DEV_INFO		    0x00400000
++#define SHR_MEM_VALIDITY_RESERVED		    0x00000007
+ 	/* One licensing bit should be set */
+ #define SHR_MEM_VALIDITY_LIC_KEY_IN_EFFECT_MASK     0x00000038
+ #define SHR_MEM_VALIDITY_LIC_MANUF_KEY_IN_EFFECT    0x00000008
+ #define SHR_MEM_VALIDITY_LIC_UPGRADE_KEY_IN_EFFECT  0x00000010
+ #define SHR_MEM_VALIDITY_LIC_NO_KEY_IN_EFFECT	    0x00000020
++	/* Active MFW */
++#define SHR_MEM_VALIDITY_ACTIVE_MFW_UNKNOWN	    0x00000000
++#define SHR_MEM_VALIDITY_ACTIVE_MFW_IPMI	    0x00000040
++#define SHR_MEM_VALIDITY_ACTIVE_MFW_UMP 	    0x00000080
++#define SHR_MEM_VALIDITY_ACTIVE_MFW_NCSI	    0x000000c0
++#define SHR_MEM_VALIDITY_ACTIVE_MFW_NONE	    0x000001c0
++#define SHR_MEM_VALIDITY_ACTIVE_MFW_MASK	    0x000001c0
+ 
+-	struct drv_fw_mb    drv_fw_mb[FUNC_MAX];     /* 0x8 (28 * 2 = 0x38) */
+-
+-	struct dev_info     dev_info;			    /* 0x40 (0x438) */
++	struct dev_info 	dev_info;		 /* 0x8     (0x438) */
+ 
+-#ifdef _LICENSE_H
+-	license_key_t	    drv_lic_key[FUNC_MAX]; /* 0x478 (52 * 2 = 0x68) */
+-#else /* Linux! */
+-	u8		    reserved[52*FUNC_MAX];
+-#endif
++	u8			reserved[52*PORT_MAX];
+ 
+ 	/* FW information (for internal FW use) */
+-	u32		    fw_info_fio_offset; 	   /* 0x4e0 (0x4)   */
+-	struct mgmtfw_state mgmtfw_state;		   /* 0x4e4 (0x140) */
++	u32			fw_info_fio_offset;    /* 0x4a8       (0x4) */
++	struct mgmtfw_state	mgmtfw_state;	       /* 0x4ac     (0x1b8) */
++
++	struct drv_port_mb	port_mb[PORT_MAX];     /* 0x664 (16*2=0x20) */
++	struct drv_func_mb	func_mb[FUNC_MAX];     /* 0x684 (44*2=0x58) */
+ 
+-};							   /* 0x624 */
++};						       /* 0x6dc */
+ 
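
With the validity word now carrying a format revision and an active-MFW field in addition to the validity bits, a driver-side sanity check could look roughly like the sketch below. The required-bits policy is an assumption made for illustration; only the masks are taken from the definitions above.

#include <stdint.h>

#define SHR_MEM_FORMAT_REV_ID           ((uint32_t)'A' << 24)
#define SHR_MEM_FORMAT_REV_MASK         0xff000000
#define SHR_MEM_VALIDITY_MB             0x00200000
#define SHR_MEM_VALIDITY_DEV_INFO       0x00400000

/* Sketch only: check the per-port validity word before trusting the rest
 * of the shared memory region. */
int shmem_looks_valid(uint32_t validity)
{
        if ((validity & SHR_MEM_FORMAT_REV_MASK) != SHR_MEM_FORMAT_REV_ID)
                return 0;       /* unexpected shared memory format revision */

        /* requiring the mailbox and device info bits is an illustrative
         * policy, not something mandated by the header itself */
        return (validity & (SHR_MEM_VALIDITY_MB | SHR_MEM_VALIDITY_DEV_INFO))
               == (SHR_MEM_VALIDITY_MB | SHR_MEM_VALIDITY_DEV_INFO);
}
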
+ 
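
The offset annotations in the new layout add up; the following stand-alone arithmetic check mirrors the size comments (and assumes PORT_MAX == FUNC_MAX == 2, which is what the 4*2, 16*2 and 44*2 annotations imply):

#include <assert.h>

int main(void)
{
        unsigned int off = 0x0;

        off += 4 * 2;           /* validity_map[PORT_MAX]          -> 0x8   */
        off += 0x438;           /* struct dev_info                 -> 0x440 */
        off += 52 * 2;          /* reserved[52*PORT_MAX]           -> 0x4a8 */
        off += 4;               /* fw_info_fio_offset              -> 0x4ac */
        off += 110 * 4;         /* mgmtfw_state, 0x1b8 bytes       -> 0x664 */
        off += 16 * 2;          /* port_mb[PORT_MAX], 0x20 bytes   -> 0x684 */
        off += 44 * 2;          /* func_mb[FUNC_MAX], 0x58 bytes   -> 0x6dc */

        assert(off == 0x6dc);
        return 0;
}
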
+ #define BCM_5710_FW_MAJOR_VERSION			4
+diff --git a/drivers/net/bnx2x_init.h b/drivers/net/bnx2x_init.h
+index 04f93bf..dcaecc5 100644
+--- a/drivers/net/bnx2x_init.h
++++ b/drivers/net/bnx2x_init.h
+@@ -1,6 +1,6 @@
+ /* bnx2x_init.h: Broadcom Everest network driver.
+  *
+- * Copyright (c) 2007 Broadcom Corporation
++ * Copyright (c) 2007-2008 Broadcom Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License as published by
+@@ -409,7 +409,7 @@ static void bnx2x_init_pxp(struct bnx2x *bp)
+ 
+ 	pci_read_config_word(bp->pdev,
+ 			     bp->pcie_cap + PCI_EXP_DEVCTL, (u16 *)&val);
+-	DP(NETIF_MSG_HW, "read 0x%x from devctl\n", val);
++	DP(NETIF_MSG_HW, "read 0x%x from devctl\n", (u16)val);
+ 	w_order = ((val & PCI_EXP_DEVCTL_PAYLOAD) >> 5);
+ 	r_order = ((val & PCI_EXP_DEVCTL_READRQ) >> 12);
+ 
+@@ -472,10 +472,14 @@ static void bnx2x_init_pxp(struct bnx2x *bp)
+ 	REG_WR(bp, PXP2_REG_PSWRQ_BW_WR, val);
+ 
+ 	REG_WR(bp, PXP2_REG_RQ_WR_MBS0, w_order);
+-	REG_WR(bp, PXP2_REG_RQ_WR_MBS0 + 8, w_order);
++	REG_WR(bp, PXP2_REG_RQ_WR_MBS1, w_order);
+ 	REG_WR(bp, PXP2_REG_RQ_RD_MBS0, r_order);
+-	REG_WR(bp, PXP2_REG_RQ_RD_MBS0 + 8, r_order);
++	REG_WR(bp, PXP2_REG_RQ_RD_MBS1, r_order);
+ 
++	if (r_order == MAX_RD_ORD)
++		REG_WR(bp, PXP2_REG_RQ_PDR_LIMIT, 0xe00);
++
++	REG_WR(bp, PXP2_REG_WR_USDMDP_TH, (0x18 << w_order));
+ 	REG_WR(bp, PXP2_REG_WR_DMAE_TH, (128 << w_order)/16);
+ }
+ 
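
For readers unfamiliar with the PCIe config fields read by bnx2x_init_pxp(), the w_order and r_order values used by the MBS writes above are simply the Max_Payload_Size and Max_Read_Request_Size codes pulled out of the Device Control register (bits 7:5 and 14:12). A stand-alone illustration, with the two masks mirrored from the kernel's pci_regs.h and an arbitrary example value:

#include <stdint.h>
#include <stdio.h>

#define PCI_EXP_DEVCTL_PAYLOAD  0x00e0  /* Max_Payload_Size, bits 7:5       */
#define PCI_EXP_DEVCTL_READRQ   0x7000  /* Max_Read_Request_Size, bits 14:12 */

/* Example value only, not read from hardware: payload code 2 (512B) and
 * read request code 2 (512B) give w_order = 2 and r_order = 2. */
int main(void)
{
        uint16_t devctl = 0x2850;
        unsigned int w_order = (devctl & PCI_EXP_DEVCTL_PAYLOAD) >> 5;
        unsigned int r_order = (devctl & PCI_EXP_DEVCTL_READRQ) >> 12;

        printf("MPS code %u (%u bytes), MRRS code %u (%u bytes)\n",
               w_order, 128u << w_order, r_order, 128u << r_order);
        return 0;
}
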
+diff --git a/drivers/net/bnx2x_reg.h b/drivers/net/bnx2x_reg.h
+index 8605529..5a1aa0b 100644
+--- a/drivers/net/bnx2x_reg.h
++++ b/drivers/net/bnx2x_reg.h
+@@ -1,6 +1,6 @@
+ /* bnx2x_reg.h: Broadcom Everest network driver.
+  *
+- * Copyright (c) 2007 Broadcom Corporation
++ * Copyright (c) 2007-2008 Broadcom Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License as published by
+@@ -24,6 +24,8 @@
+ #define BRB1_REG_BRB1_INT_STS					 0x6011c
+ /* [RW 4] Parity mask register #0 read/write */
+ #define BRB1_REG_BRB1_PRTY_MASK 				 0x60138
++/* [R 4] Parity register #0 read */
++#define BRB1_REG_BRB1_PRTY_STS					 0x6012c
+ /* [RW 10] At address BRB1_IND_FREE_LIST_PRS_CRDT initialize free head. At
+    address BRB1_IND_FREE_LIST_PRS_CRDT+1 initialize free tail. At address
+    BRB1_IND_FREE_LIST_PRS_CRDT+2 initialize parser initial credit. */
+@@ -281,6 +283,8 @@
+ #define CDU_REG_CDU_INT_STS					 0x101030
+ /* [RW 5] Parity mask register #0 read/write */
+ #define CDU_REG_CDU_PRTY_MASK					 0x10104c
++/* [R 5] Parity register #0 read */
++#define CDU_REG_CDU_PRTY_STS					 0x101040
+ /* [RC 32] logging of error data in case of a CDU load error:
+    {expected_cid[15:0]; expected_type[2:0]; expected_region[2:0];
+    active_error; type_error; actual_active; actual_compressed_context}; */
+@@ -308,6 +312,8 @@
+ #define CFC_REG_CFC_INT_STS_CLR 				 0x104100
+ /* [RW 4] Parity mask register #0 read/write */
+ #define CFC_REG_CFC_PRTY_MASK					 0x104118
++/* [R 4] Parity register #0 read */
++#define CFC_REG_CFC_PRTY_STS					 0x10410c
+ /* [RW 21] CID cam access (21:1 - Data; Valid - 0) */
+ #define CFC_REG_CID_CAM 					 0x104800
+ #define CFC_REG_CONTROL0					 0x104028
+@@ -354,6 +360,8 @@
+ #define CSDM_REG_CSDM_INT_MASK_1				 0xc22ac
+ /* [RW 11] Parity mask register #0 read/write */
+ #define CSDM_REG_CSDM_PRTY_MASK 				 0xc22bc
++/* [R 11] Parity register #0 read */
++#define CSDM_REG_CSDM_PRTY_STS					 0xc22b0
+ #define CSDM_REG_ENABLE_IN1					 0xc2238
+ #define CSDM_REG_ENABLE_IN2					 0xc223c
+ #define CSDM_REG_ENABLE_OUT1					 0xc2240
+@@ -438,6 +446,9 @@
+ /* [RW 32] Parity mask register #0 read/write */
+ #define CSEM_REG_CSEM_PRTY_MASK_0				 0x200130
+ #define CSEM_REG_CSEM_PRTY_MASK_1				 0x200140
++/* [R 32] Parity register #0 read */
++#define CSEM_REG_CSEM_PRTY_STS_0				 0x200124
++#define CSEM_REG_CSEM_PRTY_STS_1				 0x200134
+ #define CSEM_REG_ENABLE_IN					 0x2000a4
+ #define CSEM_REG_ENABLE_OUT					 0x2000a8
+ /* [RW 32] This address space contains all registers and memories that are
+@@ -526,6 +537,8 @@
+ #define CSEM_REG_TS_9_AS					 0x20005c
+ /* [RW 1] Parity mask register #0 read/write */
+ #define DBG_REG_DBG_PRTY_MASK					 0xc0a8
++/* [R 1] Parity register #0 read */
++#define DBG_REG_DBG_PRTY_STS					 0xc09c
+ /* [RW 2] debug only: These bits indicate the credit for PCI request type 4
+    interface; MUST be configured AFTER pci_ext_buffer_strt_addr_lsb/msb are
+    configured */
+@@ -543,6 +556,8 @@
+ #define DMAE_REG_DMAE_INT_MASK					 0x102054
+ /* [RW 4] Parity mask register #0 read/write */
+ #define DMAE_REG_DMAE_PRTY_MASK 				 0x102064
++/* [R 4] Parity register #0 read */
++#define DMAE_REG_DMAE_PRTY_STS					 0x102058
+ /* [RW 1] Command 0 go. */
+ #define DMAE_REG_GO_C0						 0x102080
+ /* [RW 1] Command 1 go. */
+@@ -623,6 +638,8 @@
+ #define DORQ_REG_DORQ_INT_STS_CLR				 0x170178
+ /* [RW 2] Parity mask register #0 read/write */
+ #define DORQ_REG_DORQ_PRTY_MASK 				 0x170190
++/* [R 2] Parity register #0 read */
++#define DORQ_REG_DORQ_PRTY_STS					 0x170184
+ /* [RW 8] The address to write the DPM CID to STORM. */
+ #define DORQ_REG_DPM_CID_ADDR					 0x170044
+ /* [RW 5] The DPM mode CID extraction offset. */
+@@ -692,6 +709,8 @@
+ #define HC_REG_CONFIG_1 					 0x108004
+ /* [RW 3] Parity mask register #0 read/write */
+ #define HC_REG_HC_PRTY_MASK					 0x1080a0
++/* [R 3] Parity register #0 read */
++#define HC_REG_HC_PRTY_STS					 0x108094
+ /* [RW 17] status block interrupt mask; one in each bit means unmask; zero
+    in each bit means mask; bit 0 - default SB; bit 1 - SB_0; bit 2 - SB_1...
+    bit 16- SB_15; addr 0 - port 0; addr 1 - port 1 */
+@@ -1127,6 +1146,7 @@
+ #define MISC_REG_AEU_GENERAL_ATTN_17				 0xa044
+ #define MISC_REG_AEU_GENERAL_ATTN_18				 0xa048
+ #define MISC_REG_AEU_GENERAL_ATTN_19				 0xa04c
++#define MISC_REG_AEU_GENERAL_ATTN_10				 0xa028
+ #define MISC_REG_AEU_GENERAL_ATTN_11				 0xa02c
+ #define MISC_REG_AEU_GENERAL_ATTN_2				 0xa008
+ #define MISC_REG_AEU_GENERAL_ATTN_20				 0xa050
+@@ -1135,6 +1155,9 @@
+ #define MISC_REG_AEU_GENERAL_ATTN_4				 0xa010
+ #define MISC_REG_AEU_GENERAL_ATTN_5				 0xa014
+ #define MISC_REG_AEU_GENERAL_ATTN_6				 0xa018
++#define MISC_REG_AEU_GENERAL_ATTN_7				 0xa01c
++#define MISC_REG_AEU_GENERAL_ATTN_8				 0xa020
++#define MISC_REG_AEU_GENERAL_ATTN_9				 0xa024
+ /* [RW 32] first 32b for inverting the input for function 0; for each bit:
+    0= do not invert; 1= invert; mapped as follows: [0] NIG attention for
+    function0; [1] NIG attention for function1; [2] GPIO1 mcp; [3] GPIO2 mcp;
+@@ -1183,6 +1206,40 @@
+    starts at 0x0 for the A0 tape-out and increments by one for each
+    all-layer tape-out. */
+ #define MISC_REG_CHIP_REV					 0xa40c
++/* [RW 32] The following driver registers (1..6) represent 6 drivers and 32
++   clients. Each client can be controlled by one driver only. A one in a
++   bit means that this driver controls the corresponding client (e.g. bit 5
++   set means this driver controls client number 5). addr1 = set; addr0 =
++   clear; a read from either address returns the same result = status. A
++   write to address 1 requests control of all the clients whose
++   corresponding bit (in the write command) is set. If a client is free (the
++   corresponding bit in all the other drivers is clear) a one is written to
++   that driver register; if the client isn't free the bit remains zero. If
++   the corresponding bit is already set (the driver requests control of a
++   client it already controls) the ~MISC_REGISTERS_INT_STS.GENERIC_SW
++   interrupt will be asserted. A write to address 0 requests the release of
++   all the clients whose corresponding bit (in the write command) is set. If
++   the corresponding bit is clear (the driver requests the release of a
++   client it doesn't control) the ~MISC_REGISTERS_INT_STS.GENERIC_SW
++   interrupt will be asserted. */
++#define MISC_REG_DRIVER_CONTROL_1				 0xa510
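
Taking "addr1"/"addr0" in the comment above to mean the register base plus 4 and the base itself, the claim/release protocol boils down to something like the following sketch, written against an already mapped register window; the helper names and the pointer-based access are assumptions for illustration, not the driver's own code.

#include <stdint.h>

/* 'misc' points at MISC_REG_DRIVER_CONTROL_1 for the driver in question. */
int claim_client(volatile uint32_t *misc, unsigned int client)
{
        uint32_t bit = 1u << client;

        misc[1] = bit;                  /* "set" address: base + 4           */
        return (misc[0] & bit) != 0;    /* read-back of the status says who won */
}

void release_client(volatile uint32_t *misc, unsigned int client)
{
        misc[0] = 1u << client;         /* "clear" address frees the client  */
}
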
++/* [RW 32] GPIO. [31-28] FLOAT port 1; [27-24] FLOAT port 0; When any of
++   these bits is written as a '1'; the corresponding GPIO bit will turn off
++   its drivers and become an input. This is the reset state of all GPIO
++   pins. The read value of these bits will be a '1' if the last command
++   (#SET; #CLR; or #FLOAT) for this bit was a #FLOAT. (reset value 0xff).
++   [23-20] CLR port 1; [19-16] CLR port 0; When any of these bits is written
++   as a '1'; the corresponding GPIO bit will drive low. The read value of
++   these bits will be a '1' if the last command (#SET; #CLR; or #FLOAT) for
++   this bit was a #CLR. (reset value 0). [15-12] SET port 1; [11-8] SET
++   port 0; When any of these bits is written as a '1'; the corresponding
++   GPIO bit will drive high (if it has that capability). The read value of
++   these bits will be a '1' if the last command (#SET; #CLR; or #FLOAT) for
++   this bit was a #SET. (reset value 0). [7-4] VALUE port 1; [3-0] VALUE
++   port 0; RO; These bits indicate the read value of each of the eight GPIO
++   pins. This is the result value of the pin; not the drive value. Writing
++   these bits will have no effect. */
++#define MISC_REG_GPIO						 0xa490
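
Based on the field positions spelled out in the comment (SET at bits [15-8], CLR at [23-16], FLOAT at [31-24], four pins per port), a sketch of building the write value for MISC_REG_GPIO could look as follows; the positions also match the MISC_REGISTERS_GPIO_*_POS constants added further down in this header. The helper itself is illustrative only.

#include <stdint.h>

#define GPIO_SET_POS    8
#define GPIO_CLR_POS    16
#define GPIO_FLOAT_POS  24
#define GPIO_PORT_SHIFT 4

enum gpio_mode { GPIO_LOW, GPIO_HIGH, GPIO_FLOAT };

/* Build the value to write to MISC_REG_GPIO to drive pin 0..3 of port 0..1. */
uint32_t gpio_cmd(unsigned int pin, unsigned int port, enum gpio_mode mode)
{
        unsigned int bit = pin + port * GPIO_PORT_SHIFT;

        switch (mode) {
        case GPIO_HIGH:  return 1u << (GPIO_SET_POS + bit);
        case GPIO_LOW:   return 1u << (GPIO_CLR_POS + bit);
        default:         return 1u << (GPIO_FLOAT_POS + bit);
        }
}
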
+ /* [RW 1] Setting this bit enables a timer in the GRC block to timeout any
+    access that does not finish within
+    ~misc_registers_grc_timout_val.grc_timeout_val cycles. When this bit is
+@@ -1223,6 +1280,8 @@
+ #define MISC_REG_MISC_INT_MASK					 0xa388
+ /* [RW 1] Parity mask register #0 read/write */
+ #define MISC_REG_MISC_PRTY_MASK 				 0xa398
++/* [R 1] Parity register #0 read */
++#define MISC_REG_MISC_PRTY_STS					 0xa38c
+ /* [RW 32] 32 LSB of storm PLL first register; reset val = 0x 071d2911.
+    inside order of the bits is: [0] P1 divider[0] (reset value 1); [1] P1
+    divider[1] (reset value 0); [2] P1 divider[2] (reset value 0); [3] P1
+@@ -1264,6 +1323,55 @@
+ /* [RW 20] 20 bit GRC address where the scratch-pad of the MCP that is
+    shared with the driver resides */
+ #define MISC_REG_SHARED_MEM_ADDR				 0xa2b4
++/* [RW 32] SPIO. [31-24] FLOAT When any of these bits is written as a '1';
++   the corresponding SPIO bit will turn off its drivers and become an
++   input. This is the reset state of all SPIO pins. The read value of these
++   bits will be a '1' if that last command (#SET; #CLR; or #FLOAT) for this
++   bit was a #FLOAT. (reset value 0xff). [23-16] CLR When any of these bits
++   is written as a '1'; the corresponding SPIO bit will drive low. The read
++   value of these bits will be a '1' if that last command (#SET; #CLR; or
++#FLOAT) for this bit was a #CLR. (reset value 0). [15-8] SET When any of
++   these bits is written as a '1'; the corresponding SPIO bit will drive
++   high (if it has that capability). The read value of these bits will be a
++   '1' if that last command (#SET; #CLR; or #FLOAT) for this bit was a #SET.
++   (reset value 0). [7-0] VALUE RO; These bits indicate the read value of
++   each of the eight SPIO pins. This is the result value of the pin; not the
++   drive value. Writing these bits will have not effect. Each 8 bits field
++   is divided as follows: [0] VAUX Enable; when pulsed low; enables supply
++   from VAUX. (This is an output pin only; the FLOAT field is not applicable
++   for this pin); [1] VAUX Disable; when pulsed low; disables supply form
++   VAUX. (This is an output pin only; FLOAT field is not applicable for this
++   pin); [2] SEL_VAUX_B - Control to power switching logic. Drive low to
++   select VAUX supply. (This is an output pin only; it is not controlled by
++   the SET and CLR fields; it is controlled by the Main Power SM; the FLOAT
++   field is not applicable for this pin; only the VALUE fields is relevant -
++   it reflects the output value); [3] reserved; [4] spio_4; [5] spio_5; [6]
++   Bit 0 of UMP device ID select; read by UMP firmware; [7] Bit 1 of UMP
++   device ID select; read by UMP firmware. */
++#define MISC_REG_SPIO						 0xa4fc
++/* [RW 8] These bits enable the SPIO_INTs to signals event to the IGU/MC.
++   according to the following map: [3:0] reserved; [4] spio_4 [5] spio_5;
++   [7:0] reserved */
++#define MISC_REG_SPIO_EVENT_EN					 0xa2b8
++/* [RW 32] SPIO INT. [31-24] OLD_CLR Writing a '1' to these bits clears the
++   corresponding bit in the #OLD_VALUE register. This will acknowledge an
++   interrupt on the falling edge of corresponding SPIO input (reset value
++   0). [23-16] OLD_SET Writing a '1' to these bits sets the corresponding bit
++   in the #OLD_VALUE register. This will acknowledge an interrupt on the
++   rising edge of corresponding SPIO input (reset value 0). [15-8] OLD_VALUE
++   RO; These bits indicate the old value of the SPIO input value. When the
++   ~INT_STATE bit is set; this bit indicates the OLD value of the pin such
++   that if ~INT_STATE is set and this bit is '0'; then the interrupt is due
++   to a low to high edge. If ~INT_STATE is set and this bit is '1'; then the
++   interrupt is due to a high to low edge (reset value 0). [7-0] INT_STATE
++   RO; These bits indicate the current SPIO interrupt state for each SPIO
++   pin. This bit is cleared when the appropriate #OLD_SET or #OLD_CLR
++   command bit is written. This bit is set when the SPIO input does not
++   match the current value in #OLD_VALUE (reset value 0). */
++#define MISC_REG_SPIO_INT					 0xa500
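
Following the field layout in the SPIO INT comment (INT_STATE in [7-0], OLD_VALUE in [15-8], OLD_SET in [23-16], OLD_CLR in [31-24]), acknowledging an edge interrupt amounts to writing the matching OLD_SET or OLD_CLR bit back. A sketch of computing that write value; the function is illustrative, not driver code:

#include <stdint.h>

/* Return the value to write back to MISC_REG_SPIO_INT in order to
 * acknowledge a pending edge interrupt on SPIO pin 0..7 (0 if none). */
uint32_t spio_int_ack(uint32_t spio_int, unsigned int pin)
{
        uint32_t pending = spio_int & (1u << pin);          /* INT_STATE */
        uint32_t old     = spio_int & (1u << (8 + pin));    /* OLD_VALUE */

        if (!pending)
                return 0;

        /* old value 0: rising edge, acknowledge via OLD_SET (bits 23-16);
         * old value 1: falling edge, acknowledge via OLD_CLR (bits 31-24) */
        return old ? 1u << (24 + pin) : 1u << (16 + pin);
}
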
++/* [RW 1] Set by the MCP to remember if one or more of the drivers is/are
++   loaded; 0-prepare; 1-unprepare */
++#define MISC_REG_UNPREPARED					 0xa424
+ #define NIG_MASK_INTERRUPT_PORT0_REG_MASK_EMAC0_MISC_MI_INT	 (0x1<<0)
+ #define NIG_MASK_INTERRUPT_PORT0_REG_MASK_SERDES0_LINK_STATUS	 (0x1<<9)
+ #define NIG_MASK_INTERRUPT_PORT0_REG_MASK_XGXS0_LINK10G 	 (0x1<<15)
+@@ -1392,6 +1500,9 @@
+ #define NIG_REG_NIG_INGRESS_EMAC0_NO_CRC			 0x10044
+ /* [RW 1] Input enable for RX PBF LP IF */
+ #define NIG_REG_PBF_LB_IN_EN					 0x100b4
++/* [RW 1] Value of this register will be transmitted to port swap when
++   ~nig_registers_strap_override.strap_override =1 */
++#define NIG_REG_PORT_SWAP					 0x10394
+ /* [RW 1] output enable for RX parser descriptor IF */
+ #define NIG_REG_PRS_EOP_OUT_EN					 0x10104
+ /* [RW 1] Input enable for RX parser request IF */
+@@ -1410,6 +1521,10 @@
+ #define NIG_REG_STAT2_BRB_OCTET 				 0x107e0
+ #define NIG_REG_STATUS_INTERRUPT_PORT0				 0x10328
+ #define NIG_REG_STATUS_INTERRUPT_PORT1				 0x1032c
++/* [RW 1] port swap mux selection. If this register is equal to 0 then port
++   swap is equal to the SPIO pin that inputs from ifmux_serdes_swap. If 1
++   then port swap is equal to ~nig_registers_port_swap.port_swap */
++#define NIG_REG_STRAP_OVERRIDE					 0x10398
+ /* [RW 1] output enable for RX_XCM0 IF */
+ #define NIG_REG_XCM0_OUT_EN					 0x100f0
+ /* [RW 1] output enable for RX_XCM1 IF */
+@@ -1499,6 +1614,8 @@
+ #define PB_REG_PB_INT_STS					 0x1c
+ /* [RW 4] Parity mask register #0 read/write */
+ #define PB_REG_PB_PRTY_MASK					 0x38
++/* [R 4] Parity register #0 read */
++#define PB_REG_PB_PRTY_STS					 0x2c
+ #define PRS_REG_A_PRSU_20					 0x40134
+ /* [R 8] debug only: CFC load request current credit. Transaction based. */
+ #define PRS_REG_CFC_LD_CURRENT_CREDIT				 0x40164
+@@ -1590,6 +1707,8 @@
+ #define PRS_REG_PRS_INT_STS					 0x40188
+ /* [RW 8] Parity mask register #0 read/write */
+ #define PRS_REG_PRS_PRTY_MASK					 0x401a4
++/* [R 8] Parity register #0 read */
++#define PRS_REG_PRS_PRTY_STS					 0x40198
+ /* [RW 8] Context region for pure acknowledge packets. Used in CFC load
+    request message */
+ #define PRS_REG_PURE_REGIONS					 0x40024
+@@ -1718,6 +1837,9 @@
+ /* [RW 32] Parity mask register #0 read/write */
+ #define PXP2_REG_PXP2_PRTY_MASK_0				 0x120588
+ #define PXP2_REG_PXP2_PRTY_MASK_1				 0x120598
++/* [R 32] Parity register #0 read */
++#define PXP2_REG_PXP2_PRTY_STS_0				 0x12057c
++#define PXP2_REG_PXP2_PRTY_STS_1				 0x12058c
+ /* [R 1] Debug only: The 'almost full' indication from each fifo (gives
+    indication about backpressure) */
+ #define PXP2_REG_RD_ALMOST_FULL_0				 0x120424
+@@ -1911,6 +2033,8 @@
+ #define PXP2_REG_RQ_HC_ENDIAN_M 				 0x1201a8
+ /* [WB 53] Onchip address table */
+ #define PXP2_REG_RQ_ONCHIP_AT					 0x122000
++/* [RW 13] Pending read limiter threshold; in Dwords */
++#define PXP2_REG_RQ_PDR_LIMIT					 0x12033c
+ /* [RW 2] Endian mode for qm */
+ #define PXP2_REG_RQ_QM_ENDIAN_M 				 0x120194
+ /* [RW 3] page size in L2P table for QM module; -4k; -8k; -16k; -32k; -64k;
+@@ -1921,6 +2045,9 @@
+ /* [RW 3] Max burst size field for read requests port 0; 000 - 128B;
+    001:256B; 010:512B; 011:1K; 100:2K; 101:4K */
+ #define PXP2_REG_RQ_RD_MBS0					 0x120160
++/* [RW 3] Max burst size field for read requests port 1; 000 - 128B;
++   001:256B; 010:512B; 011:1K; 100:2K; 101:4K */
++#define PXP2_REG_RQ_RD_MBS1					 0x120168
+ /* [RW 2] Endian mode for src */
+ #define PXP2_REG_RQ_SRC_ENDIAN_M				 0x12019c
+ /* [RW 3] page size in L2P table for SRC module; -4k; -8k; -16k; -32k; -64k;
+@@ -2000,10 +2127,17 @@
+ /* [RW 3] Max burst size field for write requests port 0; 000 - 128B;
+    001:256B; 010: 512B; */
+ #define PXP2_REG_RQ_WR_MBS0					 0x12015c
++/* [RW 3] Max burst size field for write requests port 1; 000 - 128B;
++   001:256B; 010: 512B; */
++#define PXP2_REG_RQ_WR_MBS1					 0x120164
+ /* [RW 10] if the number of entries in the dmae fifo is higher than this
+    threshold then the has_payload indication will be asserted; the default
+    value should be greater than the write MBS size! */
+ #define PXP2_REG_WR_DMAE_TH					 0x120368
++/* [RW 10] if the number of entries in the usdmdp fifo is higher than this
++   threshold then the has_payload indication will be asserted; the default
++   value should be greater than the write MBS size! */
++#define PXP2_REG_WR_USDMDP_TH					 0x120348
+ /* [R 1] debug only: Indication if PSWHST arbiter is idle */
+ #define PXP_REG_HST_ARB_IS_IDLE 				 0x103004
+ /* [R 8] debug only: A bit mask for all PSWHST arbiter clients. '1' means
+@@ -2021,6 +2155,8 @@
+ #define PXP_REG_PXP_INT_STS_CLR_0				 0x10306c
+ /* [RW 26] Parity mask register #0 read/write */
+ #define PXP_REG_PXP_PRTY_MASK					 0x103094
++/* [R 26] Parity register #0 read */
++#define PXP_REG_PXP_PRTY_STS					 0x103088
+ /* [RW 4] The activity counter initial increment value sent in the load
+    request */
+ #define QM_REG_ACTCTRINITVAL_0					 0x168040
+@@ -2127,6 +2263,8 @@
+ #define QM_REG_QM_INT_STS					 0x168438
+ /* [RW 9] Parity mask register #0 read/write */
+ #define QM_REG_QM_PRTY_MASK					 0x168454
++/* [R 9] Parity register #0 read */
++#define QM_REG_QM_PRTY_STS					 0x168448
+ /* [R 32] Current queues in pipeline: Queues from 32 to 63 */
+ #define QM_REG_QSTATUS_HIGH					 0x16802c
+ /* [R 32] Current queues in pipeline: Queues from 0 to 31 */
+@@ -2410,6 +2548,8 @@
+ #define SRC_REG_SRC_INT_STS					 0x404ac
+ /* [RW 3] Parity mask register #0 read/write */
+ #define SRC_REG_SRC_PRTY_MASK					 0x404c8
++/* [R 3] Parity register #0 read */
++#define SRC_REG_SRC_PRTY_STS					 0x404bc
+ /* [R 4] Used to read the value of the XX protection CAM occupancy counter. */
+ #define TCM_REG_CAM_OCCUP					 0x5017c
+ /* [RW 1] CDU AG read Interface enable. If 0 - the request input is
+@@ -2730,6 +2870,8 @@
+ #define TSDM_REG_TSDM_INT_MASK_1				 0x422ac
+ /* [RW 11] Parity mask register #0 read/write */
+ #define TSDM_REG_TSDM_PRTY_MASK 				 0x422bc
++/* [R 11] Parity register #0 read */
++#define TSDM_REG_TSDM_PRTY_STS					 0x422b0
+ /* [RW 5] The number of time_slots in the arbitration cycle */
+ #define TSEM_REG_ARB_CYCLE_SIZE 				 0x180034
+ /* [RW 3] The source that is associated with arbitration element 0. Source
+@@ -2854,6 +2996,9 @@
+ /* [RW 32] Parity mask register #0 read/write */
+ #define TSEM_REG_TSEM_PRTY_MASK_0				 0x180120
+ #define TSEM_REG_TSEM_PRTY_MASK_1				 0x180130
++/* [R 32] Parity register #0 read */
++#define TSEM_REG_TSEM_PRTY_STS_0				 0x180114
++#define TSEM_REG_TSEM_PRTY_STS_1				 0x180124
+ /* [R 5] Used to read the XX protection CAM occupancy counter. */
+ #define UCM_REG_CAM_OCCUP					 0xe0170
+ /* [RW 1] CDU AG read Interface enable. If 0 - the request input is
+@@ -3155,6 +3300,8 @@
+ #define USDM_REG_USDM_INT_MASK_1				 0xc42b0
+ /* [RW 11] Parity mask register #0 read/write */
+ #define USDM_REG_USDM_PRTY_MASK 				 0xc42c0
++/* [R 11] Parity register #0 read */
++#define USDM_REG_USDM_PRTY_STS					 0xc42b4
+ /* [RW 5] The number of time_slots in the arbitration cycle */
+ #define USEM_REG_ARB_CYCLE_SIZE 				 0x300034
+ /* [RW 3] The source that is associated with arbitration element 0. Source
+@@ -3279,6 +3426,9 @@
+ /* [RW 32] Parity mask register #0 read/write */
+ #define USEM_REG_USEM_PRTY_MASK_0				 0x300130
+ #define USEM_REG_USEM_PRTY_MASK_1				 0x300140
++/* [R 32] Parity register #0 read */
++#define USEM_REG_USEM_PRTY_STS_0				 0x300124
++#define USEM_REG_USEM_PRTY_STS_1				 0x300134
+ /* [RW 2] The queue index for registration on Aux1 counter flag. */
+ #define XCM_REG_AUX1_Q						 0x20134
+ /* [RW 2] Per each decision rule the queue index to register to. */
+@@ -3684,6 +3834,8 @@
+ #define XSDM_REG_XSDM_INT_MASK_1				 0x1662ac
+ /* [RW 11] Parity mask register #0 read/write */
+ #define XSDM_REG_XSDM_PRTY_MASK 				 0x1662bc
++/* [R 11] Parity register #0 read */
++#define XSDM_REG_XSDM_PRTY_STS					 0x1662b0
+ /* [RW 5] The number of time_slots in the arbitration cycle */
+ #define XSEM_REG_ARB_CYCLE_SIZE 				 0x280034
+ /* [RW 3] The source that is associated with arbitration element 0. Source
+@@ -3808,6 +3960,9 @@
+ /* [RW 32] Parity mask register #0 read/write */
+ #define XSEM_REG_XSEM_PRTY_MASK_0				 0x280130
+ #define XSEM_REG_XSEM_PRTY_MASK_1				 0x280140
++/* [R 32] Parity register #0 read */
++#define XSEM_REG_XSEM_PRTY_STS_0				 0x280124
++#define XSEM_REG_XSEM_PRTY_STS_1				 0x280134
+ #define MCPR_NVM_ACCESS_ENABLE_EN				 (1L<<0)
+ #define MCPR_NVM_ACCESS_ENABLE_WR_EN				 (1L<<1)
+ #define MCPR_NVM_ADDR_NVM_ADDR_VALUE				 (0xffffffL<<0)
+@@ -3847,6 +4002,8 @@
+ #define EMAC_MDIO_COMM_START_BUSY				 (1L<<29)
+ #define EMAC_MDIO_MODE_AUTO_POLL				 (1L<<4)
+ #define EMAC_MDIO_MODE_CLAUSE_45				 (1L<<31)
++#define EMAC_MDIO_MODE_CLOCK_CNT				 (0x3fL<<16)
++#define EMAC_MDIO_MODE_CLOCK_CNT_BITSHIFT			 16
+ #define EMAC_MODE_25G_MODE					 (1L<<5)
+ #define EMAC_MODE_ACPI_RCVD					 (1L<<20)
+ #define EMAC_MODE_HALF_DUPLEX					 (1L<<1)
+@@ -3874,6 +4031,17 @@
+ #define EMAC_RX_MTU_SIZE_JUMBO_ENA				 (1L<<31)
+ #define EMAC_TX_MODE_EXT_PAUSE_EN				 (1L<<3)
+ #define EMAC_TX_MODE_RESET					 (1L<<0)
++#define MISC_REGISTERS_GPIO_1					 1
++#define MISC_REGISTERS_GPIO_2					 2
++#define MISC_REGISTERS_GPIO_3					 3
++#define MISC_REGISTERS_GPIO_CLR_POS				 16
++#define MISC_REGISTERS_GPIO_FLOAT				 (0xffL<<24)
++#define MISC_REGISTERS_GPIO_FLOAT_POS				 24
++#define MISC_REGISTERS_GPIO_INPUT_HI_Z				 2
++#define MISC_REGISTERS_GPIO_OUTPUT_HIGH 			 1
++#define MISC_REGISTERS_GPIO_OUTPUT_LOW				 0
++#define MISC_REGISTERS_GPIO_PORT_SHIFT				 4
++#define MISC_REGISTERS_GPIO_SET_POS				 8
+ #define MISC_REGISTERS_RESET_REG_1_CLEAR			 0x588
+ #define MISC_REGISTERS_RESET_REG_1_SET				 0x584
+ #define MISC_REGISTERS_RESET_REG_2_CLEAR			 0x598
+@@ -3891,6 +4059,25 @@
+ #define MISC_REGISTERS_RESET_REG_3_MISC_NIG_MUX_XGXS0_RSTB_HW	 (0x1<<4)
+ #define MISC_REGISTERS_RESET_REG_3_MISC_NIG_MUX_XGXS0_TXD_FIFO_RSTB (0x1<<8)
+ #define MISC_REGISTERS_RESET_REG_3_SET				 0x5a4
++#define MISC_REGISTERS_SPIO_4					 4
++#define MISC_REGISTERS_SPIO_5					 5
++#define MISC_REGISTERS_SPIO_7					 7
++#define MISC_REGISTERS_SPIO_CLR_POS				 16
++#define MISC_REGISTERS_SPIO_FLOAT				 (0xffL<<24)
++#define GRC_MISC_REGISTERS_SPIO_FLOAT7				 0x80000000
++#define GRC_MISC_REGISTERS_SPIO_FLOAT6				 0x40000000
++#define GRC_MISC_REGISTERS_SPIO_FLOAT5				 0x20000000
++#define GRC_MISC_REGISTERS_SPIO_FLOAT4				 0x10000000
++#define MISC_REGISTERS_SPIO_FLOAT_POS				 24
++#define MISC_REGISTERS_SPIO_INPUT_HI_Z				 2
++#define MISC_REGISTERS_SPIO_INT_OLD_SET_POS			 16
++#define MISC_REGISTERS_SPIO_OUTPUT_HIGH 			 1
++#define MISC_REGISTERS_SPIO_OUTPUT_LOW				 0
++#define MISC_REGISTERS_SPIO_SET_POS				 8
++#define HW_LOCK_MAX_RESOURCE_VALUE				 31
++#define HW_LOCK_RESOURCE_8072_MDIO				 0
++#define HW_LOCK_RESOURCE_GPIO					 1
++#define HW_LOCK_RESOURCE_SPIO					 2
+ #define AEU_INPUTS_ATTN_BITS_BRB_PARITY_ERROR		      (1<<18)
+ #define AEU_INPUTS_ATTN_BITS_CCM_HW_INTERRUPT		      (1<<31)
+ #define AEU_INPUTS_ATTN_BITS_CDU_HW_INTERRUPT		      (1<<9)
+@@ -3918,6 +4105,7 @@
+ #define AEU_INPUTS_ATTN_BITS_QM_HW_INTERRUPT		      (1<<3)
+ #define AEU_INPUTS_ATTN_BITS_QM_PARITY_ERROR		      (1<<2)
+ #define AEU_INPUTS_ATTN_BITS_SEARCHER_PARITY_ERROR	      (1<<22)
++#define AEU_INPUTS_ATTN_BITS_SPIO5			      (1<<15)
+ #define AEU_INPUTS_ATTN_BITS_TCM_HW_INTERRUPT		      (1<<27)
+ #define AEU_INPUTS_ATTN_BITS_TIMERS_HW_INTERRUPT	      (1<<5)
+ #define AEU_INPUTS_ATTN_BITS_TSDM_HW_INTERRUPT		      (1<<25)
+@@ -4206,6 +4394,9 @@
+ #define MDIO_XGXS_BLOCK2_RX_LN_SWAP_FORCE_ENABLE	0x4000
+ #define MDIO_XGXS_BLOCK2_TX_LN_SWAP			0x11
+ #define MDIO_XGXS_BLOCK2_TX_LN_SWAP_ENABLE		0x8000
++#define MDIO_XGXS_BLOCK2_UNICORE_MODE_10G		0x14
++#define MDIO_XGXS_BLOCK2_UNICORE_MODE_10G_CX4_XGXS	0x0001
++#define MDIO_XGXS_BLOCK2_UNICORE_MODE_10G_HIGIG_XGXS	0x0010
+ #define MDIO_XGXS_BLOCK2_TEST_MODE_LANE 		0x15
+ 
+ #define MDIO_REG_BANK_GP_STATUS 			0x8120
+@@ -4362,11 +4553,13 @@
+ #define MDIO_COMBO_IEEE0_AUTO_NEG_LINK_PARTNER_ABILITY1_SGMII_MODE   0x0001
+ 
+ 
++#define EXT_PHY_AUTO_NEG_DEVAD				0x7
+ #define EXT_PHY_OPT_PMA_PMD_DEVAD			0x1
+ #define EXT_PHY_OPT_WIS_DEVAD				0x2
+ #define EXT_PHY_OPT_PCS_DEVAD				0x3
+ #define EXT_PHY_OPT_PHY_XS_DEVAD			0x4
+ #define EXT_PHY_OPT_CNTL				0x0
++#define EXT_PHY_OPT_CNTL2				0x7
+ #define EXT_PHY_OPT_PMD_RX_SD				0xa
+ #define EXT_PHY_OPT_PMD_MISC_CNTL			0xca0a
+ #define EXT_PHY_OPT_PHY_IDENTIFIER			0xc800
+@@ -4378,11 +4571,24 @@
+ #define EXT_PHY_OPT_LASI_STATUS 			0x9005
+ #define EXT_PHY_OPT_PCS_STATUS				0x0020
+ #define EXT_PHY_OPT_XGXS_LANE_STATUS			0x0018
++#define EXT_PHY_OPT_AN_LINK_STATUS			0x8304
++#define EXT_PHY_OPT_AN_CL37_CL73			0x8370
++#define EXT_PHY_OPT_AN_CL37_FD				0xffe4
++#define EXT_PHY_OPT_AN_CL37_AN				0xffe0
++#define EXT_PHY_OPT_AN_ADV				0x11
+ 
+ #define EXT_PHY_KR_PMA_PMD_DEVAD			0x1
+ #define EXT_PHY_KR_PCS_DEVAD				0x3
+ #define EXT_PHY_KR_AUTO_NEG_DEVAD			0x7
+ #define EXT_PHY_KR_CTRL 				0x0000
++#define EXT_PHY_KR_STATUS				0x0001
++#define EXT_PHY_KR_AUTO_NEG_COMPLETE		    	0x0020
++#define EXT_PHY_KR_AUTO_NEG_ADVERT			0x0010
++#define EXT_PHY_KR_AUTO_NEG_ADVERT_PAUSE	    	0x0400
++#define EXT_PHY_KR_AUTO_NEG_ADVERT_PAUSE_ASYMMETRIC 	0x0800
++#define EXT_PHY_KR_AUTO_NEG_ADVERT_PAUSE_BOTH	    	0x0C00
++#define EXT_PHY_KR_AUTO_NEG_ADVERT_PAUSE_MASK	    	0x0C00
++#define EXT_PHY_KR_LP_AUTO_NEG				0x0013
+ #define EXT_PHY_KR_CTRL2				0x0007
+ #define EXT_PHY_KR_PCS_STATUS				0x0020
+ #define EXT_PHY_KR_PMD_CTRL				0x0096
+@@ -4391,4 +4597,8 @@
+ #define EXT_PHY_KR_MISC_CTRL1				0xca85
+ #define EXT_PHY_KR_GEN_CTRL				0xca10
+ #define EXT_PHY_KR_ROM_CODE				0xca19
++#define EXT_PHY_KR_ROM_RESET_INTERNAL_MP		0x0188
++#define EXT_PHY_KR_ROM_MICRO_RESET			0x018a
++
++#define EXT_PHY_SFX7101_XGXS_TEST1	    0xc00a
+ 
+diff --git a/drivers/net/cs89x0.c b/drivers/net/cs89x0.c
+index 5717509..348371f 100644
+--- a/drivers/net/cs89x0.c
++++ b/drivers/net/cs89x0.c
+@@ -172,30 +172,30 @@ static char version[] __initdata =
+    them to system IRQ numbers. This mapping is card specific and is set to
+    the configuration of the Cirrus Eval board for this chip. */
+ #ifdef CONFIG_ARCH_CLPS7500
+-static unsigned int netcard_portlist[] __initdata =
++static unsigned int netcard_portlist[] __used __initdata =
+    { 0x80090303, 0x300, 0x320, 0x340, 0x360, 0x200, 0x220, 0x240, 0x260, 0x280, 0x2a0, 0x2c0, 0x2e0, 0};
+ static unsigned int cs8900_irq_map[] = {12,0,0,0};
+ #elif defined(CONFIG_SH_HICOSH4)
+-static unsigned int netcard_portlist[] __initdata =
++static unsigned int netcard_portlist[] __used __initdata =
+    { 0x0300, 0};
+ static unsigned int cs8900_irq_map[] = {1,0,0,0};
+ #elif defined(CONFIG_MACH_IXDP2351)
+-static unsigned int netcard_portlist[] __initdata = {IXDP2351_VIRT_CS8900_BASE, 0};
++static unsigned int netcard_portlist[] __used __initdata = {IXDP2351_VIRT_CS8900_BASE, 0};
+ static unsigned int cs8900_irq_map[] = {IRQ_IXDP2351_CS8900, 0, 0, 0};
+ #include <asm/irq.h>
+ #elif defined(CONFIG_ARCH_IXDP2X01)
+ #include <asm/irq.h>
+-static unsigned int netcard_portlist[] __initdata = {IXDP2X01_CS8900_VIRT_BASE, 0};
++static unsigned int netcard_portlist[] __used __initdata = {IXDP2X01_CS8900_VIRT_BASE, 0};
+ static unsigned int cs8900_irq_map[] = {IRQ_IXDP2X01_CS8900, 0, 0, 0};
+ #elif defined(CONFIG_ARCH_PNX010X)
+ #include <asm/irq.h>
+ #include <asm/arch/gpio.h>
+ #define CIRRUS_DEFAULT_BASE	IO_ADDRESS(EXT_STATIC2_s0_BASE + 0x200000)	/* = Physical address 0x48200000 */
+ #define CIRRUS_DEFAULT_IRQ	VH_INTC_INT_NUM_CASCADED_INTERRUPT_1 /* Event inputs bank 1 - ID 35/bit 3 */
+-static unsigned int netcard_portlist[] __initdata = {CIRRUS_DEFAULT_BASE, 0};
++static unsigned int netcard_portlist[] __used __initdata = {CIRRUS_DEFAULT_BASE, 0};
+ static unsigned int cs8900_irq_map[] = {CIRRUS_DEFAULT_IRQ, 0, 0, 0};
+ #else
+-static unsigned int netcard_portlist[] __initdata =
++static unsigned int netcard_portlist[] __used __initdata =
+    { 0x300, 0x320, 0x340, 0x360, 0x200, 0x220, 0x240, 0x260, 0x280, 0x2a0, 0x2c0, 0x2e0, 0};
+ static unsigned int cs8900_irq_map[] = {10,11,12,5};
+ #endif
+diff --git a/drivers/net/e1000e/82571.c b/drivers/net/e1000e/82571.c
+index 3beace5..7fe2031 100644
+--- a/drivers/net/e1000e/82571.c
++++ b/drivers/net/e1000e/82571.c
+@@ -438,7 +438,7 @@ static void e1000_release_nvm_82571(struct e1000_hw *hw)
+  *  For non-82573 silicon, write data to EEPROM at offset using SPI interface.
+  *
+  *  If e1000e_update_nvm_checksum is not called after this function, the
+- *  EEPROM will most likley contain an invalid checksum.
++ *  EEPROM will most likely contain an invalid checksum.
+  **/
+ static s32 e1000_write_nvm_82571(struct e1000_hw *hw, u16 offset, u16 words,
+ 				 u16 *data)
+@@ -547,7 +547,7 @@ static s32 e1000_validate_nvm_checksum_82571(struct e1000_hw *hw)
+  *  poll for completion.
+  *
+  *  If e1000e_update_nvm_checksum is not called after this function, the
+- *  EEPROM will most likley contain an invalid checksum.
++ *  EEPROM will most likely contain an invalid checksum.
+  **/
+ static s32 e1000_write_nvm_eewr_82571(struct e1000_hw *hw, u16 offset,
+ 				      u16 words, u16 *data)
+@@ -1053,7 +1053,7 @@ static s32 e1000_setup_fiber_serdes_link_82571(struct e1000_hw *hw)
+ 		/* If SerDes loopback mode is entered, there is no form
+ 		 * of reset to take the adapter out of that mode.  So we
+ 		 * have to explicitly take the adapter out of loopback
+-		 * mode.  This prevents drivers from twidling their thumbs
++		 * mode.  This prevents drivers from twiddling their thumbs
+ 		 * if another tool failed to take it out of loopback mode.
+ 		 */
+ 		ew32(SCTL,
+@@ -1098,7 +1098,7 @@ static s32 e1000_valid_led_default_82571(struct e1000_hw *hw, u16 *data)
+  *  e1000e_get_laa_state_82571 - Get locally administered address state
+  *  @hw: pointer to the HW structure
+  *
+- *  Retrieve and return the current locally administed address state.
++ *  Retrieve and return the current locally administered address state.
+  **/
+ bool e1000e_get_laa_state_82571(struct e1000_hw *hw)
+ {
+@@ -1113,7 +1113,7 @@ bool e1000e_get_laa_state_82571(struct e1000_hw *hw)
+  *  @hw: pointer to the HW structure
+  *  @state: enable/disable locally administered address
+  *
+- *  Enable/Disable the current locally administed address state.
++ *  Enable/Disable the current locally administered address state.
+  **/
+ void e1000e_set_laa_state_82571(struct e1000_hw *hw, bool state)
+ {
+@@ -1281,16 +1281,6 @@ static struct e1000_phy_operations e82_phy_ops_m88 = {
+ 
+ static struct e1000_nvm_operations e82571_nvm_ops = {
+ 	.acquire_nvm		= e1000_acquire_nvm_82571,
+-	.read_nvm		= e1000e_read_nvm_spi,
+-	.release_nvm		= e1000_release_nvm_82571,
+-	.update_nvm		= e1000_update_nvm_checksum_82571,
+-	.valid_led_default	= e1000_valid_led_default_82571,
+-	.validate_nvm		= e1000_validate_nvm_checksum_82571,
+-	.write_nvm		= e1000_write_nvm_82571,
+-};
+-
+-static struct e1000_nvm_operations e82573_nvm_ops = {
+-	.acquire_nvm		= e1000_acquire_nvm_82571,
+ 	.read_nvm		= e1000e_read_nvm_eerd,
+ 	.release_nvm		= e1000_release_nvm_82571,
+ 	.update_nvm		= e1000_update_nvm_checksum_82571,
+@@ -1355,6 +1345,6 @@ struct e1000_info e1000_82573_info = {
+ 	.get_invariants		= e1000_get_invariants_82571,
+ 	.mac_ops		= &e82571_mac_ops,
+ 	.phy_ops		= &e82_phy_ops_m88,
+-	.nvm_ops		= &e82573_nvm_ops,
++	.nvm_ops		= &e82571_nvm_ops,
+ };
+ 
+diff --git a/drivers/net/e1000e/defines.h b/drivers/net/e1000e/defines.h
+index 6232c3e..a4f511f 100644
+--- a/drivers/net/e1000e/defines.h
++++ b/drivers/net/e1000e/defines.h
+@@ -66,7 +66,7 @@
+ #define E1000_WUFC_ARP  0x00000020 /* ARP Request Packet Wakeup Enable */
+ 
+ /* Extended Device Control */
+-#define E1000_CTRL_EXT_SDP7_DATA 0x00000080 /* Value of SW Defineable Pin 7 */
++#define E1000_CTRL_EXT_SDP7_DATA 0x00000080 /* Value of SW Definable Pin 7 */
+ #define E1000_CTRL_EXT_EE_RST    0x00002000 /* Reinitialize from EEPROM */
+ #define E1000_CTRL_EXT_RO_DIS    0x00020000 /* Relaxed Ordering disable */
+ #define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000
+@@ -75,12 +75,12 @@
+ #define E1000_CTRL_EXT_IAME           0x08000000 /* Interrupt acknowledge Auto-mask */
+ #define E1000_CTRL_EXT_INT_TIMER_CLR  0x20000000 /* Clear Interrupt timers after IMS clear */
+ 
+-/* Receive Decriptor bit definitions */
++/* Receive Descriptor bit definitions */
+ #define E1000_RXD_STAT_DD       0x01    /* Descriptor Done */
+ #define E1000_RXD_STAT_EOP      0x02    /* End of Packet */
+ #define E1000_RXD_STAT_IXSM     0x04    /* Ignore checksum */
+ #define E1000_RXD_STAT_VP       0x08    /* IEEE VLAN Packet */
+-#define E1000_RXD_STAT_UDPCS    0x10    /* UDP xsum caculated */
++#define E1000_RXD_STAT_UDPCS    0x10    /* UDP xsum calculated */
+ #define E1000_RXD_STAT_TCPCS    0x20    /* TCP xsum calculated */
+ #define E1000_RXD_ERR_CE        0x01    /* CRC Error */
+ #define E1000_RXD_ERR_SE        0x02    /* Symbol Error */
+@@ -223,7 +223,7 @@
+ #define E1000_STATUS_LAN_INIT_DONE 0x00000200   /* Lan Init Completion by NVM */
+ #define E1000_STATUS_GIO_MASTER_ENABLE 0x00080000 /* Status of Master requests. */
+ 
+-/* Constants used to intrepret the masked PCI-X bus speed. */
++/* Constants used to interpret the masked PCI-X bus speed. */
+ 
+ #define HALF_DUPLEX 1
+ #define FULL_DUPLEX 2
+@@ -517,7 +517,7 @@
+ /* PHY 1000 MII Register/Bit Definitions */
+ /* PHY Registers defined by IEEE */
+ #define PHY_CONTROL      0x00 /* Control Register */
+-#define PHY_STATUS       0x01 /* Status Regiser */
++#define PHY_STATUS       0x01 /* Status Register */
+ #define PHY_ID1          0x02 /* Phy Id Reg (word 1) */
+ #define PHY_ID2          0x03 /* Phy Id Reg (word 2) */
+ #define PHY_AUTONEG_ADV  0x04 /* Autoneg Advertisement */
+diff --git a/drivers/net/e1000e/e1000.h b/drivers/net/e1000e/e1000.h
+index 8b88c22..327c062 100644
+--- a/drivers/net/e1000e/e1000.h
++++ b/drivers/net/e1000e/e1000.h
+@@ -42,8 +42,7 @@
+ struct e1000_info;
+ 
+ #define ndev_printk(level, netdev, format, arg...) \
+-	printk(level "%s: %s: " format, (netdev)->dev.parent->bus_id, \
+-	       (netdev)->name, ## arg)
++	printk(level "%s: " format, (netdev)->name, ## arg)
+ 
+ #ifdef DEBUG
+ #define ndev_dbg(netdev, format, arg...) \
+diff --git a/drivers/net/e1000e/hw.h b/drivers/net/e1000e/hw.h
+index 3c5862f..916025b 100644
+--- a/drivers/net/e1000e/hw.h
++++ b/drivers/net/e1000e/hw.h
+@@ -184,7 +184,7 @@ enum e1e_registers {
+ 	E1000_ICRXDMTC = 0x04120, /* Irq Cause Rx Desc MinThreshold Count */
+ 	E1000_ICRXOC   = 0x04124, /* Irq Cause Receiver Overrun Count */
+ 	E1000_RXCSUM   = 0x05000, /* RX Checksum Control - RW */
+-	E1000_RFCTL    = 0x05008, /* Receive Filter Control*/
++	E1000_RFCTL    = 0x05008, /* Receive Filter Control */
+ 	E1000_MTA      = 0x05200, /* Multicast Table Array - RW Array */
+ 	E1000_RA       = 0x05400, /* Receive Address - RW Array */
+ 	E1000_VFTA     = 0x05600, /* VLAN Filter Table Array - RW Array */
+@@ -202,7 +202,7 @@ enum e1e_registers {
+ 	E1000_FACTPS    = 0x05B30, /* Function Active and Power State to MNG */
+ 	E1000_SWSM      = 0x05B50, /* SW Semaphore */
+ 	E1000_FWSM      = 0x05B54, /* FW Semaphore */
+-	E1000_HICR      = 0x08F00, /* Host Inteface Control */
++	E1000_HICR      = 0x08F00, /* Host Interface Control */
+ };
+ 
+ /* RSS registers */
+diff --git a/drivers/net/e1000e/ich8lan.c b/drivers/net/e1000e/ich8lan.c
+index 8f8139d..0ae3955 100644
+--- a/drivers/net/e1000e/ich8lan.c
++++ b/drivers/net/e1000e/ich8lan.c
+@@ -671,7 +671,7 @@ static s32 e1000_get_phy_info_ich8lan(struct e1000_hw *hw)
+  *  e1000_check_polarity_ife_ich8lan - Check cable polarity for IFE PHY
+  *  @hw: pointer to the HW structure
+  *
+- *  Polarity is determined on the polarity reveral feature being enabled.
++ *  Polarity is determined based on the polarity reversal feature being enabled.
+  *  This function is only called by other family-specific
+  *  routines.
+  **/
+@@ -947,7 +947,7 @@ static s32 e1000_flash_cycle_init_ich8lan(struct e1000_hw *hw)
+ 	/* Either we should have a hardware SPI cycle in progress
+ 	 * bit to check against, in order to start a new cycle or
+ 	 * FDONE bit should be changed in the hardware so that it
+-	 * is 1 after harware reset, which can then be used as an
++	 * is 1 after hardware reset, which can then be used as an
+ 	 * indication whether a cycle is in progress or has been
+ 	 * completed.
+ 	 */
+@@ -1155,7 +1155,7 @@ static s32 e1000_write_nvm_ich8lan(struct e1000_hw *hw, u16 offset, u16 words,
+  *  which writes the checksum to the shadow ram.  The changes in the shadow
+  *  ram are then committed to the EEPROM by processing each bank at a time
+  *  checking for the modified bit and writing only the pending changes.
+- *  After a succesful commit, the shadow ram is cleared and is ready for
++ *  After a successful commit, the shadow ram is cleared and is ready for
+  *  future writes.
+  **/
+ static s32 e1000_update_nvm_checksum_ich8lan(struct e1000_hw *hw)
+@@ -1680,7 +1680,7 @@ static s32 e1000_reset_hw_ich8lan(struct e1000_hw *hw)
+  *   - initialize LED identification
+  *   - setup receive address registers
+  *   - setup flow control
+- *   - setup transmit discriptors
++ *   - setup transmit descriptors
+  *   - clear statistics
+  **/
+ static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw)
+@@ -1961,7 +1961,7 @@ static s32 e1000_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw)
+ 		     E1000_PHY_CTRL_NOND0A_GBE_DISABLE);
+ 	ew32(PHY_CTRL, phy_ctrl);
+ 
+-	/* Call gig speed drop workaround on Giga disable before accessing
++	/* Call gig speed drop workaround on Gig disable before accessing
+ 	 * any PHY registers */
+ 	e1000e_gig_downshift_workaround_ich8lan(hw);
+ 
+@@ -1972,7 +1972,7 @@ static s32 e1000_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw)
+ /**
+  *  e1000_set_kmrn_lock_loss_workaound_ich8lan - Set Kumeran workaround state
+  *  @hw: pointer to the HW structure
+- *  @state: boolean value used to set the current Kumaran workaround state
++ *  @state: boolean value used to set the current Kumeran workaround state
+  *
+  *  If ICH8, set the current Kumeran workaround state (enabled - TRUE
+  *  /disabled - FALSE).
+@@ -2017,7 +2017,7 @@ void e1000e_igp3_phy_powerdown_workaround_ich8lan(struct e1000_hw *hw)
+ 			E1000_PHY_CTRL_NOND0A_GBE_DISABLE);
+ 		ew32(PHY_CTRL, reg);
+ 
+-		/* Call gig speed drop workaround on Giga disable before
++		/* Call gig speed drop workaround on Gig disable before
+ 		 * accessing any PHY registers */
+ 		if (hw->mac.type == e1000_ich8lan)
+ 			e1000e_gig_downshift_workaround_ich8lan(hw);
+@@ -2045,7 +2045,7 @@ void e1000e_igp3_phy_powerdown_workaround_ich8lan(struct e1000_hw *hw)
+  *  @hw: pointer to the HW structure
+  *
+  *  Steps to take when dropping from 1Gb/s (eg. link cable removal (LSC),
+- *  LPLU, Giga disable, MDIC PHY reset):
++ *  LPLU, Gig disable, MDIC PHY reset):
+  *    1) Set Kumeran Near-end loopback
+  *    2) Clear Kumeran Near-end loopback
+  *  Should only be called for ICH8[m] devices with IGP_3 Phy.
+@@ -2089,10 +2089,10 @@ static s32 e1000_cleanup_led_ich8lan(struct e1000_hw *hw)
+ }
+ 
+ /**
+- *  e1000_led_on_ich8lan - Turn LED's on
++ *  e1000_led_on_ich8lan - Turn LEDs on
+  *  @hw: pointer to the HW structure
+  *
+- *  Turn on the LED's.
++ *  Turn on the LEDs.
+  **/
+ static s32 e1000_led_on_ich8lan(struct e1000_hw *hw)
+ {
+@@ -2105,10 +2105,10 @@ static s32 e1000_led_on_ich8lan(struct e1000_hw *hw)
+ }
+ 
+ /**
+- *  e1000_led_off_ich8lan - Turn LED's off
++ *  e1000_led_off_ich8lan - Turn LEDs off
+  *  @hw: pointer to the HW structure
+  *
+- *  Turn off the LED's.
++ *  Turn off the LEDs.
+  **/
+ static s32 e1000_led_off_ich8lan(struct e1000_hw *hw)
+ {
+diff --git a/drivers/net/e1000e/lib.c b/drivers/net/e1000e/lib.c
+index 16f35fa..95f75a4 100644
+--- a/drivers/net/e1000e/lib.c
++++ b/drivers/net/e1000e/lib.c
+@@ -589,9 +589,6 @@ static s32 e1000_set_default_fc_generic(struct e1000_hw *hw)
+ 	s32 ret_val;
+ 	u16 nvm_data;
+ 
+-	if (mac->fc != e1000_fc_default)
+-		return 0;
+-
+ 	/* Read and store word 0x0F of the EEPROM. This word contains bits
+ 	 * that determine the hardware's default PAUSE (flow control) mode,
+ 	 * a bit that determines whether the HW defaults to enabling or
+@@ -1107,34 +1104,13 @@ s32 e1000e_config_fc_after_link_up(struct e1000_hw *hw)
+ 			 (mii_nway_lp_ability_reg & NWAY_LPAR_ASM_DIR)) {
+ 			mac->fc = e1000_fc_rx_pause;
+ 			hw_dbg(hw, "Flow Control = RX PAUSE frames only.\r\n");
+-		}
+-		/* Per the IEEE spec, at this point flow control should be
+-		 * disabled.  However, we want to consider that we could
+-		 * be connected to a legacy switch that doesn't advertise
+-		 * desired flow control, but can be forced on the link
+-		 * partner.  So if we advertised no flow control, that is
+-		 * what we will resolve to.  If we advertised some kind of
+-		 * receive capability (Rx Pause Only or Full Flow Control)
+-		 * and the link partner advertised none, we will configure
+-		 * ourselves to enable Rx Flow Control only.  We can do
+-		 * this safely for two reasons:  If the link partner really
+-		 * didn't want flow control enabled, and we enable Rx, no
+-		 * harm done since we won't be receiving any PAUSE frames
+-		 * anyway.  If the intent on the link partner was to have
+-		 * flow control enabled, then by us enabling RX only, we
+-		 * can at least receive pause frames and process them.
+-		 * This is a good idea because in most cases, since we are
+-		 * predominantly a server NIC, more times than not we will
+-		 * be asked to delay transmission of packets than asking
+-		 * our link partner to pause transmission of frames.
+-		 */
+-		else if ((mac->original_fc == e1000_fc_none) ||
+-			 (mac->original_fc == e1000_fc_tx_pause)) {
++		} else {
++			/*
++			 * Per the IEEE spec, at this point flow control
++			 * should be disabled.
++			 */
+ 			mac->fc = e1000_fc_none;
+ 			hw_dbg(hw, "Flow Control = NONE.\r\n");
+-		} else {
+-			mac->fc = e1000_fc_rx_pause;
+-			hw_dbg(hw, "Flow Control = RX PAUSE frames only.\r\n");
+ 		}
+ 
+ 		/* Now we need to do one last check...  If we auto-
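
The simplified branch above leans on the standard IEEE 802.3 PAUSE/ASM_DIR resolution; for reference, the full truth table condenses to the stand-alone sketch below. The enum and function names are local to this example, not e1000e's, and the both-PAUSE case ignores the driver's additional handling of an originally requested tx-only mode.

#include <stdbool.h>

enum fc_mode { FC_NONE, FC_RX_PAUSE, FC_TX_PAUSE, FC_FULL };

/* Resolve flow control from the local and link partner PAUSE / ASM_DIR
 * autonegotiation bits. */
enum fc_mode resolve_fc(bool l_pause, bool l_asm, bool p_pause, bool p_asm)
{
        if (l_pause && p_pause)
                return FC_FULL;         /* both sides symmetric             */
        if (!l_pause && l_asm && p_pause && p_asm)
                return FC_TX_PAUSE;     /* we may send PAUSE, partner obeys */
        if (l_pause && l_asm && !p_pause && p_asm)
                return FC_RX_PAUSE;     /* partner may send, we obey        */
        return FC_NONE;
}
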
+@@ -1164,7 +1140,7 @@ s32 e1000e_config_fc_after_link_up(struct e1000_hw *hw)
+ }
+ 
+ /**
+- *  e1000e_get_speed_and_duplex_copper - Retreive current speed/duplex
++ *  e1000e_get_speed_and_duplex_copper - Retrieve current speed/duplex
+  *  @hw: pointer to the HW structure
+  *  @speed: stores the current speed
+  *  @duplex: stores the current duplex
+@@ -1200,7 +1176,7 @@ s32 e1000e_get_speed_and_duplex_copper(struct e1000_hw *hw, u16 *speed, u16 *dup
+ }
+ 
+ /**
+- *  e1000e_get_speed_and_duplex_fiber_serdes - Retreive current speed/duplex
++ *  e1000e_get_speed_and_duplex_fiber_serdes - Retrieve current speed/duplex
+  *  @hw: pointer to the HW structure
+  *  @speed: stores the current speed
+  *  @duplex: stores the current duplex
+@@ -1410,7 +1386,7 @@ s32 e1000e_cleanup_led_generic(struct e1000_hw *hw)
+  *  e1000e_blink_led - Blink LED
+  *  @hw: pointer to the HW structure
+  *
+- *  Blink the led's which are set to be on.
++ *  Blink the LEDs which are set to be on.
+  **/
+ s32 e1000e_blink_led(struct e1000_hw *hw)
+ {
+@@ -1515,7 +1491,7 @@ void e1000e_set_pcie_no_snoop(struct e1000_hw *hw, u32 no_snoop)
+  *  @hw: pointer to the HW structure
+  *
+  *  Returns 0 if successful, else returns -10
+- *  (-E1000_ERR_MASTER_REQUESTS_PENDING) if master disable bit has not casued
++ *  (-E1000_ERR_MASTER_REQUESTS_PENDING) if master disable bit has not caused
+  *  the master requests to be disabled.
+  *
+  *  Disables PCI-Express master access and verifies there are no pending
+@@ -1876,7 +1852,7 @@ static s32 e1000_ready_nvm_eeprom(struct e1000_hw *hw)
+ }
+ 
+ /**
+- *  e1000e_read_nvm_spi - Read EEPROM's using SPI
++ *  e1000e_read_nvm_spi - Reads EEPROM using SPI
+  *  @hw: pointer to the HW structure
+  *  @offset: offset of word in the EEPROM to read
+  *  @words: number of words to read
+@@ -1980,7 +1956,7 @@ s32 e1000e_read_nvm_eerd(struct e1000_hw *hw, u16 offset, u16 words, u16 *data)
+  *  Writes data to EEPROM at offset using SPI interface.
+  *
+  *  If e1000e_update_nvm_checksum is not called after this function , the
+- *  EEPROM will most likley contain an invalid checksum.
++ *  EEPROM will most likely contain an invalid checksum.
+  **/
+ s32 e1000e_write_nvm_spi(struct e1000_hw *hw, u16 offset, u16 words, u16 *data)
+ {
+@@ -2222,7 +2198,7 @@ static u8 e1000_calculate_checksum(u8 *buffer, u32 length)
+  *
+  *  Returns E1000_success upon success, else E1000_ERR_HOST_INTERFACE_COMMAND
+  *
+- *  This function checks whether the HOST IF is enabled for command operaton
++ *  This function checks whether the HOST IF is enabled for command operation
+  *  and also checks whether the previous command is completed.  It busy waits
+  *  in case of previous command is not completed.
+  **/
+@@ -2254,7 +2230,7 @@ static s32 e1000_mng_enable_host_if(struct e1000_hw *hw)
+ }
+ 
+ /**
+- *  e1000e_check_mng_mode - check managament mode
++ *  e1000e_check_mng_mode - check management mode
+  *  @hw: pointer to the HW structure
+  *
+  *  Reads the firmware semaphore register and returns true (>0) if
+diff --git a/drivers/net/e1000e/netdev.c b/drivers/net/e1000e/netdev.c
+index 3031d6d..fc5c63f 100644
+--- a/drivers/net/e1000e/netdev.c
++++ b/drivers/net/e1000e/netdev.c
+@@ -1006,7 +1006,7 @@ static void e1000_irq_enable(struct e1000_adapter *adapter)
+  * e1000_get_hw_control - get control of the h/w from f/w
+  * @adapter: address of board private structure
+  *
+- * e1000_get_hw_control sets {CTRL_EXT|FWSM}:DRV_LOAD bit.
++ * e1000_get_hw_control sets {CTRL_EXT|SWSM}:DRV_LOAD bit.
+  * For ASF and Pass Through versions of f/w this means that
+  * the driver is loaded. For AMT version (only with 82573)
+  * of the f/w this means that the network i/f is open.
+@@ -1032,7 +1032,7 @@ static void e1000_get_hw_control(struct e1000_adapter *adapter)
+  * e1000_release_hw_control - release control of the h/w to f/w
+  * @adapter: address of board private structure
+  *
+- * e1000_release_hw_control resets {CTRL_EXT|FWSM}:DRV_LOAD bit.
++ * e1000_release_hw_control resets {CTRL_EXT|SWSM}:DRV_LOAD bit.
+  * For ASF and Pass Through versions of f/w this means that the
+  * driver is no longer loaded. For AMT version (only with 82573) i
+  * of the f/w this means that the network i/f is closed.
+@@ -1241,6 +1241,11 @@ void e1000e_free_rx_resources(struct e1000_adapter *adapter)
+ 
+ /**
+  * e1000_update_itr - update the dynamic ITR value based on statistics
++ * @adapter: pointer to adapter
++ * @itr_setting: current adapter->itr
++ * @packets: the number of packets during this measurement interval
++ * @bytes: the number of bytes during this measurement interval
++ *
+  *      Stores a new ITR value based on packets and byte
+  *      counts during the last interrupt.  The advantage of per interrupt
+  *      computation is faster updates and more accurate ITR for the current
+@@ -1250,10 +1255,6 @@ void e1000e_free_rx_resources(struct e1000_adapter *adapter)
+  *      while increasing bulk throughput.
+  *      this functionality is controlled by the InterruptThrottleRate module
+  *      parameter (see e1000_param.c)
+- * @adapter: pointer to adapter
+- * @itr_setting: current adapter->itr
+- * @packets: the number of packets during this measurement interval
+- * @bytes: the number of bytes during this measurement interval
+  **/
+ static unsigned int e1000_update_itr(struct e1000_adapter *adapter,
+ 				     u16 itr_setting, int packets,
+@@ -1366,6 +1367,7 @@ set_itr_now:
+ /**
+  * e1000_clean - NAPI Rx polling callback
+  * @adapter: board private structure
++ * @budget: number of packets the driver is allowed to process in this poll
+  **/
+ static int e1000_clean(struct napi_struct *napi, int budget)
+ {
+@@ -2000,7 +2002,7 @@ static void e1000_power_down_phy(struct e1000_adapter *adapter)
+ 	    e1000_check_reset_block(hw))
+ 		return;
+ 
+-	/* managebility (AMT) is enabled */
++	/* manageability (AMT) is enabled */
+ 	if (er32(MANC) & E1000_MANC_SMBUS_EN)
+ 		return;
+ 
+@@ -3488,7 +3490,6 @@ static int e1000_suspend(struct pci_dev *pdev, pm_message_t state)
+ static void e1000e_disable_l1aspm(struct pci_dev *pdev)
+ {
+ 	int pos;
+-	u32 cap;
+ 	u16 val;
+ 
+ 	/*
+@@ -3503,7 +3504,6 @@ static void e1000e_disable_l1aspm(struct pci_dev *pdev)
+ 	 * active.
+ 	 */
+ 	pos = pci_find_capability(pdev, PCI_CAP_ID_EXP);
+-	pci_read_config_dword(pdev, pos + PCI_EXP_LNKCAP, &cap);
+ 	pci_read_config_word(pdev, pos + PCI_EXP_LNKCTL, &val);
+ 	if (val & 0x2) {
+ 		dev_warn(&pdev->dev, "Disabling L1 ASPM\n");
+diff --git a/drivers/net/e1000e/phy.c b/drivers/net/e1000e/phy.c
+index fc6fee1..dab3c46 100644
+--- a/drivers/net/e1000e/phy.c
++++ b/drivers/net/e1000e/phy.c
+@@ -121,7 +121,7 @@ s32 e1000e_phy_reset_dsp(struct e1000_hw *hw)
+  *  @offset: register offset to be read
+  *  @data: pointer to the read data
+  *
+- *  Reads the MDI control regsiter in the PHY at offset and stores the
++ *  Reads the MDI control register in the PHY at offset and stores the
+  *  information read to data.
+  **/
+ static s32 e1000_read_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 *data)
+@@ -1172,7 +1172,7 @@ s32 e1000e_set_d3_lplu_state(struct e1000_hw *hw, bool active)
+ }
+ 
+ /**
+- *  e1000e_check_downshift - Checks whether a downshift in speed occured
++ *  e1000e_check_downshift - Checks whether a downshift in speed occurred
+  *  @hw: pointer to the HW structure
+  *
+  *  Success returns 0, Failure returns 1
+@@ -1388,8 +1388,8 @@ s32 e1000e_get_cable_length_m88(struct e1000_hw *hw)
+  *
+  *  The automatic gain control (agc) normalizes the amplitude of the
+  *  received signal, adjusting for the attenuation produced by the
+- *  cable.  By reading the AGC registers, which reperesent the
+- *  cobination of course and fine gain value, the value can be put
++ *  cable.  By reading the AGC registers, which represent the
++ *  combination of coarse and fine gain value, the value can be put
+  *  into a lookup table to obtain the approximate cable length
+  *  for each channel.
+  **/
+@@ -1619,7 +1619,7 @@ s32 e1000e_phy_sw_reset(struct e1000_hw *hw)
+  *  Verify the reset block is not blocking us from resetting.  Acquire
+  *  semaphore (if necessary) and read/set/write the device control reset
+  *  bit in the PHY.  Wait the appropriate delay time for the device to
+- *  reset and relase the semaphore (if necessary).
++ *  reset and release the semaphore (if necessary).
+  **/
+ s32 e1000e_phy_hw_reset_generic(struct e1000_hw *hw)
+ {
+diff --git a/drivers/net/ehea/ehea.h b/drivers/net/ehea/ehea.h
+index 88fb53e..7c4ead3 100644
+--- a/drivers/net/ehea/ehea.h
++++ b/drivers/net/ehea/ehea.h
+@@ -40,7 +40,7 @@
+ #include <asm/io.h>
+ 
+ #define DRV_NAME	"ehea"
+-#define DRV_VERSION	"EHEA_0083"
++#define DRV_VERSION	"EHEA_0087"
+ 
+ /* eHEA capability flags */
+ #define DLPAR_PORT_ADD_REM 1
+@@ -386,6 +386,13 @@ struct ehea_port_res {
+ 
+ 
+ #define EHEA_MAX_PORTS 16
++
++#define EHEA_NUM_PORTRES_FW_HANDLES    6  /* QP handle, SendCQ handle,
++					     RecvCQ handle, EQ handle,
++					     SendMR handle, RecvMR handle */
++#define EHEA_NUM_PORT_FW_HANDLES       1  /* EQ handle */
++#define EHEA_NUM_ADAPTER_FW_HANDLES    2  /* MR handle, NEQ handle */
++
+ struct ehea_adapter {
+ 	u64 handle;
+ 	struct of_device *ofdev;
+@@ -405,6 +412,31 @@ struct ehea_mc_list {
+ 	u64 macaddr;
+ };
+ 
++/* kdump support */
++struct ehea_fw_handle_entry {
++	u64 adh;               /* Adapter Handle */
++	u64 fwh;               /* Firmware Handle */
++};
++
++struct ehea_fw_handle_array {
++	struct ehea_fw_handle_entry *arr;
++	int num_entries;
++	struct semaphore lock;
++};
++
++struct ehea_bcmc_reg_entry {
++	u64 adh;               /* Adapter Handle */
++	u32 port_id;           /* Logical Port Id */
++	u8 reg_type;           /* Registration Type */
++	u64 macaddr;
++};
++
++struct ehea_bcmc_reg_array {
++	struct ehea_bcmc_reg_entry *arr;
++	int num_entries;
++	struct semaphore lock;
++};
++
+ #define EHEA_PORT_UP 1
+ #define EHEA_PORT_DOWN 0
+ #define EHEA_PHY_LINK_UP 1
+diff --git a/drivers/net/ehea/ehea_main.c b/drivers/net/ehea/ehea_main.c
+index c051c7e..21af674 100644
+--- a/drivers/net/ehea/ehea_main.c
++++ b/drivers/net/ehea/ehea_main.c
+@@ -35,6 +35,7 @@
+ #include <linux/if_ether.h>
+ #include <linux/notifier.h>
+ #include <linux/reboot.h>
++#include <asm/kexec.h>
+ 
+ #include <net/ip.h>
+ 
+@@ -98,8 +99,10 @@ static int port_name_cnt;
+ static LIST_HEAD(adapter_list);
+ u64 ehea_driver_flags;
+ struct work_struct ehea_rereg_mr_task;
+-
+ struct semaphore dlpar_mem_lock;
++struct ehea_fw_handle_array ehea_fw_handles;
++struct ehea_bcmc_reg_array ehea_bcmc_regs;
++
+ 
+ static int __devinit ehea_probe_adapter(struct of_device *dev,
+ 					const struct of_device_id *id);
+@@ -132,6 +135,160 @@ void ehea_dump(void *adr, int len, char *msg)
+ 	}
+ }
+ 
++static void ehea_update_firmware_handles(void)
++{
++	struct ehea_fw_handle_entry *arr = NULL;
++	struct ehea_adapter *adapter;
++	int num_adapters = 0;
++	int num_ports = 0;
++	int num_portres = 0;
++	int i = 0;
++	int num_fw_handles, k, l;
++
++	/* Determine number of handles */
++	list_for_each_entry(adapter, &adapter_list, list) {
++		num_adapters++;
++
++		for (k = 0; k < EHEA_MAX_PORTS; k++) {
++			struct ehea_port *port = adapter->port[k];
++
++			if (!port || (port->state != EHEA_PORT_UP))
++				continue;
++
++			num_ports++;
++			num_portres += port->num_def_qps + port->num_add_tx_qps;
++		}
++	}
++
++	num_fw_handles = num_adapters * EHEA_NUM_ADAPTER_FW_HANDLES +
++			 num_ports * EHEA_NUM_PORT_FW_HANDLES +
++			 num_portres * EHEA_NUM_PORTRES_FW_HANDLES;
++
++	if (num_fw_handles) {
++		arr = kzalloc(num_fw_handles * sizeof(*arr), GFP_KERNEL);
++		if (!arr)
++			return;  /* Keep the existing array */
++	} else
++		goto out_update;
++
++	list_for_each_entry(adapter, &adapter_list, list) {
++		for (k = 0; k < EHEA_MAX_PORTS; k++) {
++			struct ehea_port *port = adapter->port[k];
++
++			if (!port || (port->state != EHEA_PORT_UP))
++				continue;
++
++			for (l = 0;
++			     l < port->num_def_qps + port->num_add_tx_qps;
++			     l++) {
++				struct ehea_port_res *pr = &port->port_res[l];
++
++				arr[i].adh = adapter->handle;
++				arr[i++].fwh = pr->qp->fw_handle;
++				arr[i].adh = adapter->handle;
++				arr[i++].fwh = pr->send_cq->fw_handle;
++				arr[i].adh = adapter->handle;
++				arr[i++].fwh = pr->recv_cq->fw_handle;
++				arr[i].adh = adapter->handle;
++				arr[i++].fwh = pr->eq->fw_handle;
++				arr[i].adh = adapter->handle;
++				arr[i++].fwh = pr->send_mr.handle;
++				arr[i].adh = adapter->handle;
++				arr[i++].fwh = pr->recv_mr.handle;
++			}
++			arr[i].adh = adapter->handle;
++			arr[i++].fwh = port->qp_eq->fw_handle;
++		}
++
++		arr[i].adh = adapter->handle;
++		arr[i++].fwh = adapter->neq->fw_handle;
++
++		if (adapter->mr.handle) {
++			arr[i].adh = adapter->handle;
++			arr[i++].fwh = adapter->mr.handle;
++		}
++	}
++
++out_update:
++	kfree(ehea_fw_handles.arr);
++	ehea_fw_handles.arr = arr;
++	ehea_fw_handles.num_entries = i;
++}
++
++static void ehea_update_bcmc_registrations(void)
++{
++	struct ehea_bcmc_reg_entry *arr = NULL;
++	struct ehea_adapter *adapter;
++	struct ehea_mc_list *mc_entry;
++	int num_registrations = 0;
++	int i = 0;
++	int k;
++
++	/* Determine number of registrations */
++	list_for_each_entry(adapter, &adapter_list, list)
++		for (k = 0; k < EHEA_MAX_PORTS; k++) {
++			struct ehea_port *port = adapter->port[k];
++
++			if (!port || (port->state != EHEA_PORT_UP))
++				continue;
++
++			num_registrations += 2;	/* Broadcast registrations */
++
++			list_for_each_entry(mc_entry, &port->mc_list->list, list)
++				num_registrations += 2;
++		}
++
++	if (num_registrations) {
++		arr = kzalloc(num_registrations * sizeof(*arr), GFP_KERNEL);
++		if (!arr)
++			return;  /* Keep the existing array */
++	} else
++		goto out_update;
++
++	list_for_each_entry(adapter, &adapter_list, list) {
++		for (k = 0; k < EHEA_MAX_PORTS; k++) {
++			struct ehea_port *port = adapter->port[k];
++
++			if (!port || (port->state != EHEA_PORT_UP))
++				continue;
++
++			arr[i].adh = adapter->handle;
++			arr[i].port_id = port->logical_port_id;
++			arr[i].reg_type = EHEA_BCMC_BROADCAST |
++					  EHEA_BCMC_UNTAGGED;
++			arr[i++].macaddr = port->mac_addr;
++
++			arr[i].adh = adapter->handle;
++			arr[i].port_id = port->logical_port_id;
++			arr[i].reg_type = EHEA_BCMC_BROADCAST |
++					  EHEA_BCMC_VLANID_ALL;
++			arr[i++].macaddr = port->mac_addr;
++
++			list_for_each_entry(mc_entry,
++					    &port->mc_list->list, list) {
++				arr[i].adh = adapter->handle;
++				arr[i].port_id = port->logical_port_id;
++				arr[i].reg_type = EHEA_BCMC_SCOPE_ALL |
++						  EHEA_BCMC_MULTICAST |
++						  EHEA_BCMC_UNTAGGED;
++				arr[i++].macaddr = mc_entry->macaddr;
++
++				arr[i].adh = adapter->handle;
++				arr[i].port_id = port->logical_port_id;
++				arr[i].reg_type = EHEA_BCMC_SCOPE_ALL |
++						  EHEA_BCMC_MULTICAST |
++						  EHEA_BCMC_VLANID_ALL;
++				arr[i++].macaddr = mc_entry->macaddr;
++			}
++		}
++	}
++
++out_update:
++	kfree(ehea_bcmc_regs.arr);
++	ehea_bcmc_regs.arr = arr;
++	ehea_bcmc_regs.num_entries = i;
++}
++
+ static struct net_device_stats *ehea_get_stats(struct net_device *dev)
+ {
+ 	struct ehea_port *port = netdev_priv(dev);
+@@ -1601,19 +1758,25 @@ static int ehea_set_mac_addr(struct net_device *dev, void *sa)
+ 
+ 	memcpy(dev->dev_addr, mac_addr->sa_data, dev->addr_len);
+ 
++	down(&ehea_bcmc_regs.lock);
++
+ 	/* Deregister old MAC in pHYP */
+ 	ret = ehea_broadcast_reg_helper(port, H_DEREG_BCMC);
+ 	if (ret)
+-		goto out_free;
++		goto out_upregs;
+ 
+ 	port->mac_addr = cb0->port_mac_addr << 16;
+ 
+ 	/* Register new MAC in pHYP */
+ 	ret = ehea_broadcast_reg_helper(port, H_REG_BCMC);
+ 	if (ret)
+-		goto out_free;
++		goto out_upregs;
+ 
+ 	ret = 0;
++
++out_upregs:
++	ehea_update_bcmc_registrations();
++	up(&ehea_bcmc_regs.lock);
+ out_free:
+ 	kfree(cb0);
+ out:
+@@ -1775,9 +1938,11 @@ static void ehea_set_multicast_list(struct net_device *dev)
+ 	}
+ 	ehea_promiscuous(dev, 0);
+ 
++	down(&ehea_bcmc_regs.lock);
++
+ 	if (dev->flags & IFF_ALLMULTI) {
+ 		ehea_allmulti(dev, 1);
+-		return;
++		goto out;
+ 	}
+ 	ehea_allmulti(dev, 0);
+ 
+@@ -1803,6 +1968,8 @@ static void ehea_set_multicast_list(struct net_device *dev)
+ 
+ 	}
+ out:
++	ehea_update_bcmc_registrations();
++	up(&ehea_bcmc_regs.lock);
+ 	return;
+ }
+ 
+@@ -2285,6 +2452,8 @@ static int ehea_up(struct net_device *dev)
+ 	if (port->state == EHEA_PORT_UP)
+ 		return 0;
+ 
++	down(&ehea_fw_handles.lock);
++
+ 	ret = ehea_port_res_setup(port, port->num_def_qps,
+ 				  port->num_add_tx_qps);
+ 	if (ret) {
+@@ -2321,8 +2490,17 @@ static int ehea_up(struct net_device *dev)
+ 		}
+ 	}
+ 
+-	ret = 0;
++	down(&ehea_bcmc_regs.lock);
++
++	ret = ehea_broadcast_reg_helper(port, H_REG_BCMC);
++	if (ret) {
++		ret = -EIO;
++		goto out_free_irqs;
++	}
++
+ 	port->state = EHEA_PORT_UP;
++
++	ret = 0;
+ 	goto out;
+ 
+ out_free_irqs:
+@@ -2334,6 +2512,12 @@ out:
+ 	if (ret)
+ 		ehea_info("Failed starting %s. ret=%i", dev->name, ret);
+ 
++	ehea_update_bcmc_registrations();
++	up(&ehea_bcmc_regs.lock);
++
++	ehea_update_firmware_handles();
++	up(&ehea_fw_handles.lock);
++
+ 	return ret;
+ }
+ 
+@@ -2382,16 +2566,27 @@ static int ehea_down(struct net_device *dev)
+ 	if (port->state == EHEA_PORT_DOWN)
+ 		return 0;
+ 
++	down(&ehea_bcmc_regs.lock);
+ 	ehea_drop_multicast_list(dev);
++	ehea_broadcast_reg_helper(port, H_DEREG_BCMC);
++
+ 	ehea_free_interrupts(dev);
+ 
++	down(&ehea_fw_handles.lock);
++
+ 	port->state = EHEA_PORT_DOWN;
+ 
++	ehea_update_bcmc_registrations();
++	up(&ehea_bcmc_regs.lock);
++
+ 	ret = ehea_clean_all_portres(port);
+ 	if (ret)
+ 		ehea_info("Failed freeing resources for %s. ret=%i",
+ 			  dev->name, ret);
+ 
++	ehea_update_firmware_handles();
++	up(&ehea_fw_handles.lock);
++
+ 	return ret;
+ }
+ 
+@@ -2920,19 +3115,12 @@ struct ehea_port *ehea_setup_single_port(struct ehea_adapter *adapter,
+ 	dev->watchdog_timeo = EHEA_WATCH_DOG_TIMEOUT;
+ 
+ 	INIT_WORK(&port->reset_task, ehea_reset_port);
+-
+-	ret = ehea_broadcast_reg_helper(port, H_REG_BCMC);
+-	if (ret) {
+-		ret = -EIO;
+-		goto out_unreg_port;
+-	}
+-
+ 	ehea_set_ethtool_ops(dev);
+ 
+ 	ret = register_netdev(dev);
+ 	if (ret) {
+ 		ehea_error("register_netdev failed. ret=%d", ret);
+-		goto out_dereg_bc;
++		goto out_unreg_port;
+ 	}
+ 
+ 	port->lro_max_aggr = lro_max_aggr;
+@@ -2949,9 +3137,6 @@ struct ehea_port *ehea_setup_single_port(struct ehea_adapter *adapter,
+ 
+ 	return port;
+ 
+-out_dereg_bc:
+-	ehea_broadcast_reg_helper(port, H_DEREG_BCMC);
+-
+ out_unreg_port:
+ 	ehea_unregister_port(port);
+ 
+@@ -2971,7 +3156,6 @@ static void ehea_shutdown_single_port(struct ehea_port *port)
+ {
+ 	unregister_netdev(port->netdev);
+ 	ehea_unregister_port(port);
+-	ehea_broadcast_reg_helper(port, H_DEREG_BCMC);
+ 	kfree(port->mc_list);
+ 	free_netdev(port->netdev);
+ 	port->adapter->active_ports--;
+@@ -3014,7 +3198,6 @@ static int ehea_setup_ports(struct ehea_adapter *adapter)
+ 
+ 		i++;
+ 	};
+-
+ 	return 0;
+ }
+ 
+@@ -3159,6 +3342,7 @@ static int __devinit ehea_probe_adapter(struct of_device *dev,
+ 		ehea_error("Invalid ibmebus device probed");
+ 		return -EINVAL;
+ 	}
++	down(&ehea_fw_handles.lock);
+ 
+ 	adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
+ 	if (!adapter) {
+@@ -3239,7 +3423,10 @@ out_kill_eq:
+ 
+ out_free_ad:
+ 	kfree(adapter);
++
+ out:
++	ehea_update_firmware_handles();
++	up(&ehea_fw_handles.lock);
+ 	return ret;
+ }
+ 
+@@ -3258,18 +3445,41 @@ static int __devexit ehea_remove(struct of_device *dev)
+ 
+ 	flush_scheduled_work();
+ 
++	down(&ehea_fw_handles.lock);
++
+ 	ibmebus_free_irq(adapter->neq->attr.ist1, adapter);
+ 	tasklet_kill(&adapter->neq_tasklet);
+ 
+ 	ehea_destroy_eq(adapter->neq);
+ 	ehea_remove_adapter_mr(adapter);
+ 	list_del(&adapter->list);
+-
+ 	kfree(adapter);
+ 
++	ehea_update_firmware_handles();
++	up(&ehea_fw_handles.lock);
++
+ 	return 0;
+ }
+ 
++void ehea_crash_handler(void)
++{
++	int i;
++
++	if (ehea_fw_handles.arr)
++		for (i = 0; i < ehea_fw_handles.num_entries; i++)
++			ehea_h_free_resource(ehea_fw_handles.arr[i].adh,
++					     ehea_fw_handles.arr[i].fwh,
++					     FORCE_FREE);
++
++	if (ehea_bcmc_regs.arr)
++		for (i = 0; i < ehea_bcmc_regs.num_entries; i++)
++			ehea_h_reg_dereg_bcmc(ehea_bcmc_regs.arr[i].adh,
++					      ehea_bcmc_regs.arr[i].port_id,
++					      ehea_bcmc_regs.arr[i].reg_type,
++					      ehea_bcmc_regs.arr[i].macaddr,
++					      0, H_DEREG_BCMC);
++}
++
+ static int ehea_reboot_notifier(struct notifier_block *nb,
+ 				unsigned long action, void *unused)
+ {
+@@ -3330,7 +3540,12 @@ int __init ehea_module_init(void)
+ 
+ 
+ 	INIT_WORK(&ehea_rereg_mr_task, ehea_rereg_mrs);
++	memset(&ehea_fw_handles, 0, sizeof(ehea_fw_handles));
++	memset(&ehea_bcmc_regs, 0, sizeof(ehea_bcmc_regs));
++
+ 	sema_init(&dlpar_mem_lock, 1);
++	sema_init(&ehea_fw_handles.lock, 1);
++	sema_init(&ehea_bcmc_regs.lock, 1);
+ 
+ 	ret = check_module_parm();
+ 	if (ret)
+@@ -3340,12 +3555,18 @@ int __init ehea_module_init(void)
+ 	if (ret)
+ 		goto out;
+ 
+-	register_reboot_notifier(&ehea_reboot_nb);
++	ret = register_reboot_notifier(&ehea_reboot_nb);
++	if (ret)
++		ehea_info("failed registering reboot notifier");
++
++	ret = crash_shutdown_register(&ehea_crash_handler);
++	if (ret)
++		ehea_info("failed registering crash handler");
+ 
+ 	ret = ibmebus_register_driver(&ehea_driver);
+ 	if (ret) {
+ 		ehea_error("failed registering eHEA device driver on ebus");
+-		goto out;
++		goto out2;
+ 	}
+ 
+ 	ret = driver_create_file(&ehea_driver.driver,
+@@ -3353,21 +3574,33 @@ int __init ehea_module_init(void)
+ 	if (ret) {
+ 		ehea_error("failed to register capabilities attribute, ret=%d",
+ 			   ret);
+-		unregister_reboot_notifier(&ehea_reboot_nb);
+-		ibmebus_unregister_driver(&ehea_driver);
+-		goto out;
++		goto out3;
+ 	}
+ 
++	return ret;
++
++out3:
++	ibmebus_unregister_driver(&ehea_driver);
++out2:
++	unregister_reboot_notifier(&ehea_reboot_nb);
++	crash_shutdown_unregister(&ehea_crash_handler);
+ out:
+ 	return ret;
+ }
+ 
+ static void __exit ehea_module_exit(void)
+ {
++	int ret;
++
+ 	flush_scheduled_work();
+ 	driver_remove_file(&ehea_driver.driver, &driver_attr_capabilities);
+ 	ibmebus_unregister_driver(&ehea_driver);
+ 	unregister_reboot_notifier(&ehea_reboot_nb);
++	ret = crash_shutdown_unregister(&ehea_crash_handler);
++	if (ret)
++		ehea_info("failed unregistering crash handler");
++	kfree(ehea_fw_handles.arr);
++	kfree(ehea_bcmc_regs.arr);
+ 	ehea_destroy_busmap();
+ }
+ 
+diff --git a/drivers/net/fs_enet/fs_enet-main.c b/drivers/net/fs_enet/fs_enet-main.c
+index 42d94ed..af869cf 100644
+--- a/drivers/net/fs_enet/fs_enet-main.c
++++ b/drivers/net/fs_enet/fs_enet-main.c
+@@ -946,16 +946,11 @@ static int fs_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+ {
+ 	struct fs_enet_private *fep = netdev_priv(dev);
+ 	struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&rq->ifr_data;
+-	unsigned long flags;
+-	int rc;
+ 
+ 	if (!netif_running(dev))
+ 		return -EINVAL;
+ 
+-	spin_lock_irqsave(&fep->lock, flags);
+-	rc = phy_mii_ioctl(fep->phydev, mii, cmd);
+-	spin_unlock_irqrestore(&fep->lock, flags);
+-	return rc;
++	return phy_mii_ioctl(fep->phydev, mii, cmd);
+ }
+ 
+ extern int fs_mii_connect(struct net_device *dev);
+diff --git a/drivers/net/gianfar.c b/drivers/net/gianfar.c
+index 4244fc2..718cf77 100644
+--- a/drivers/net/gianfar.c
++++ b/drivers/net/gianfar.c
+@@ -605,7 +605,7 @@ void stop_gfar(struct net_device *dev)
+ 
+ 	free_skb_resources(priv);
+ 
+-	dma_free_coherent(NULL,
++	dma_free_coherent(&dev->dev,
+ 			sizeof(struct txbd8)*priv->tx_ring_size
+ 			+ sizeof(struct rxbd8)*priv->rx_ring_size,
+ 			priv->tx_bd_base,
+@@ -626,7 +626,7 @@ static void free_skb_resources(struct gfar_private *priv)
+ 	for (i = 0; i < priv->tx_ring_size; i++) {
+ 
+ 		if (priv->tx_skbuff[i]) {
+-			dma_unmap_single(NULL, txbdp->bufPtr,
++			dma_unmap_single(&priv->dev->dev, txbdp->bufPtr,
+ 					txbdp->length,
+ 					DMA_TO_DEVICE);
+ 			dev_kfree_skb_any(priv->tx_skbuff[i]);
+@@ -643,7 +643,7 @@ static void free_skb_resources(struct gfar_private *priv)
+ 	if(priv->rx_skbuff != NULL) {
+ 		for (i = 0; i < priv->rx_ring_size; i++) {
+ 			if (priv->rx_skbuff[i]) {
+-				dma_unmap_single(NULL, rxbdp->bufPtr,
++				dma_unmap_single(&priv->dev->dev, rxbdp->bufPtr,
+ 						priv->rx_buffer_size,
+ 						DMA_FROM_DEVICE);
+ 
+@@ -708,7 +708,7 @@ int startup_gfar(struct net_device *dev)
+ 	gfar_write(&regs->imask, IMASK_INIT_CLEAR);
+ 
+ 	/* Allocate memory for the buffer descriptors */
+-	vaddr = (unsigned long) dma_alloc_coherent(NULL,
++	vaddr = (unsigned long) dma_alloc_coherent(&dev->dev,
+ 			sizeof (struct txbd8) * priv->tx_ring_size +
+ 			sizeof (struct rxbd8) * priv->rx_ring_size,
+ 			&addr, GFP_KERNEL);
+@@ -919,7 +919,7 @@ err_irq_fail:
+ rx_skb_fail:
+ 	free_skb_resources(priv);
+ tx_skb_fail:
+-	dma_free_coherent(NULL,
++	dma_free_coherent(&dev->dev,
+ 			sizeof(struct txbd8)*priv->tx_ring_size
+ 			+ sizeof(struct rxbd8)*priv->rx_ring_size,
+ 			priv->tx_bd_base,
+@@ -1053,7 +1053,7 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	/* Set buffer length and pointer */
+ 	txbdp->length = skb->len;
+-	txbdp->bufPtr = dma_map_single(NULL, skb->data,
++	txbdp->bufPtr = dma_map_single(&dev->dev, skb->data,
+ 			skb->len, DMA_TO_DEVICE);
+ 
+ 	/* Save the skb pointer so we can free it later */
+@@ -1332,7 +1332,7 @@ struct sk_buff * gfar_new_skb(struct net_device *dev, struct rxbd8 *bdp)
+ 	 */
+ 	skb_reserve(skb, alignamount);
+ 
+-	bdp->bufPtr = dma_map_single(NULL, skb->data,
++	bdp->bufPtr = dma_map_single(&dev->dev, skb->data,
+ 			priv->rx_buffer_size, DMA_FROM_DEVICE);
+ 
+ 	bdp->length = 0;
+diff --git a/drivers/net/igb/igb_main.c b/drivers/net/igb/igb_main.c
+index bff280e..6a1f230 100644
+--- a/drivers/net/igb/igb_main.c
++++ b/drivers/net/igb/igb_main.c
+@@ -439,7 +439,7 @@ static int igb_request_irq(struct igb_adapter *adapter)
+ 		err = igb_request_msix(adapter);
+ 		if (!err) {
+ 			/* enable IAM, auto-mask,
+-			 * DO NOT USE EIAME or IAME in legacy mode */
++			 * DO NOT USE EIAM or IAM in legacy mode */
+ 			wr32(E1000_IAM, IMS_ENABLE_MASK);
+ 			goto request_done;
+ 		}
+@@ -465,14 +465,9 @@ static int igb_request_irq(struct igb_adapter *adapter)
+ 	err = request_irq(adapter->pdev->irq, &igb_intr, IRQF_SHARED,
+ 			  netdev->name, netdev);
+ 
+-	if (err) {
++	if (err)
+ 		dev_err(&adapter->pdev->dev, "Error %d getting interrupt\n",
+ 			err);
+-		goto request_done;
+-	}
+-
+-	/* enable IAM, auto-mask */
+-	wr32(E1000_IAM, IMS_ENABLE_MASK);
+ 
+ request_done:
+ 	return err;
+@@ -821,7 +816,8 @@ void igb_reset(struct igb_adapter *adapter)
+ 	wr32(E1000_VET, ETHERNET_IEEE_VLAN_TYPE);
+ 
+ 	igb_reset_adaptive(&adapter->hw);
+-	adapter->hw.phy.ops.get_phy_info(&adapter->hw);
++	if (adapter->hw.phy.ops.get_phy_info)
++		adapter->hw.phy.ops.get_phy_info(&adapter->hw);
+ }
+ 
+ /**
+@@ -2057,7 +2053,8 @@ static void igb_set_multi(struct net_device *netdev)
+ static void igb_update_phy_info(unsigned long data)
+ {
+ 	struct igb_adapter *adapter = (struct igb_adapter *) data;
+-	adapter->hw.phy.ops.get_phy_info(&adapter->hw);
++	if (adapter->hw.phy.ops.get_phy_info)
++		adapter->hw.phy.ops.get_phy_info(&adapter->hw);
+ }
+ 
+ /**
+diff --git a/drivers/net/ixgb/ixgb_ethtool.c b/drivers/net/ixgb/ixgb_ethtool.c
+index 53a9fd0..75f3a68 100644
+--- a/drivers/net/ixgb/ixgb_ethtool.c
++++ b/drivers/net/ixgb/ixgb_ethtool.c
+@@ -67,6 +67,7 @@ static struct ixgb_stats ixgb_gstrings_stats[] = {
+ 	{"rx_over_errors", IXGB_STAT(net_stats.rx_over_errors)},
+ 	{"rx_crc_errors", IXGB_STAT(net_stats.rx_crc_errors)},
+ 	{"rx_frame_errors", IXGB_STAT(net_stats.rx_frame_errors)},
++	{"rx_no_buffer_count", IXGB_STAT(stats.rnbc)},
+ 	{"rx_fifo_errors", IXGB_STAT(net_stats.rx_fifo_errors)},
+ 	{"rx_missed_errors", IXGB_STAT(net_stats.rx_missed_errors)},
+ 	{"tx_aborted_errors", IXGB_STAT(net_stats.tx_aborted_errors)},
+diff --git a/drivers/net/macb.c b/drivers/net/macb.c
+index 81bf005..1d210ed 100644
+--- a/drivers/net/macb.c
++++ b/drivers/net/macb.c
+@@ -148,7 +148,7 @@ static void macb_handle_link_change(struct net_device *dev)
+ 
+ 			if (phydev->duplex)
+ 				reg |= MACB_BIT(FD);
+-			if (phydev->speed)
++			if (phydev->speed == SPEED_100)
+ 				reg |= MACB_BIT(SPD);
+ 
+ 			macb_writel(bp, NCFGR, reg);
+diff --git a/drivers/net/pcmcia/pcnet_cs.c b/drivers/net/pcmcia/pcnet_cs.c
+index 6323988..fd8158a 100644
+--- a/drivers/net/pcmcia/pcnet_cs.c
++++ b/drivers/net/pcmcia/pcnet_cs.c
+@@ -590,6 +590,13 @@ static int pcnet_config(struct pcmcia_device *link)
+ 	dev->if_port = 0;
+     }
+ 
++    if ((link->conf.ConfigBase == 0x03c0)
+	&& (link->manf_id == 0x149) && (link->card_id == 0xc1ab)) {
++	printk(KERN_INFO "pcnet_cs: this is an AX88190 card!\n");
++	printk(KERN_INFO "pcnet_cs: use axnet_cs instead.\n");
++	goto failed;
++    }
++
+     local_hw_info = get_hwinfo(link);
+     if (local_hw_info == NULL)
+ 	local_hw_info = get_prom(link);
+@@ -1567,12 +1574,11 @@ static struct pcmcia_device_id pcnet_ids[] = {
+ 	PCMCIA_DEVICE_MANF_CARD(0x0104, 0x0145),
+ 	PCMCIA_DEVICE_MANF_CARD(0x0149, 0x0230),
+ 	PCMCIA_DEVICE_MANF_CARD(0x0149, 0x4530),
+-/*	PCMCIA_DEVICE_MANF_CARD(0x0149, 0xc1ab), conflict with axnet_cs */
++	PCMCIA_DEVICE_MANF_CARD(0x0149, 0xc1ab),
+ 	PCMCIA_DEVICE_MANF_CARD(0x0186, 0x0110),
+ 	PCMCIA_DEVICE_MANF_CARD(0x01bf, 0x2328),
+ 	PCMCIA_DEVICE_MANF_CARD(0x01bf, 0x8041),
+ 	PCMCIA_DEVICE_MANF_CARD(0x0213, 0x2452),
+-/*	PCMCIA_DEVICE_MANF_CARD(0x021b, 0x0202), conflict with axnet_cs */
+ 	PCMCIA_DEVICE_MANF_CARD(0x026f, 0x0300),
+ 	PCMCIA_DEVICE_MANF_CARD(0x026f, 0x0307),
+ 	PCMCIA_DEVICE_MANF_CARD(0x026f, 0x030a),
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 6e9f619..963630c 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -49,13 +49,13 @@ int mdiobus_register(struct mii_bus *bus)
+ 	int i;
+ 	int err = 0;
+ 
+-	mutex_init(&bus->mdio_lock);
+-
+ 	if (NULL == bus || NULL == bus->name ||
+ 			NULL == bus->read ||
+ 			NULL == bus->write)
+ 		return -EINVAL;
+ 
++	mutex_init(&bus->mdio_lock);
++
+ 	if (bus->reset)
+ 		bus->reset(bus);
+ 
+diff --git a/drivers/net/ps3_gelic_wireless.c b/drivers/net/ps3_gelic_wireless.c
+index 750d2a9..daf5aba 100644
+--- a/drivers/net/ps3_gelic_wireless.c
++++ b/drivers/net/ps3_gelic_wireless.c
+@@ -2690,6 +2690,7 @@ int gelic_wl_driver_probe(struct gelic_card *card)
+ 		return -ENOMEM;
+ 
+ 	/* setup net_device structure */
++	SET_NETDEV_DEV(netdev, &card->dev->core);
+ 	gelic_wl_setup_netdev_ops(netdev);
+ 
+ 	/* setup some of net_device and register it */
+diff --git a/drivers/net/sis190.c b/drivers/net/sis190.c
+index 202fdf3..20745fd 100644
+--- a/drivers/net/sis190.c
++++ b/drivers/net/sis190.c
+@@ -1633,13 +1633,18 @@ static inline void sis190_init_rxfilter(struct net_device *dev)
+ static int __devinit sis190_get_mac_addr(struct pci_dev *pdev, 
+ 					 struct net_device *dev)
+ {
+-	u8 from;
++	int rc;
++
++	rc = sis190_get_mac_addr_from_eeprom(pdev, dev);
++	if (rc < 0) {
++		u8 reg;
+ 
+-	pci_read_config_byte(pdev, 0x73, &from);
++		pci_read_config_byte(pdev, 0x73, &reg);
+ 
+-	return (from & 0x00000001) ?
+-		sis190_get_mac_addr_from_apc(pdev, dev) :
+-		sis190_get_mac_addr_from_eeprom(pdev, dev);
++		if (reg & 0x00000001)
++			rc = sis190_get_mac_addr_from_apc(pdev, dev);
++	}
++	return rc;
+ }
+ 
+ static void sis190_set_speed_auto(struct net_device *dev)
+diff --git a/drivers/net/sky2.c b/drivers/net/sky2.c
+index 9a62959..54c6626 100644
+--- a/drivers/net/sky2.c
++++ b/drivers/net/sky2.c
+@@ -572,8 +572,9 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port)
+ 	default:
+ 		/* set Tx LED (LED_TX) to blink mode on Rx OR Tx activity */
+ 		ledctrl |= PHY_M_LED_BLINK_RT(BLINK_84MS) | PHY_M_LEDC_TX_CTRL;
++
+ 		/* turn off the Rx LED (LED_RX) */
+-		ledover &= ~PHY_M_LED_MO_RX;
++		ledover |= PHY_M_LED_MO_RX(MO_LED_OFF);
+ 	}
+ 
+ 	if (hw->chip_id == CHIP_ID_YUKON_EC_U &&
+@@ -602,7 +603,7 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port)
+ 
+ 		if (sky2->autoneg == AUTONEG_DISABLE || sky2->speed == SPEED_100) {
+ 			/* turn on 100 Mbps LED (LED_LINK100) */
+-			ledover |= PHY_M_LED_MO_100;
++			ledover |= PHY_M_LED_MO_100(MO_LED_ON);
+ 		}
+ 
+ 		if (ledover)
+@@ -3322,82 +3323,80 @@ static void sky2_set_multicast(struct net_device *dev)
+ /* Can have one global because blinking is controlled by
+  * ethtool and that is always under RTNL mutex
+  */
+-static void sky2_led(struct sky2_hw *hw, unsigned port, int on)
++static void sky2_led(struct sky2_port *sky2, enum led_mode mode)
+ {
+-	u16 pg;
++	struct sky2_hw *hw = sky2->hw;
++	unsigned port = sky2->port;
+ 
+-	switch (hw->chip_id) {
+-	case CHIP_ID_YUKON_XL:
++	spin_lock_bh(&sky2->phy_lock);
++	if (hw->chip_id == CHIP_ID_YUKON_EC_U ||
++	    hw->chip_id == CHIP_ID_YUKON_EX ||
++	    hw->chip_id == CHIP_ID_YUKON_SUPR) {
++		u16 pg;
+ 		pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR);
+ 		gm_phy_write(hw, port, PHY_MARV_EXT_ADR, 3);
+-		gm_phy_write(hw, port, PHY_MARV_PHY_CTRL,
+-			     on ? (PHY_M_LEDC_LOS_CTRL(1) |
+-				   PHY_M_LEDC_INIT_CTRL(7) |
+-				   PHY_M_LEDC_STA1_CTRL(7) |
+-				   PHY_M_LEDC_STA0_CTRL(7))
+-			     : 0);
+ 
+-		gm_phy_write(hw, port, PHY_MARV_EXT_ADR, pg);
+-		break;
++		switch (mode) {
++		case MO_LED_OFF:
++			gm_phy_write(hw, port, PHY_MARV_PHY_CTRL,
++				     PHY_M_LEDC_LOS_CTRL(8) |
++				     PHY_M_LEDC_INIT_CTRL(8) |
++				     PHY_M_LEDC_STA1_CTRL(8) |
++				     PHY_M_LEDC_STA0_CTRL(8));
++			break;
++		case MO_LED_ON:
++			gm_phy_write(hw, port, PHY_MARV_PHY_CTRL,
++				     PHY_M_LEDC_LOS_CTRL(9) |
++				     PHY_M_LEDC_INIT_CTRL(9) |
++				     PHY_M_LEDC_STA1_CTRL(9) |
++				     PHY_M_LEDC_STA0_CTRL(9));
++			break;
++		case MO_LED_BLINK:
++			gm_phy_write(hw, port, PHY_MARV_PHY_CTRL,
++				     PHY_M_LEDC_LOS_CTRL(0xa) |
++				     PHY_M_LEDC_INIT_CTRL(0xa) |
++				     PHY_M_LEDC_STA1_CTRL(0xa) |
++				     PHY_M_LEDC_STA0_CTRL(0xa));
++			break;
++		case MO_LED_NORM:
++			gm_phy_write(hw, port, PHY_MARV_PHY_CTRL,
++				     PHY_M_LEDC_LOS_CTRL(1) |
++				     PHY_M_LEDC_INIT_CTRL(8) |
++				     PHY_M_LEDC_STA1_CTRL(7) |
++				     PHY_M_LEDC_STA0_CTRL(7));
++		}
+ 
+-	default:
+-		gm_phy_write(hw, port, PHY_MARV_LED_CTRL, 0);
++		gm_phy_write(hw, port, PHY_MARV_EXT_ADR, pg);
++	} else
+ 		gm_phy_write(hw, port, PHY_MARV_LED_OVER, 
+-			     on ? PHY_M_LED_ALL : 0);
+-	}
++				     PHY_M_LED_MO_DUP(mode) |
++				     PHY_M_LED_MO_10(mode) |
++				     PHY_M_LED_MO_100(mode) |
++				     PHY_M_LED_MO_1000(mode) |
++				     PHY_M_LED_MO_RX(mode) |
++				     PHY_M_LED_MO_TX(mode));
++
++	spin_unlock_bh(&sky2->phy_lock);
+ }
+ 
+ /* blink LED's for finding board */
+ static int sky2_phys_id(struct net_device *dev, u32 data)
+ {
+ 	struct sky2_port *sky2 = netdev_priv(dev);
+-	struct sky2_hw *hw = sky2->hw;
+-	unsigned port = sky2->port;
+-	u16 ledctrl, ledover = 0;
+-	long ms;
+-	int interrupted;
+-	int onoff = 1;
++	unsigned int i;
+ 
+-	if (!data || data > (u32) (MAX_SCHEDULE_TIMEOUT / HZ))
+-		ms = jiffies_to_msecs(MAX_SCHEDULE_TIMEOUT);
+-	else
+-		ms = data * 1000;
+-
+-	/* save initial values */
+-	spin_lock_bh(&sky2->phy_lock);
+-	if (hw->chip_id == CHIP_ID_YUKON_XL) {
+-		u16 pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR);
+-		gm_phy_write(hw, port, PHY_MARV_EXT_ADR, 3);
+-		ledctrl = gm_phy_read(hw, port, PHY_MARV_PHY_CTRL);
+-		gm_phy_write(hw, port, PHY_MARV_EXT_ADR, pg);
+-	} else {
+-		ledctrl = gm_phy_read(hw, port, PHY_MARV_LED_CTRL);
+-		ledover = gm_phy_read(hw, port, PHY_MARV_LED_OVER);
+-	}
+-
+-	interrupted = 0;
+-	while (!interrupted && ms > 0) {
+-		sky2_led(hw, port, onoff);
+-		onoff = !onoff;
+-
+-		spin_unlock_bh(&sky2->phy_lock);
+-		interrupted = msleep_interruptible(250);
+-		spin_lock_bh(&sky2->phy_lock);
+-
+-		ms -= 250;
+-	}
++	if (data == 0)
++		data = UINT_MAX;
+ 
+-	/* resume regularly scheduled programming */
+-	if (hw->chip_id == CHIP_ID_YUKON_XL) {
+-		u16 pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR);
+-		gm_phy_write(hw, port, PHY_MARV_EXT_ADR, 3);
+-		gm_phy_write(hw, port, PHY_MARV_PHY_CTRL, ledctrl);
+-		gm_phy_write(hw, port, PHY_MARV_EXT_ADR, pg);
+-	} else {
+-		gm_phy_write(hw, port, PHY_MARV_LED_CTRL, ledctrl);
+-		gm_phy_write(hw, port, PHY_MARV_LED_OVER, ledover);
++	for (i = 0; i < data; i++) {
++		sky2_led(sky2, MO_LED_ON);
++		if (msleep_interruptible(500))
++			break;
++		sky2_led(sky2, MO_LED_OFF);
++		if (msleep_interruptible(500))
++			break;
+ 	}
+-	spin_unlock_bh(&sky2->phy_lock);
++	sky2_led(sky2, MO_LED_NORM);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/sky2.h b/drivers/net/sky2.h
+index 5ab5c1c..7bb3ba9 100644
+--- a/drivers/net/sky2.h
++++ b/drivers/net/sky2.h
+@@ -1318,18 +1318,21 @@ enum {
+ 	BLINK_670MS	= 4,/* 670 ms */
+ };
+ 
+-/**** PHY_MARV_LED_OVER    16 bit r/w LED control */
+-enum {
+-	PHY_M_LED_MO_DUP  = 3<<10,/* Bit 11..10:  Duplex */
+-	PHY_M_LED_MO_10	  = 3<<8, /* Bit  9.. 8:  Link 10 */
+-	PHY_M_LED_MO_100  = 3<<6, /* Bit  7.. 6:  Link 100 */
+-	PHY_M_LED_MO_1000 = 3<<4, /* Bit  5.. 4:  Link 1000 */
+-	PHY_M_LED_MO_RX	  = 3<<2, /* Bit  3.. 2:  Rx */
+-	PHY_M_LED_MO_TX	  = 3<<0, /* Bit  1.. 0:  Tx */
+-
+-	PHY_M_LED_ALL	  = PHY_M_LED_MO_DUP | PHY_M_LED_MO_10 
+-			    | PHY_M_LED_MO_100 | PHY_M_LED_MO_1000
+-			    | PHY_M_LED_MO_RX,
++/*****  PHY_MARV_LED_OVER	16 bit r/w	Manual LED Override Reg *****/
++#define PHY_M_LED_MO_SGMII(x)	((x)<<14)	/* Bit 15..14:  SGMII AN Timer */
++
++#define PHY_M_LED_MO_DUP(x)	((x)<<10)	/* Bit 11..10:  Duplex */
++#define PHY_M_LED_MO_10(x)	((x)<<8)	/* Bit  9.. 8:  Link 10 */
++#define PHY_M_LED_MO_100(x)	((x)<<6)	/* Bit  7.. 6:  Link 100 */
++#define PHY_M_LED_MO_1000(x)	((x)<<4)	/* Bit  5.. 4:  Link 1000 */
++#define PHY_M_LED_MO_RX(x)	((x)<<2)	/* Bit  3.. 2:  Rx */
++#define PHY_M_LED_MO_TX(x)	((x)<<0)	/* Bit  1.. 0:  Tx */
++
++enum led_mode {
++	MO_LED_NORM  = 0,
++	MO_LED_BLINK = 1,
++	MO_LED_OFF   = 2,
++	MO_LED_ON    = 3,
+ };
+ 
+ /*****  PHY_MARV_EXT_CTRL_2	16 bit r/w	Ext. PHY Specific Ctrl 2 *****/
+diff --git a/drivers/net/tlan.c b/drivers/net/tlan.c
+index 3af5b92..0166407 100644
+--- a/drivers/net/tlan.c
++++ b/drivers/net/tlan.c
+@@ -1400,7 +1400,7 @@ static void TLan_SetMulticastList( struct net_device *dev )
+ 	 *
+ 	 **************************************************************/
+ 
+-u32 TLan_HandleInvalid( struct net_device *dev, u16 host_int )
++static u32 TLan_HandleInvalid( struct net_device *dev, u16 host_int )
+ {
+ 	/* printk( "TLAN:  Invalid interrupt on %s.\n", dev->name ); */
+ 	return 0;
+@@ -1432,7 +1432,7 @@ u32 TLan_HandleInvalid( struct net_device *dev, u16 host_int )
+ 	 *
+ 	 **************************************************************/
+ 
+-u32 TLan_HandleTxEOF( struct net_device *dev, u16 host_int )
++static u32 TLan_HandleTxEOF( struct net_device *dev, u16 host_int )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	int		eoc = 0;
+@@ -1518,7 +1518,7 @@ u32 TLan_HandleTxEOF( struct net_device *dev, u16 host_int )
+ 	 *
+ 	 **************************************************************/
+ 
+-u32 TLan_HandleStatOverflow( struct net_device *dev, u16 host_int )
++static u32 TLan_HandleStatOverflow( struct net_device *dev, u16 host_int )
+ {
+ 	TLan_ReadAndClearStats( dev, TLAN_RECORD );
+ 
+@@ -1554,7 +1554,7 @@ u32 TLan_HandleStatOverflow( struct net_device *dev, u16 host_int )
+ 	 *
+ 	 **************************************************************/
+ 
+-u32 TLan_HandleRxEOF( struct net_device *dev, u16 host_int )
++static u32 TLan_HandleRxEOF( struct net_device *dev, u16 host_int )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	u32		ack = 0;
+@@ -1689,7 +1689,7 @@ u32 TLan_HandleRxEOF( struct net_device *dev, u16 host_int )
+ 	 *
+ 	 **************************************************************/
+ 
+-u32 TLan_HandleDummy( struct net_device *dev, u16 host_int )
++static u32 TLan_HandleDummy( struct net_device *dev, u16 host_int )
+ {
+ 	printk( "TLAN:  Test interrupt on %s.\n", dev->name );
+ 	return 1;
+@@ -1719,7 +1719,7 @@ u32 TLan_HandleDummy( struct net_device *dev, u16 host_int )
+ 	 *
+ 	 **************************************************************/
+ 
+-u32 TLan_HandleTxEOC( struct net_device *dev, u16 host_int )
++static u32 TLan_HandleTxEOC( struct net_device *dev, u16 host_int )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	TLanList		*head_list;
+@@ -1767,7 +1767,7 @@ u32 TLan_HandleTxEOC( struct net_device *dev, u16 host_int )
+ 	 *
+ 	 **************************************************************/
+ 
+-u32 TLan_HandleStatusCheck( struct net_device *dev, u16 host_int )
++static u32 TLan_HandleStatusCheck( struct net_device *dev, u16 host_int )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	u32		ack;
+@@ -1842,7 +1842,7 @@ u32 TLan_HandleStatusCheck( struct net_device *dev, u16 host_int )
+ 	 *
+ 	 **************************************************************/
+ 
+-u32 TLan_HandleRxEOC( struct net_device *dev, u16 host_int )
++static u32 TLan_HandleRxEOC( struct net_device *dev, u16 host_int )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	dma_addr_t	head_list_phys;
+@@ -1902,7 +1902,7 @@ u32 TLan_HandleRxEOC( struct net_device *dev, u16 host_int )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_Timer( unsigned long data )
++static void TLan_Timer( unsigned long data )
+ {
+ 	struct net_device	*dev = (struct net_device *) data;
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+@@ -1983,7 +1983,7 @@ void TLan_Timer( unsigned long data )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_ResetLists( struct net_device *dev )
++static void TLan_ResetLists( struct net_device *dev )
+ {
+ 	TLanPrivateInfo *priv = netdev_priv(dev);
+ 	int		i;
+@@ -2043,7 +2043,7 @@ void TLan_ResetLists( struct net_device *dev )
+ } /* TLan_ResetLists */
+ 
+ 
+-void TLan_FreeLists( struct net_device *dev )
++static void TLan_FreeLists( struct net_device *dev )
+ {
+ 	TLanPrivateInfo *priv = netdev_priv(dev);
+ 	int		i;
+@@ -2092,7 +2092,7 @@ void TLan_FreeLists( struct net_device *dev )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_PrintDio( u16 io_base )
++static void TLan_PrintDio( u16 io_base )
+ {
+ 	u32 data0, data1;
+ 	int	i;
+@@ -2127,7 +2127,7 @@ void TLan_PrintDio( u16 io_base )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_PrintList( TLanList *list, char *type, int num)
++static void TLan_PrintList( TLanList *list, char *type, int num)
+ {
+ 	int i;
+ 
+@@ -2163,7 +2163,7 @@ void TLan_PrintList( TLanList *list, char *type, int num)
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_ReadAndClearStats( struct net_device *dev, int record )
++static void TLan_ReadAndClearStats( struct net_device *dev, int record )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	u32		tx_good, tx_under;
+@@ -2238,7 +2238,7 @@ void TLan_ReadAndClearStats( struct net_device *dev, int record )
+ 	 *
+ 	 **************************************************************/
+ 
+-void
++static void
+ TLan_ResetAdapter( struct net_device *dev )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+@@ -2324,7 +2324,7 @@ TLan_ResetAdapter( struct net_device *dev )
+ 
+ 
+ 
+-void
++static void
+ TLan_FinishReset( struct net_device *dev )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+@@ -2448,7 +2448,7 @@ TLan_FinishReset( struct net_device *dev )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_SetMac( struct net_device *dev, int areg, char *mac )
++static void TLan_SetMac( struct net_device *dev, int areg, char *mac )
+ {
+ 	int i;
+ 
+@@ -2490,7 +2490,7 @@ void TLan_SetMac( struct net_device *dev, int areg, char *mac )
+ 	 *
+ 	 ********************************************************************/
+ 
+-void TLan_PhyPrint( struct net_device *dev )
++static void TLan_PhyPrint( struct net_device *dev )
+ {
+ 	TLanPrivateInfo *priv = netdev_priv(dev);
+ 	u16 i, data0, data1, data2, data3, phy;
+@@ -2539,7 +2539,7 @@ void TLan_PhyPrint( struct net_device *dev )
+ 	 *
+ 	 ********************************************************************/
+ 
+-void TLan_PhyDetect( struct net_device *dev )
++static void TLan_PhyDetect( struct net_device *dev )
+ {
+ 	TLanPrivateInfo *priv = netdev_priv(dev);
+ 	u16		control;
+@@ -2586,7 +2586,7 @@ void TLan_PhyDetect( struct net_device *dev )
+ 
+ 
+ 
+-void TLan_PhyPowerDown( struct net_device *dev )
++static void TLan_PhyPowerDown( struct net_device *dev )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	u16		value;
+@@ -2611,7 +2611,7 @@ void TLan_PhyPowerDown( struct net_device *dev )
+ 
+ 
+ 
+-void TLan_PhyPowerUp( struct net_device *dev )
++static void TLan_PhyPowerUp( struct net_device *dev )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	u16		value;
+@@ -2632,7 +2632,7 @@ void TLan_PhyPowerUp( struct net_device *dev )
+ 
+ 
+ 
+-void TLan_PhyReset( struct net_device *dev )
++static void TLan_PhyReset( struct net_device *dev )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	u16		phy;
+@@ -2660,7 +2660,7 @@ void TLan_PhyReset( struct net_device *dev )
+ 
+ 
+ 
+-void TLan_PhyStartLink( struct net_device *dev )
++static void TLan_PhyStartLink( struct net_device *dev )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	u16		ability;
+@@ -2747,7 +2747,7 @@ void TLan_PhyStartLink( struct net_device *dev )
+ 
+ 
+ 
+-void TLan_PhyFinishAutoNeg( struct net_device *dev )
++static void TLan_PhyFinishAutoNeg( struct net_device *dev )
+ {
+ 	TLanPrivateInfo	*priv = netdev_priv(dev);
+ 	u16		an_adv;
+@@ -2903,7 +2903,7 @@ void TLan_PhyMonitor( struct net_device *dev )
+ 	 *
+ 	 **************************************************************/
+ 
+-int TLan_MiiReadReg( struct net_device *dev, u16 phy, u16 reg, u16 *val )
++static int TLan_MiiReadReg( struct net_device *dev, u16 phy, u16 reg, u16 *val )
+ {
+ 	u8	nack;
+ 	u16	sio, tmp;
+@@ -2993,7 +2993,7 @@ int TLan_MiiReadReg( struct net_device *dev, u16 phy, u16 reg, u16 *val )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_MiiSendData( u16 base_port, u32 data, unsigned num_bits )
++static void TLan_MiiSendData( u16 base_port, u32 data, unsigned num_bits )
+ {
+ 	u16 sio;
+ 	u32 i;
+@@ -3035,7 +3035,7 @@ void TLan_MiiSendData( u16 base_port, u32 data, unsigned num_bits )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_MiiSync( u16 base_port )
++static void TLan_MiiSync( u16 base_port )
+ {
+ 	int i;
+ 	u16 sio;
+@@ -3074,7 +3074,7 @@ void TLan_MiiSync( u16 base_port )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_MiiWriteReg( struct net_device *dev, u16 phy, u16 reg, u16 val )
++static void TLan_MiiWriteReg( struct net_device *dev, u16 phy, u16 reg, u16 val )
+ {
+ 	u16	sio;
+ 	int	minten;
+@@ -3144,7 +3144,7 @@ void TLan_MiiWriteReg( struct net_device *dev, u16 phy, u16 reg, u16 val )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_EeSendStart( u16 io_base )
++static void TLan_EeSendStart( u16 io_base )
+ {
+ 	u16	sio;
+ 
+@@ -3184,7 +3184,7 @@ void TLan_EeSendStart( u16 io_base )
+ 	 *
+ 	 **************************************************************/
+ 
+-int TLan_EeSendByte( u16 io_base, u8 data, int stop )
++static int TLan_EeSendByte( u16 io_base, u8 data, int stop )
+ {
+ 	int	err;
+ 	u8	place;
+@@ -3245,7 +3245,7 @@ int TLan_EeSendByte( u16 io_base, u8 data, int stop )
+ 	 *
+ 	 **************************************************************/
+ 
+-void TLan_EeReceiveByte( u16 io_base, u8 *data, int stop )
++static void TLan_EeReceiveByte( u16 io_base, u8 *data, int stop )
+ {
+ 	u8  place;
+ 	u16 sio;
+@@ -3303,7 +3303,7 @@ void TLan_EeReceiveByte( u16 io_base, u8 *data, int stop )
+ 	 *
+ 	 **************************************************************/
+ 
+-int TLan_EeReadByte( struct net_device *dev, u8 ee_addr, u8 *data )
++static int TLan_EeReadByte( struct net_device *dev, u8 ee_addr, u8 *data )
+ {
+ 	int err;
+ 	TLanPrivateInfo *priv = netdev_priv(dev);
+diff --git a/drivers/net/tulip/uli526x.c b/drivers/net/tulip/uli526x.c
+index a7afeea..a59c1f2 100644
+--- a/drivers/net/tulip/uli526x.c
++++ b/drivers/net/tulip/uli526x.c
+@@ -482,9 +482,11 @@ static void uli526x_init(struct net_device *dev)
+ 	struct uli526x_board_info *db = netdev_priv(dev);
+ 	unsigned long ioaddr = db->ioaddr;
+ 	u8	phy_tmp;
++	u8	timeout;
+ 	u16	phy_value;
+ 	u16 phy_reg_reset;
+ 
++
+ 	ULI526X_DBUG(0, "uli526x_init()", 0);
+ 
+ 	/* Reset M526x MAC controller */
+@@ -509,11 +511,19 @@ static void uli526x_init(struct net_device *dev)
+ 	/* Parser SROM and media mode */
+ 	db->media_mode = uli526x_media_mode;
+ 
+-	/* Phyxcer capability setting */
++	/* phyxcer capability setting */
+ 	phy_reg_reset = phy_read(db->ioaddr, db->phy_addr, 0, db->chip_id);
+ 	phy_reg_reset = (phy_reg_reset | 0x8000);
+ 	phy_write(db->ioaddr, db->phy_addr, 0, phy_reg_reset, db->chip_id);
++
++	/* See IEEE 802.3-2002.pdf (Section 2, Chapter "22.2.4 Management
++	 * functions") or phy data sheet for details on phy reset
++	 */
+ 	udelay(500);
++	timeout = 10;
++	while (timeout-- &&
++		phy_read(db->ioaddr, db->phy_addr, 0, db->chip_id) & 0x8000)
++			udelay(100);
+ 
+ 	/* Process Phyxcer Media Mode */
+ 	uli526x_set_phyxcer(db);
+diff --git a/drivers/net/via-rhine.c b/drivers/net/via-rhine.c
+index 7c851b1..8c9d6ae 100644
+--- a/drivers/net/via-rhine.c
++++ b/drivers/net/via-rhine.c
+@@ -1893,7 +1893,7 @@ static void rhine_shutdown (struct pci_dev *pdev)
+ 
+ 	/* Make sure we use pattern 0, 1 and not 4, 5 */
+ 	if (rp->quirks & rq6patterns)
+-		iowrite8(0x04, ioaddr + 0xA7);
++		iowrite8(0x04, ioaddr + WOLcgClr);
+ 
+ 	if (rp->wolopts & WAKE_MAGIC) {
+ 		iowrite8(WOLmagic, ioaddr + WOLcrSet);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index fdc2367..19fd4cb 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -361,6 +361,7 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 	netif_napi_add(dev, &vi->napi, virtnet_poll, napi_weight);
+ 	vi->dev = dev;
+ 	vi->vdev = vdev;
++	vdev->priv = vi;
+ 
+ 	/* We expect two virtqueues, receive then send. */
+ 	vi->rvq = vdev->config->find_vq(vdev, 0, skb_recv_done);
+@@ -395,7 +396,6 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 	}
+ 
+ 	pr_debug("virtnet: registered device %s\n", dev->name);
+-	vdev->priv = vi;
+ 	return 0;
+ 
+ unregister:
+diff --git a/drivers/net/wireless/b43/Kconfig b/drivers/net/wireless/b43/Kconfig
+index 1a2141d..8bc4bc4 100644
+--- a/drivers/net/wireless/b43/Kconfig
++++ b/drivers/net/wireless/b43/Kconfig
+@@ -32,6 +32,7 @@ config B43_PCI_AUTOSELECT
+ 	bool
+ 	depends on B43 && SSB_PCIHOST_POSSIBLE
+ 	select SSB_PCIHOST
++	select SSB_B43_PCI_BRIDGE
+ 	default y
+ 
+ # Auto-select SSB PCICORE driver, if possible
+diff --git a/drivers/net/wireless/b43legacy/Kconfig b/drivers/net/wireless/b43legacy/Kconfig
+index 6745579..13c65fa 100644
+--- a/drivers/net/wireless/b43legacy/Kconfig
++++ b/drivers/net/wireless/b43legacy/Kconfig
+@@ -25,6 +25,7 @@ config B43LEGACY_PCI_AUTOSELECT
+ 	bool
+ 	depends on B43LEGACY && SSB_PCIHOST_POSSIBLE
+ 	select SSB_PCIHOST
++	select SSB_B43_PCI_BRIDGE
+ 	default y
+ 
+ # Auto-select SSB PCICORE driver, if possible
+diff --git a/drivers/net/wireless/bcm43xx/Kconfig b/drivers/net/wireless/bcm43xx/Kconfig
+index 0159701..afb8f43 100644
+--- a/drivers/net/wireless/bcm43xx/Kconfig
++++ b/drivers/net/wireless/bcm43xx/Kconfig
+@@ -1,6 +1,6 @@
+ config BCM43XX
+ 	tristate "Broadcom BCM43xx wireless support (DEPRECATED)"
+-	depends on PCI && IEEE80211 && IEEE80211_SOFTMAC && WLAN_80211 && EXPERIMENTAL
++	depends on PCI && IEEE80211 && IEEE80211_SOFTMAC && WLAN_80211 && (!SSB_B43_PCI_BRIDGE || SSB != y) && EXPERIMENTAL
+ 	select WIRELESS_EXT
+ 	select FW_LOADER
+ 	select HW_RANDOM
+diff --git a/drivers/net/wireless/libertas/cmd.c b/drivers/net/wireless/libertas/cmd.c
+index eab0203..b3c1acb 100644
+--- a/drivers/net/wireless/libertas/cmd.c
++++ b/drivers/net/wireless/libertas/cmd.c
+@@ -1040,7 +1040,6 @@ int lbs_mesh_access(struct lbs_private *priv, uint16_t cmd_action,
+ 	lbs_deb_leave(LBS_DEB_CMD);
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(lbs_mesh_access);
+ 
+ int lbs_mesh_config(struct lbs_private *priv, uint16_t enable, uint16_t chan)
+ {
+@@ -1576,7 +1575,6 @@ done:
+ 	lbs_deb_leave_args(LBS_DEB_HOST, "ret %d", ret);
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(lbs_prepare_and_send_command);
+ 
+ /**
+  *  @brief This function allocates the command buffer and link
+diff --git a/drivers/net/wireless/libertas/decl.h b/drivers/net/wireless/libertas/decl.h
+index aaacd9b..4e22341 100644
+--- a/drivers/net/wireless/libertas/decl.h
++++ b/drivers/net/wireless/libertas/decl.h
+@@ -69,7 +69,6 @@ struct lbs_private *lbs_add_card(void *card, struct device *dmdev);
+ int lbs_remove_card(struct lbs_private *priv);
+ int lbs_start_card(struct lbs_private *priv);
+ int lbs_stop_card(struct lbs_private *priv);
+-int lbs_reset_device(struct lbs_private *priv);
+ void lbs_host_to_card_done(struct lbs_private *priv);
+ 
+ int lbs_update_channel(struct lbs_private *priv);
+diff --git a/drivers/net/wireless/libertas/main.c b/drivers/net/wireless/libertas/main.c
+index 84fb49c..4d4e2f3 100644
+--- a/drivers/net/wireless/libertas/main.c
++++ b/drivers/net/wireless/libertas/main.c
+@@ -1351,8 +1351,6 @@ done:
+ 	lbs_deb_leave_args(LBS_DEB_MESH, "ret %d", ret);
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(lbs_add_mesh);
+-
+ 
+ static void lbs_remove_mesh(struct lbs_private *priv)
+ {
+@@ -1372,7 +1370,6 @@ static void lbs_remove_mesh(struct lbs_private *priv)
+ 	free_netdev(mesh_dev);
+ 	lbs_deb_leave(LBS_DEB_MESH);
+ }
+-EXPORT_SYMBOL_GPL(lbs_remove_mesh);
+ 
+ /**
+  *  @brief This function finds the CFP in
+@@ -1458,20 +1455,6 @@ void lbs_interrupt(struct lbs_private *priv)
+ }
+ EXPORT_SYMBOL_GPL(lbs_interrupt);
+ 
+-int lbs_reset_device(struct lbs_private *priv)
+-{
+-	int ret;
+-
+-	lbs_deb_enter(LBS_DEB_MAIN);
+-	ret = lbs_prepare_and_send_command(priv, CMD_802_11_RESET,
+-				    CMD_ACT_HALT, 0, 0, NULL);
+-	msleep_interruptible(10);
+-
+-	lbs_deb_leave_args(LBS_DEB_MAIN, "ret %d", ret);
+-	return ret;
+-}
+-EXPORT_SYMBOL_GPL(lbs_reset_device);
+-
+ static int __init lbs_init_module(void)
+ {
+ 	lbs_deb_enter(LBS_DEB_MAIN);
+diff --git a/drivers/net/wireless/rndis_wlan.c b/drivers/net/wireless/rndis_wlan.c
+index 8ce2ddf..d9460ae 100644
+--- a/drivers/net/wireless/rndis_wlan.c
++++ b/drivers/net/wireless/rndis_wlan.c
+@@ -228,9 +228,9 @@ struct NDIS_WLAN_BSSID_EX {
+ 	struct NDIS_802_11_SSID Ssid;
+ 	__le32 Privacy;
+ 	__le32 Rssi;
+-	enum NDIS_802_11_NETWORK_TYPE NetworkTypeInUse;
++	__le32 NetworkTypeInUse;
+ 	struct NDIS_802_11_CONFIGURATION Configuration;
+-	enum NDIS_802_11_NETWORK_INFRASTRUCTURE InfrastructureMode;
++	__le32 InfrastructureMode;
+ 	u8 SupportedRates[NDIS_802_11_LENGTH_RATES_EX];
+ 	__le32 IELength;
+ 	u8 IEs[0];
+@@ -279,11 +279,11 @@ struct RNDIS_CONFIG_PARAMETER_INFOBUFFER {
+ } __attribute__((packed));
+ 
+ /* these have to match what is in wpa_supplicant */
+-enum { WPA_ALG_NONE, WPA_ALG_WEP, WPA_ALG_TKIP, WPA_ALG_CCMP } wpa_alg;
+-enum { CIPHER_NONE, CIPHER_WEP40, CIPHER_TKIP, CIPHER_CCMP, CIPHER_WEP104 }
+-	wpa_cipher;
+-enum { KEY_MGMT_802_1X, KEY_MGMT_PSK, KEY_MGMT_NONE, KEY_MGMT_802_1X_NO_WPA,
+-	KEY_MGMT_WPA_NONE } wpa_key_mgmt;
++enum wpa_alg { WPA_ALG_NONE, WPA_ALG_WEP, WPA_ALG_TKIP, WPA_ALG_CCMP };
++enum wpa_cipher { CIPHER_NONE, CIPHER_WEP40, CIPHER_TKIP, CIPHER_CCMP,
++		  CIPHER_WEP104 };
++enum wpa_key_mgmt { KEY_MGMT_802_1X, KEY_MGMT_PSK, KEY_MGMT_NONE,
++		    KEY_MGMT_802_1X_NO_WPA, KEY_MGMT_WPA_NONE };
+ 
+ /*
+  *  private data
+diff --git a/drivers/net/wireless/rt2x00/rt2400pci.c b/drivers/net/wireless/rt2x00/rt2400pci.c
+index d6cba13..c69f85e 100644
+--- a/drivers/net/wireless/rt2x00/rt2400pci.c
++++ b/drivers/net/wireless/rt2x00/rt2400pci.c
+@@ -960,8 +960,12 @@ static int rt2400pci_set_device_state(struct rt2x00_dev *rt2x00dev,
+ 		rt2400pci_disable_radio(rt2x00dev);
+ 		break;
+ 	case STATE_RADIO_RX_ON:
++	case STATE_RADIO_RX_ON_LINK:
++		rt2400pci_toggle_rx(rt2x00dev, STATE_RADIO_RX_ON);
++		break;
+ 	case STATE_RADIO_RX_OFF:
+-		rt2400pci_toggle_rx(rt2x00dev, state);
++	case STATE_RADIO_RX_OFF_LINK:
++		rt2400pci_toggle_rx(rt2x00dev, STATE_RADIO_RX_OFF);
+ 		break;
+ 	case STATE_DEEP_SLEEP:
+ 	case STATE_SLEEP:
+diff --git a/drivers/net/wireless/rt2x00/rt2500pci.c b/drivers/net/wireless/rt2x00/rt2500pci.c
+index e874fdc..91e87b5 100644
+--- a/drivers/net/wireless/rt2x00/rt2500pci.c
++++ b/drivers/net/wireless/rt2x00/rt2500pci.c
+@@ -1112,8 +1112,12 @@ static int rt2500pci_set_device_state(struct rt2x00_dev *rt2x00dev,
+ 		rt2500pci_disable_radio(rt2x00dev);
+ 		break;
+ 	case STATE_RADIO_RX_ON:
++	case STATE_RADIO_RX_ON_LINK:
++		rt2500pci_toggle_rx(rt2x00dev, STATE_RADIO_RX_ON);
++		break;
+ 	case STATE_RADIO_RX_OFF:
+-		rt2500pci_toggle_rx(rt2x00dev, state);
++	case STATE_RADIO_RX_OFF_LINK:
++		rt2500pci_toggle_rx(rt2x00dev, STATE_RADIO_RX_OFF);
+ 		break;
+ 	case STATE_DEEP_SLEEP:
+ 	case STATE_SLEEP:
+diff --git a/drivers/net/wireless/rt2x00/rt2500usb.c b/drivers/net/wireless/rt2x00/rt2500usb.c
+index 4ca9730..638c3d2 100644
+--- a/drivers/net/wireless/rt2x00/rt2500usb.c
++++ b/drivers/net/wireless/rt2x00/rt2500usb.c
+@@ -1001,8 +1001,12 @@ static int rt2500usb_set_device_state(struct rt2x00_dev *rt2x00dev,
+ 		rt2500usb_disable_radio(rt2x00dev);
+ 		break;
+ 	case STATE_RADIO_RX_ON:
++	case STATE_RADIO_RX_ON_LINK:
++		rt2500usb_toggle_rx(rt2x00dev, STATE_RADIO_RX_ON);
++		break;
+ 	case STATE_RADIO_RX_OFF:
+-		rt2500usb_toggle_rx(rt2x00dev, state);
++	case STATE_RADIO_RX_OFF_LINK:
++		rt2500usb_toggle_rx(rt2x00dev, STATE_RADIO_RX_OFF);
+ 		break;
+ 	case STATE_DEEP_SLEEP:
+ 	case STATE_SLEEP:
+diff --git a/drivers/net/wireless/rt2x00/rt2x00config.c b/drivers/net/wireless/rt2x00/rt2x00config.c
+index 72cfe00..07adc57 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00config.c
++++ b/drivers/net/wireless/rt2x00/rt2x00config.c
+@@ -97,12 +97,16 @@ void rt2x00lib_config_antenna(struct rt2x00_dev *rt2x00dev,
+ 	libconf.ant.rx = rx;
+ 	libconf.ant.tx = tx;
+ 
++	if (rx == rt2x00dev->link.ant.active.rx &&
++	    tx == rt2x00dev->link.ant.active.tx)
++		return;
++
+ 	/*
+ 	 * Antenna setup changes require the RX to be disabled,
+ 	 * else the changes will be ignored by the device.
+ 	 */
+ 	if (test_bit(DEVICE_ENABLED_RADIO, &rt2x00dev->flags))
+-		rt2x00lib_toggle_rx(rt2x00dev, STATE_RADIO_RX_OFF);
++		rt2x00lib_toggle_rx(rt2x00dev, STATE_RADIO_RX_OFF_LINK);
+ 
+ 	/*
+ 	 * Write new antenna setup to device and reset the link tuner.
+@@ -116,7 +120,7 @@ void rt2x00lib_config_antenna(struct rt2x00_dev *rt2x00dev,
+ 	rt2x00dev->link.ant.active.tx = libconf.ant.tx;
+ 
+ 	if (test_bit(DEVICE_ENABLED_RADIO, &rt2x00dev->flags))
+-		rt2x00lib_toggle_rx(rt2x00dev, STATE_RADIO_RX_ON);
++		rt2x00lib_toggle_rx(rt2x00dev, STATE_RADIO_RX_ON_LINK);
+ }
+ 
+ void rt2x00lib_config(struct rt2x00_dev *rt2x00dev,
+diff --git a/drivers/net/wireless/rt2x00/rt2x00dev.c b/drivers/net/wireless/rt2x00/rt2x00dev.c
+index c4be2ac..0d51f47 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00dev.c
++++ b/drivers/net/wireless/rt2x00/rt2x00dev.c
+@@ -61,11 +61,33 @@ EXPORT_SYMBOL_GPL(rt2x00lib_get_ring);
+ /*
+  * Link tuning handlers
+  */
+-static void rt2x00lib_start_link_tuner(struct rt2x00_dev *rt2x00dev)
++void rt2x00lib_reset_link_tuner(struct rt2x00_dev *rt2x00dev)
+ {
++	if (!test_bit(DEVICE_ENABLED_RADIO, &rt2x00dev->flags))
++		return;
++
++	/*
++	 * Reset link information.
++	 * Both the currently active vgc level as well as
++	 * the link tuner counter should be reset. Resetting
++	 * the counter is important for devices where the
++	 * device should only perform link tuning during the
++	 * first minute after being enabled.
++	 */
+ 	rt2x00dev->link.count = 0;
+ 	rt2x00dev->link.vgc_level = 0;
+ 
++	/*
++	 * Reset the link tuner.
++	 */
++	rt2x00dev->ops->lib->reset_tuner(rt2x00dev);
++}
++
++static void rt2x00lib_start_link_tuner(struct rt2x00_dev *rt2x00dev)
++{
++	/*
++	 * Clear all (possibly) pre-existing quality statistics.
++	 */
+ 	memset(&rt2x00dev->link.qual, 0, sizeof(rt2x00dev->link.qual));
+ 
+ 	/*
+@@ -79,10 +101,7 @@ static void rt2x00lib_start_link_tuner(struct rt2x00_dev *rt2x00dev)
+ 	rt2x00dev->link.qual.rx_percentage = 50;
+ 	rt2x00dev->link.qual.tx_percentage = 50;
+ 
+-	/*
+-	 * Reset the link tuner.
+-	 */
+-	rt2x00dev->ops->lib->reset_tuner(rt2x00dev);
++	rt2x00lib_reset_link_tuner(rt2x00dev);
+ 
+ 	queue_delayed_work(rt2x00dev->hw->workqueue,
+ 			   &rt2x00dev->link.work, LINK_TUNE_INTERVAL);
+@@ -93,15 +112,6 @@ static void rt2x00lib_stop_link_tuner(struct rt2x00_dev *rt2x00dev)
+ 	cancel_delayed_work_sync(&rt2x00dev->link.work);
+ }
+ 
+-void rt2x00lib_reset_link_tuner(struct rt2x00_dev *rt2x00dev)
+-{
+-	if (!test_bit(DEVICE_ENABLED_RADIO, &rt2x00dev->flags))
+-		return;
+-
+-	rt2x00lib_stop_link_tuner(rt2x00dev);
+-	rt2x00lib_start_link_tuner(rt2x00dev);
+-}
+-
+ /*
+  * Ring initialization
+  */
+@@ -260,19 +270,11 @@ static void rt2x00lib_evaluate_antenna_sample(struct rt2x00_dev *rt2x00dev)
+ 	if (sample_a == sample_b)
+ 		return;
+ 
+-	if (rt2x00dev->link.ant.flags & ANTENNA_RX_DIVERSITY) {
+-		if (sample_a > sample_b && rx == ANTENNA_B)
+-			rx = ANTENNA_A;
+-		else if (rx == ANTENNA_A)
+-			rx = ANTENNA_B;
+-	}
++	if (rt2x00dev->link.ant.flags & ANTENNA_RX_DIVERSITY)
++		rx = (sample_a > sample_b) ? ANTENNA_A : ANTENNA_B;
+ 
+-	if (rt2x00dev->link.ant.flags & ANTENNA_TX_DIVERSITY) {
+-		if (sample_a > sample_b && tx == ANTENNA_B)
+-			tx = ANTENNA_A;
+-		else if (tx == ANTENNA_A)
+-			tx = ANTENNA_B;
+-	}
++	if (rt2x00dev->link.ant.flags & ANTENNA_TX_DIVERSITY)
++		tx = (sample_a > sample_b) ? ANTENNA_A : ANTENNA_B;
+ 
+ 	rt2x00lib_config_antenna(rt2x00dev, rx, tx);
+ }
+@@ -293,7 +295,7 @@ static void rt2x00lib_evaluate_antenna_eval(struct rt2x00_dev *rt2x00dev)
+ 	 * sample the rssi from the other antenna to make a valid
+ 	 * comparison between the 2 antennas.
+ 	 */
+-	if ((rssi_curr - rssi_old) > -5 || (rssi_curr - rssi_old) < 5)
++	if (abs(rssi_curr - rssi_old) < 5)
+ 		return;
+ 
+ 	rt2x00dev->link.ant.flags |= ANTENNA_MODE_SAMPLE;
+@@ -319,15 +321,15 @@ static void rt2x00lib_evaluate_antenna(struct rt2x00_dev *rt2x00dev)
+ 	rt2x00dev->link.ant.flags &= ~ANTENNA_TX_DIVERSITY;
+ 
+ 	if (rt2x00dev->hw->conf.antenna_sel_rx == 0 &&
+-	    rt2x00dev->default_ant.rx != ANTENNA_SW_DIVERSITY)
++	    rt2x00dev->default_ant.rx == ANTENNA_SW_DIVERSITY)
+ 		rt2x00dev->link.ant.flags |= ANTENNA_RX_DIVERSITY;
+ 	if (rt2x00dev->hw->conf.antenna_sel_tx == 0 &&
+-	    rt2x00dev->default_ant.tx != ANTENNA_SW_DIVERSITY)
++	    rt2x00dev->default_ant.tx == ANTENNA_SW_DIVERSITY)
+ 		rt2x00dev->link.ant.flags |= ANTENNA_TX_DIVERSITY;
+ 
+ 	if (!(rt2x00dev->link.ant.flags & ANTENNA_RX_DIVERSITY) &&
+ 	    !(rt2x00dev->link.ant.flags & ANTENNA_TX_DIVERSITY)) {
+-		rt2x00dev->link.ant.flags &= ~ANTENNA_MODE_SAMPLE;
++		rt2x00dev->link.ant.flags = 0;
+ 		return;
+ 	}
+ 
+@@ -441,17 +443,18 @@ static void rt2x00lib_link_tuner(struct work_struct *work)
+ 		rt2x00dev->ops->lib->link_tuner(rt2x00dev);
+ 
+ 	/*
+-	 * Evaluate antenna setup.
+-	 */
+-	rt2x00lib_evaluate_antenna(rt2x00dev);
+-
+-	/*
+ 	 * Precalculate a portion of the link signal which is
+ 	 * in based on the tx/rx success/failure counters.
+ 	 */
+ 	rt2x00lib_precalculate_link_signal(&rt2x00dev->link.qual);
+ 
+ 	/*
++	 * Evaluate antenna setup, make this the last step since this could
++	 * possibly reset some statistics.
++	 */
++	rt2x00lib_evaluate_antenna(rt2x00dev);
++
++	/*
+ 	 * Increase tuner counter, and reschedule the next link tuner run.
+ 	 */
+ 	rt2x00dev->link.count++;
+diff --git a/drivers/net/wireless/rt2x00/rt2x00reg.h b/drivers/net/wireless/rt2x00/rt2x00reg.h
+index 8384212..b1915dc 100644
+--- a/drivers/net/wireless/rt2x00/rt2x00reg.h
++++ b/drivers/net/wireless/rt2x00/rt2x00reg.h
+@@ -85,6 +85,8 @@ enum dev_state {
+ 	STATE_RADIO_OFF,
+ 	STATE_RADIO_RX_ON,
+ 	STATE_RADIO_RX_OFF,
++	STATE_RADIO_RX_ON_LINK,
++	STATE_RADIO_RX_OFF_LINK,
+ 	STATE_RADIO_IRQ_ON,
+ 	STATE_RADIO_IRQ_OFF,
+ };
+diff --git a/drivers/net/wireless/rt2x00/rt61pci.c b/drivers/net/wireless/rt2x00/rt61pci.c
+index b31f0c2..e808db9 100644
+--- a/drivers/net/wireless/rt2x00/rt61pci.c
++++ b/drivers/net/wireless/rt2x00/rt61pci.c
+@@ -1482,8 +1482,12 @@ static int rt61pci_set_device_state(struct rt2x00_dev *rt2x00dev,
+ 		rt61pci_disable_radio(rt2x00dev);
+ 		break;
+ 	case STATE_RADIO_RX_ON:
++	case STATE_RADIO_RX_ON_LINK:
++		rt61pci_toggle_rx(rt2x00dev, STATE_RADIO_RX_ON);
++		break;
+ 	case STATE_RADIO_RX_OFF:
+-		rt61pci_toggle_rx(rt2x00dev, state);
++	case STATE_RADIO_RX_OFF_LINK:
++		rt61pci_toggle_rx(rt2x00dev, STATE_RADIO_RX_OFF);
+ 		break;
+ 	case STATE_DEEP_SLEEP:
+ 	case STATE_SLEEP:
+diff --git a/drivers/net/wireless/rt2x00/rt73usb.c b/drivers/net/wireless/rt2x00/rt73usb.c
+index 4d576ab..4fac2d4 100644
+--- a/drivers/net/wireless/rt2x00/rt73usb.c
++++ b/drivers/net/wireless/rt2x00/rt73usb.c
+@@ -1208,8 +1208,12 @@ static int rt73usb_set_device_state(struct rt2x00_dev *rt2x00dev,
+ 		rt73usb_disable_radio(rt2x00dev);
+ 		break;
+ 	case STATE_RADIO_RX_ON:
++	case STATE_RADIO_RX_ON_LINK:
++		rt73usb_toggle_rx(rt2x00dev, STATE_RADIO_RX_ON);
++		break;
+ 	case STATE_RADIO_RX_OFF:
+-		rt73usb_toggle_rx(rt2x00dev, state);
++	case STATE_RADIO_RX_OFF_LINK:
++		rt73usb_toggle_rx(rt2x00dev, STATE_RADIO_RX_OFF);
+ 		break;
+ 	case STATE_DEEP_SLEEP:
+ 	case STATE_SLEEP:
+diff --git a/drivers/s390/net/claw.c b/drivers/s390/net/claw.c
+index c307621..d8a5c22 100644
+--- a/drivers/s390/net/claw.c
++++ b/drivers/s390/net/claw.c
+@@ -1851,8 +1851,7 @@ claw_hw_tx(struct sk_buff *skb, struct net_device *dev, long linkid)
+                 }
+         }
+         /*      See how many write buffers are required to hold this data */
+-        numBuffers= ( skb->len + privptr->p_env->write_size - 1) /
+-			( privptr->p_env->write_size);
++	numBuffers = DIV_ROUND_UP(skb->len, privptr->p_env->write_size);
+ 
+         /*      If that number of buffers isn't available, give up for now */
+         if (privptr->write_free_count < numBuffers ||
+@@ -2114,8 +2113,7 @@ init_ccw_bk(struct net_device *dev)
+         */
+         ccw_blocks_perpage= PAGE_SIZE /  CCWBK_SIZE;
+         ccw_pages_required=
+-		(ccw_blocks_required+ccw_blocks_perpage -1) /
+-			 ccw_blocks_perpage;
++		DIV_ROUND_UP(ccw_blocks_required, ccw_blocks_perpage);
+ 
+ #ifdef DEBUGMSG
+         printk(KERN_INFO "%s: %s() > ccw_blocks_perpage=%d\n",
+@@ -2131,30 +2129,29 @@ init_ccw_bk(struct net_device *dev)
+ 	 * provide good performance. With packing buffers support 32k
+ 	 * buffers are used.
+          */
+-        if (privptr->p_env->read_size < PAGE_SIZE) {
+-            claw_reads_perpage= PAGE_SIZE / privptr->p_env->read_size;
+-            claw_read_pages= (privptr->p_env->read_buffers +
+-	    	claw_reads_perpage -1) / claw_reads_perpage;
++	if (privptr->p_env->read_size < PAGE_SIZE) {
++		claw_reads_perpage = PAGE_SIZE / privptr->p_env->read_size;
++		claw_read_pages = DIV_ROUND_UP(privptr->p_env->read_buffers,
++						claw_reads_perpage);
+          }
+          else {       /* > or equal  */
+-            privptr->p_buff_pages_perread=
+-	    	(privptr->p_env->read_size + PAGE_SIZE - 1) / PAGE_SIZE;
+-            claw_read_pages=
+-	    	privptr->p_env->read_buffers * privptr->p_buff_pages_perread;
++		privptr->p_buff_pages_perread =
++			DIV_ROUND_UP(privptr->p_env->read_size, PAGE_SIZE);
++		claw_read_pages = privptr->p_env->read_buffers *
++					privptr->p_buff_pages_perread;
+          }
+         if (privptr->p_env->write_size < PAGE_SIZE) {
+-            claw_writes_perpage=
+-	    	PAGE_SIZE / privptr->p_env->write_size;
+-            claw_write_pages=
+-	    	(privptr->p_env->write_buffers + claw_writes_perpage -1) /
+-			claw_writes_perpage;
++		claw_writes_perpage =
++			PAGE_SIZE / privptr->p_env->write_size;
++		claw_write_pages = DIV_ROUND_UP(privptr->p_env->write_buffers,
++						claw_writes_perpage);
+ 
+         }
+         else {      /* >  or equal  */
+-            privptr->p_buff_pages_perwrite=
+-	    	 (privptr->p_env->read_size + PAGE_SIZE - 1) / PAGE_SIZE;
+-            claw_write_pages=
+-	     	privptr->p_env->write_buffers * privptr->p_buff_pages_perwrite;
++		privptr->p_buff_pages_perwrite =
++			DIV_ROUND_UP(privptr->p_env->read_size, PAGE_SIZE);
++		claw_write_pages = privptr->p_env->write_buffers *
++					privptr->p_buff_pages_perwrite;
+         }
+ #ifdef DEBUGMSG
+         if (privptr->p_env->read_size < PAGE_SIZE) {
+diff --git a/drivers/serial/Kconfig b/drivers/serial/Kconfig
+index b82595c..cf627cd 100644
+--- a/drivers/serial/Kconfig
++++ b/drivers/serial/Kconfig
+@@ -686,7 +686,7 @@ config UART0_RTS_PIN
+ 
+ config SERIAL_BFIN_UART1
+ 	bool "Enable UART1"
+-	depends on SERIAL_BFIN && (BF534 || BF536 || BF537 || BF54x)
++	depends on SERIAL_BFIN && (!BF531 && !BF532 && !BF533 && !BF561)
+ 	help
+ 	  Enable UART1
+ 
+@@ -699,14 +699,14 @@ config BFIN_UART1_CTSRTS
+ 
+ config UART1_CTS_PIN
+ 	int "UART1 CTS pin"
+-	depends on BFIN_UART1_CTSRTS && (BF53x || BF561)
++	depends on BFIN_UART1_CTSRTS && !BF54x
+ 	default -1
+ 	help
+ 	  Refer to ./include/asm-blackfin/gpio.h to see the GPIO map.
+ 
+ config UART1_RTS_PIN
+ 	int "UART1 RTS pin"
+-	depends on BFIN_UART1_CTSRTS && (BF53x || BF561)
++	depends on BFIN_UART1_CTSRTS && !BF54x
+ 	default -1
+ 	help
+ 	  Refer to ./include/asm-blackfin/gpio.h to see the GPIO map.
+diff --git a/drivers/serial/bfin_5xx.c b/drivers/serial/bfin_5xx.c
+index ac2a3ef..0aa345b 100644
+--- a/drivers/serial/bfin_5xx.c
++++ b/drivers/serial/bfin_5xx.c
+@@ -1,30 +1,11 @@
+ /*
+- * File:         drivers/serial/bfin_5xx.c
+- * Based on:     Based on drivers/serial/sa1100.c
+- * Author:       Aubrey Li <aubrey.li at analog.com>
++ * Blackfin On-Chip Serial Driver
+  *
+- * Created:
+- * Description:  Driver for blackfin 5xx serial ports
++ * Copyright 2006-2007 Analog Devices Inc.
+  *
+- * Modified:
+- *               Copyright 2006 Analog Devices Inc.
++ * Enter bugs at http://blackfin.uclinux.org/
+  *
+- * Bugs:         Enter bugs at http://blackfin.uclinux.org/
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2 of the License, or
+- * (at your option) any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, see the file COPYING, or write
+- * to the Free Software Foundation, Inc.,
+- * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
++ * Licensed under the GPL-2 or later.
+  */
+ 
+ #if defined(CONFIG_SERIAL_BFIN_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+@@ -67,14 +48,12 @@
+ #define DMA_RX_XCOUNT		512
+ #define DMA_RX_YCOUNT		(PAGE_SIZE / DMA_RX_XCOUNT)
+ 
+-#define DMA_RX_FLUSH_JIFFIES	5
++#define DMA_RX_FLUSH_JIFFIES	(HZ / 50)
+ 
+ #ifdef CONFIG_SERIAL_BFIN_DMA
+ static void bfin_serial_dma_tx_chars(struct bfin_serial_port *uart);
+ #else
+-static void bfin_serial_do_work(struct work_struct *work);
+ static void bfin_serial_tx_chars(struct bfin_serial_port *uart);
+-static void local_put_char(struct bfin_serial_port *uart, char ch);
+ #endif
+ 
+ static void bfin_serial_mctrl_check(struct bfin_serial_port *uart);
+@@ -85,23 +64,26 @@ static void bfin_serial_mctrl_check(struct bfin_serial_port *uart);
+ static void bfin_serial_stop_tx(struct uart_port *port)
+ {
+ 	struct bfin_serial_port *uart = (struct bfin_serial_port *)port;
++	struct circ_buf *xmit = &uart->port.info->xmit;
++#if !defined(CONFIG_BF54x) && !defined(CONFIG_SERIAL_BFIN_DMA)
++	unsigned short ier;
++#endif
+ 
+ 	while (!(UART_GET_LSR(uart) & TEMT))
+-		continue;
++		cpu_relax();
+ 
+ #ifdef CONFIG_SERIAL_BFIN_DMA
+ 	disable_dma(uart->tx_dma_channel);
++	xmit->tail = (xmit->tail + uart->tx_count) & (UART_XMIT_SIZE - 1);
++	uart->port.icount.tx += uart->tx_count;
++	uart->tx_count = 0;
++	uart->tx_done = 1;
+ #else
+ #ifdef CONFIG_BF54x
+-	/* Waiting for Transmission Finished */
+-	while (!(UART_GET_LSR(uart) & TFI))
+-		continue;
+ 	/* Clear TFI bit */
+ 	UART_PUT_LSR(uart, TFI);
+ 	UART_CLEAR_IER(uart, ETBEI);
+ #else
+-	unsigned short ier;
+-
+ 	ier = UART_GET_IER(uart);
+ 	ier &= ~ETBEI;
+ 	UART_PUT_IER(uart, ier);
+@@ -117,7 +99,8 @@ static void bfin_serial_start_tx(struct uart_port *port)
+ 	struct bfin_serial_port *uart = (struct bfin_serial_port *)port;
+ 
+ #ifdef CONFIG_SERIAL_BFIN_DMA
+-	bfin_serial_dma_tx_chars(uart);
++	if (uart->tx_done)
++		bfin_serial_dma_tx_chars(uart);
+ #else
+ #ifdef CONFIG_BF54x
+ 	UART_SET_IER(uart, ETBEI);
+@@ -209,34 +192,27 @@ int kgdb_get_debug_char(void)
+ }
+ #endif
+ 
+-#ifdef CONFIG_SERIAL_BFIN_PIO
+-static void local_put_char(struct bfin_serial_port *uart, char ch)
+-{
+-	unsigned short status;
+-	int flags = 0;
+-
+-	spin_lock_irqsave(&uart->port.lock, flags);
+-
+-	do {
+-		status = UART_GET_LSR(uart);
+-	} while (!(status & THRE));
+-
+-	UART_PUT_CHAR(uart, ch);
+-	SSYNC();
+-
+-	spin_unlock_irqrestore(&uart->port.lock, flags);
+-}
++#if ANOMALY_05000230 && defined(CONFIG_SERIAL_BFIN_PIO)
++# define UART_GET_ANOMALY_THRESHOLD(uart)    ((uart)->anomaly_threshold)
++# define UART_SET_ANOMALY_THRESHOLD(uart, v) ((uart)->anomaly_threshold = (v))
++#else
++# define UART_GET_ANOMALY_THRESHOLD(uart)    0
++# define UART_SET_ANOMALY_THRESHOLD(uart, v)
++#endif
+ 
++#ifdef CONFIG_SERIAL_BFIN_PIO
+ static void bfin_serial_rx_chars(struct bfin_serial_port *uart)
+ {
+ 	struct tty_struct *tty = uart->port.info->tty;
+ 	unsigned int status, ch, flg;
+-	static int in_break = 0;
++	static struct timeval anomaly_start = { .tv_sec = 0 };
+ #ifdef CONFIG_KGDB_UART
+ 	struct pt_regs *regs = get_irq_regs();
+ #endif
+ 
+ 	status = UART_GET_LSR(uart);
++	UART_CLEAR_LSR(uart);
++
+  	ch = UART_GET_CHAR(uart);
+  	uart->port.icount.rx++;
+ 
+@@ -262,28 +238,56 @@ static void bfin_serial_rx_chars(struct bfin_serial_port *uart)
+ #endif
+ 
+ 	if (ANOMALY_05000230) {
+-		/* The BF533 family of processors have a nice misbehavior where
+-		 * they continuously generate characters for a "single" break.
++		/* The BF533 (and BF561) family of processors have a nice anomaly
++		 * where they continuously generate characters for a "single" break.
+ 		 * We have to basically ignore this flood until the "next" valid
+-		 * character comes across.  All other Blackfin families operate
+-		 * properly though.
++		 * character comes across.  Due to the nature of the flood, it is
++		 * not possible to reliably catch bytes that are sent too quickly
++		 * after this break.  So application code talking to the Blackfin
++		 * which sends a break signal must allow at least 1.5 character
++		 * times after the end of the break for things to stabilize.  This
++		 * timeout was picked as it must absolutely be larger than 1
++		 * character time +/- some percent.  So 1.5 sounds good.  All other
++		 * Blackfin families operate properly.  Woo.
+ 		 * Note: While Anomaly 05000230 does not directly address this,
+ 		 *       the changes that went in for it also fixed this issue.
++		 *       That anomaly was fixed in 0.5+ silicon.  I like bunnies.
+ 		 */
+-		if (in_break) {
+-			if (ch != 0) {
+-				in_break = 0;
+-				ch = UART_GET_CHAR(uart);
+-				if (bfin_revid() < 5)
+-					return;
+-			} else
+-				return;
++		if (anomaly_start.tv_sec) {
++			struct timeval curr;
++			suseconds_t usecs;
++
++			if ((~ch & (~ch + 1)) & 0xff)
++				goto known_good_char;
++
++			do_gettimeofday(&curr);
++			if (curr.tv_sec - anomaly_start.tv_sec > 1)
++				goto known_good_char;
++
++			usecs = 0;
++			if (curr.tv_sec != anomaly_start.tv_sec)
++				usecs += USEC_PER_SEC;
++			usecs += curr.tv_usec - anomaly_start.tv_usec;
++
++			if (usecs > UART_GET_ANOMALY_THRESHOLD(uart))
++				goto known_good_char;
++
++			if (ch)
++				anomaly_start.tv_sec = 0;
++			else
++				anomaly_start = curr;
++
++			return;
++
++ known_good_char:
++			anomaly_start.tv_sec = 0;
+ 		}
+ 	}
+ 
+ 	if (status & BI) {
+ 		if (ANOMALY_05000230)
+-			in_break = 1;
++			if (bfin_revid() < 5)
++				do_gettimeofday(&anomaly_start);
+ 		uart->port.icount.brk++;
+ 		if (uart_handle_break(&uart->port))
+ 			goto ignore_char;
+@@ -324,7 +328,6 @@ static void bfin_serial_tx_chars(struct bfin_serial_port *uart)
+ 		UART_PUT_CHAR(uart, uart->port.x_char);
+ 		uart->port.icount.tx++;
+ 		uart->port.x_char = 0;
+-		return;
+ 	}
+ 	/*
+ 	 * Check the modem control lines before
+@@ -337,9 +340,12 @@ static void bfin_serial_tx_chars(struct bfin_serial_port *uart)
+ 		return;
+ 	}
+ 
+-	local_put_char(uart, xmit->buf[xmit->tail]);
+-	xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+-	uart->port.icount.tx++;
++	while ((UART_GET_LSR(uart) & THRE) && xmit->tail != xmit->head) {
++		UART_PUT_CHAR(uart, xmit->buf[xmit->tail]);
++		xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
++		uart->port.icount.tx++;
++		SSYNC();
++	}
+ 
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(&uart->port);
+@@ -352,21 +358,11 @@ static irqreturn_t bfin_serial_rx_int(int irq, void *dev_id)
+ {
+ 	struct bfin_serial_port *uart = dev_id;
+ 
+-#ifdef CONFIG_BF54x
+-	unsigned short status;
+-	spin_lock(&uart->port.lock);
+-	status = UART_GET_LSR(uart);
+-	while ((UART_GET_IER(uart) & ERBFI) && (status & DR)) {
+-		bfin_serial_rx_chars(uart);
+-		status = UART_GET_LSR(uart);
+-	}
+-	spin_unlock(&uart->port.lock);
+-#else
+ 	spin_lock(&uart->port.lock);
+-	while ((UART_GET_IIR(uart) & IIR_STATUS) == IIR_RX_READY)
++	while (UART_GET_LSR(uart) & DR)
+ 		bfin_serial_rx_chars(uart);
+ 	spin_unlock(&uart->port.lock);
+-#endif
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -374,25 +370,16 @@ static irqreturn_t bfin_serial_tx_int(int irq, void *dev_id)
+ {
+ 	struct bfin_serial_port *uart = dev_id;
+ 
+-#ifdef CONFIG_BF54x
+-	unsigned short status;
+ 	spin_lock(&uart->port.lock);
+-	status = UART_GET_LSR(uart);
+-	while ((UART_GET_IER(uart) & ETBEI) && (status & THRE)) {
++	if (UART_GET_LSR(uart) & THRE)
+ 		bfin_serial_tx_chars(uart);
+-		status = UART_GET_LSR(uart);
+-	}
+ 	spin_unlock(&uart->port.lock);
+-#else
+-	spin_lock(&uart->port.lock);
+-	while ((UART_GET_IIR(uart) & IIR_STATUS) == IIR_TX_READY)
+-		bfin_serial_tx_chars(uart);
+-	spin_unlock(&uart->port.lock);
+-#endif
++
+ 	return IRQ_HANDLED;
+ }
++#endif
+ 
+-
++#ifdef CONFIG_SERIAL_BFIN_CTSRTS
+ static void bfin_serial_do_work(struct work_struct *work)
+ {
+ 	struct bfin_serial_port *uart = container_of(work, struct bfin_serial_port, cts_workqueue);
+@@ -406,33 +393,27 @@ static void bfin_serial_dma_tx_chars(struct bfin_serial_port *uart)
+ {
+ 	struct circ_buf *xmit = &uart->port.info->xmit;
+ 	unsigned short ier;
+-	int flags = 0;
+-
+-	if (!uart->tx_done)
+-		return;
+ 
+ 	uart->tx_done = 0;
+ 
++	if (uart_circ_empty(xmit) || uart_tx_stopped(&uart->port)) {
++		uart->tx_count = 0;
++		uart->tx_done = 1;
++		return;
++	}
++
+ 	if (uart->port.x_char) {
+ 		UART_PUT_CHAR(uart, uart->port.x_char);
+ 		uart->port.icount.tx++;
+ 		uart->port.x_char = 0;
+-		uart->tx_done = 1;
+-		return;
+ 	}
++
+ 	/*
+ 	 * Check the modem control lines before
+ 	 * transmitting anything.
+ 	 */
+ 	bfin_serial_mctrl_check(uart);
+ 
+-	if (uart_circ_empty(xmit) || uart_tx_stopped(&uart->port)) {
+-		bfin_serial_stop_tx(&uart->port);
+-		uart->tx_done = 1;
+-		return;
+-	}
+-
+-	spin_lock_irqsave(&uart->port.lock, flags);
+ 	uart->tx_count = CIRC_CNT(xmit->head, xmit->tail, UART_XMIT_SIZE);
+ 	if (uart->tx_count > (UART_XMIT_SIZE - xmit->tail))
+ 		uart->tx_count = UART_XMIT_SIZE - xmit->tail;
+@@ -448,6 +429,7 @@ static void bfin_serial_dma_tx_chars(struct bfin_serial_port *uart)
+ 	set_dma_x_count(uart->tx_dma_channel, uart->tx_count);
+ 	set_dma_x_modify(uart->tx_dma_channel, 1);
+ 	enable_dma(uart->tx_dma_channel);
++
+ #ifdef CONFIG_BF54x
+ 	UART_SET_IER(uart, ETBEI);
+ #else
+@@ -455,7 +437,6 @@ static void bfin_serial_dma_tx_chars(struct bfin_serial_port *uart)
+ 	ier |= ETBEI;
+ 	UART_PUT_IER(uart, ier);
+ #endif
+-	spin_unlock_irqrestore(&uart->port.lock, flags);
+ }
+ 
+ static void bfin_serial_dma_rx_chars(struct bfin_serial_port *uart)
+@@ -464,7 +445,11 @@ static void bfin_serial_dma_rx_chars(struct bfin_serial_port *uart)
+ 	int i, flg, status;
+ 
+ 	status = UART_GET_LSR(uart);
+-	uart->port.icount.rx += CIRC_CNT(uart->rx_dma_buf.head, uart->rx_dma_buf.tail, UART_XMIT_SIZE);;
++	UART_CLEAR_LSR(uart);
++
++	uart->port.icount.rx +=
++		CIRC_CNT(uart->rx_dma_buf.head, uart->rx_dma_buf.tail,
++		UART_XMIT_SIZE);
+ 
+ 	if (status & BI) {
+ 		uart->port.icount.brk++;
+@@ -490,10 +475,12 @@ static void bfin_serial_dma_rx_chars(struct bfin_serial_port *uart)
+ 	else
+ 		flg = TTY_NORMAL;
+ 
+-	for (i = uart->rx_dma_buf.head; i < uart->rx_dma_buf.tail; i++) {
+-		if (uart_handle_sysrq_char(&uart->port, uart->rx_dma_buf.buf[i]))
+-			goto dma_ignore_char;
+-		uart_insert_char(&uart->port, status, OE, uart->rx_dma_buf.buf[i], flg);
++	for (i = uart->rx_dma_buf.tail; i != uart->rx_dma_buf.head; i++) {
++		if (i >= UART_XMIT_SIZE)
++			i = 0;
++		if (!uart_handle_sysrq_char(&uart->port, uart->rx_dma_buf.buf[i]))
++			uart_insert_char(&uart->port, status, OE,
++				uart->rx_dma_buf.buf[i], flg);
+ 	}
+ 
+  dma_ignore_char:
+@@ -503,23 +490,23 @@ static void bfin_serial_dma_rx_chars(struct bfin_serial_port *uart)
+ void bfin_serial_rx_dma_timeout(struct bfin_serial_port *uart)
+ {
+ 	int x_pos, pos;
+-	int flags = 0;
+-
+-	bfin_serial_dma_tx_chars(uart);
+ 
+-	spin_lock_irqsave(&uart->port.lock, flags);
+-	x_pos = DMA_RX_XCOUNT - get_dma_curr_xcount(uart->rx_dma_channel);
++	uart->rx_dma_nrows = get_dma_curr_ycount(uart->rx_dma_channel);
++	x_pos = get_dma_curr_xcount(uart->rx_dma_channel);
++	uart->rx_dma_nrows = DMA_RX_YCOUNT - uart->rx_dma_nrows;
++	if (uart->rx_dma_nrows == DMA_RX_YCOUNT)
++		uart->rx_dma_nrows = 0;
++	x_pos = DMA_RX_XCOUNT - x_pos;
+ 	if (x_pos == DMA_RX_XCOUNT)
+ 		x_pos = 0;
+ 
+ 	pos = uart->rx_dma_nrows * DMA_RX_XCOUNT + x_pos;
+-
+-	if (pos>uart->rx_dma_buf.tail) {
+-		uart->rx_dma_buf.tail = pos;
++	if (pos != uart->rx_dma_buf.tail) {
++		uart->rx_dma_buf.head = pos;
+ 		bfin_serial_dma_rx_chars(uart);
+-		uart->rx_dma_buf.head = uart->rx_dma_buf.tail;
++		uart->rx_dma_buf.tail = uart->rx_dma_buf.head;
+ 	}
+-	spin_unlock_irqrestore(&uart->port.lock, flags);
++
+ 	uart->rx_dma_timer.expires = jiffies + DMA_RX_FLUSH_JIFFIES;
+ 	add_timer(&(uart->rx_dma_timer));
+ }
+@@ -532,8 +519,8 @@ static irqreturn_t bfin_serial_dma_tx_int(int irq, void *dev_id)
+ 
+ 	spin_lock(&uart->port.lock);
+ 	if (!(get_dma_curr_irqstat(uart->tx_dma_channel)&DMA_RUN)) {
+-		clear_dma_irqstat(uart->tx_dma_channel);
+ 		disable_dma(uart->tx_dma_channel);
++		clear_dma_irqstat(uart->tx_dma_channel);
+ #ifdef CONFIG_BF54x
+ 		UART_CLEAR_IER(uart, ETBEI);
+ #else
+@@ -541,15 +528,13 @@ static irqreturn_t bfin_serial_dma_tx_int(int irq, void *dev_id)
+ 		ier &= ~ETBEI;
+ 		UART_PUT_IER(uart, ier);
+ #endif
+-		xmit->tail = (xmit->tail+uart->tx_count) &(UART_XMIT_SIZE -1);
+-		uart->port.icount.tx+=uart->tx_count;
++		xmit->tail = (xmit->tail + uart->tx_count) & (UART_XMIT_SIZE - 1);
++		uart->port.icount.tx += uart->tx_count;
+ 
+ 		if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 			uart_write_wakeup(&uart->port);
+ 
+-		if (uart_circ_empty(xmit))
+-			bfin_serial_stop_tx(&uart->port);
+-		uart->tx_done = 1;
++		bfin_serial_dma_tx_chars(uart);
+ 	}
+ 
+ 	spin_unlock(&uart->port.lock);
+@@ -561,18 +546,15 @@ static irqreturn_t bfin_serial_dma_rx_int(int irq, void *dev_id)
+ 	struct bfin_serial_port *uart = dev_id;
+ 	unsigned short irqstat;
+ 
+-	uart->rx_dma_nrows++;
+-	if (uart->rx_dma_nrows == DMA_RX_YCOUNT) {
+-		uart->rx_dma_nrows = 0;
+-		uart->rx_dma_buf.tail = DMA_RX_XCOUNT*DMA_RX_YCOUNT;
+-		bfin_serial_dma_rx_chars(uart);
+-		uart->rx_dma_buf.head = uart->rx_dma_buf.tail = 0;
+-	}
+ 	spin_lock(&uart->port.lock);
+ 	irqstat = get_dma_curr_irqstat(uart->rx_dma_channel);
+ 	clear_dma_irqstat(uart->rx_dma_channel);
+-
+ 	spin_unlock(&uart->port.lock);
++
++	del_timer(&(uart->rx_dma_timer));
++	uart->rx_dma_timer.expires = jiffies;
++	add_timer(&(uart->rx_dma_timer));
++
+ 	return IRQ_HANDLED;
+ }
+ #endif
+@@ -599,7 +581,11 @@ static unsigned int bfin_serial_get_mctrl(struct uart_port *port)
+ 	if (uart->cts_pin < 0)
+ 		return TIOCM_CTS | TIOCM_DSR | TIOCM_CAR;
+ 
++# ifdef BF54x
++	if (UART_GET_MSR(uart) & CTS)
++# else
+ 	if (gpio_get_value(uart->cts_pin))
++# endif
+ 		return TIOCM_DSR | TIOCM_CAR;
+ 	else
+ #endif
+@@ -614,9 +600,17 @@ static void bfin_serial_set_mctrl(struct uart_port *port, unsigned int mctrl)
+ 		return;
+ 
+ 	if (mctrl & TIOCM_RTS)
++# ifdef BF54x
++		UART_PUT_MCR(uart, UART_GET_MCR(uart) & ~MRTS);
++# else
+ 		gpio_set_value(uart->rts_pin, 0);
++# endif
+ 	else
++# ifdef BF54x
++		UART_PUT_MCR(uart, UART_GET_MCR(uart) | MRTS);
++# else
+ 		gpio_set_value(uart->rts_pin, 1);
++# endif
+ #endif
+ }
+ 
+@@ -627,22 +621,17 @@ static void bfin_serial_mctrl_check(struct bfin_serial_port *uart)
+ {
+ #ifdef CONFIG_SERIAL_BFIN_CTSRTS
+ 	unsigned int status;
+-# ifdef CONFIG_SERIAL_BFIN_DMA
+ 	struct uart_info *info = uart->port.info;
+ 	struct tty_struct *tty = info->tty;
+ 
+ 	status = bfin_serial_get_mctrl(&uart->port);
++	uart_handle_cts_change(&uart->port, status & TIOCM_CTS);
+ 	if (!(status & TIOCM_CTS)) {
+ 		tty->hw_stopped = 1;
++		schedule_work(&uart->cts_workqueue);
+ 	} else {
+ 		tty->hw_stopped = 0;
+ 	}
+-# else
+-	status = bfin_serial_get_mctrl(&uart->port);
+-	uart_handle_cts_change(&uart->port, status & TIOCM_CTS);
+-	if (!(status & TIOCM_CTS))
+-		schedule_work(&uart->cts_workqueue);
+-# endif
+ #endif
+ }
+ 
+@@ -743,6 +732,7 @@ static void bfin_serial_shutdown(struct uart_port *port)
+ 	disable_dma(uart->rx_dma_channel);
+ 	free_dma(uart->rx_dma_channel);
+ 	del_timer(&(uart->rx_dma_timer));
++	dma_free_coherent(NULL, PAGE_SIZE, uart->rx_dma_buf.buf, 0);
+ #else
+ #ifdef	CONFIG_KGDB_UART
+ 	if (uart->port.line != CONFIG_KGDB_UART_PORT)
+@@ -814,6 +804,8 @@ bfin_serial_set_termios(struct uart_port *port, struct ktermios *termios,
+ 	quot = uart_get_divisor(port, baud);
+ 	spin_lock_irqsave(&uart->port.lock, flags);
+ 
++	UART_SET_ANOMALY_THRESHOLD(uart, USEC_PER_SEC / baud * 15);
++
+ 	do {
+ 		lsr = UART_GET_LSR(uart);
+ 	} while (!(lsr & TEMT));
+@@ -956,10 +948,9 @@ static void __init bfin_serial_init_ports(void)
+ 		bfin_serial_ports[i].rx_dma_channel =
+ 			bfin_serial_resource[i].uart_rx_dma_channel;
+ 		init_timer(&(bfin_serial_ports[i].rx_dma_timer));
+-#else
+-		INIT_WORK(&bfin_serial_ports[i].cts_workqueue, bfin_serial_do_work);
+ #endif
+ #ifdef CONFIG_SERIAL_BFIN_CTSRTS
++		INIT_WORK(&bfin_serial_ports[i].cts_workqueue, bfin_serial_do_work);
+ 		bfin_serial_ports[i].cts_pin	    =
+ 			bfin_serial_resource[i].uart_cts_pin;
+ 		bfin_serial_ports[i].rts_pin	    =
+diff --git a/drivers/serial/sh-sci.c b/drivers/serial/sh-sci.c
+index 9ce12cb..a8c116b 100644
+--- a/drivers/serial/sh-sci.c
++++ b/drivers/serial/sh-sci.c
+@@ -41,6 +41,7 @@
+ #include <linux/delay.h>
+ #include <linux/console.h>
+ #include <linux/platform_device.h>
++#include <linux/serial_sci.h>
+ 
+ #ifdef CONFIG_CPU_FREQ
+ #include <linux/notifier.h>
+@@ -54,7 +55,6 @@
+ #include <asm/kgdb.h>
+ #endif
+ 
+-#include <asm/sci.h>
+ #include "sh-sci.h"
+ 
+ struct sci_port {
+diff --git a/drivers/sh/maple/maple.c b/drivers/sh/maple/maple.c
+index 9cfcfd8..617efb1 100644
+--- a/drivers/sh/maple/maple.c
++++ b/drivers/sh/maple/maple.c
+@@ -1,7 +1,7 @@
+ /*
+  * Core maple bus functionality
+  *
+- *  Copyright (C) 2007 Adrian McMenamin
++ *  Copyright (C) 2007, 2008 Adrian McMenamin
+  *
+  * Based on 2.4 code by:
+  *
+@@ -18,7 +18,6 @@
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/device.h>
+-#include <linux/module.h>
+ #include <linux/interrupt.h>
+ #include <linux/list.h>
+ #include <linux/io.h>
+@@ -54,7 +53,7 @@ static struct device maple_bus;
+ static int subdevice_map[MAPLE_PORTS];
+ static unsigned long *maple_sendbuf, *maple_sendptr, *maple_lastptr;
+ static unsigned long maple_pnp_time;
+-static int started, scanning, liststatus, realscan;
++static int started, scanning, liststatus, fullscan;
+ static struct kmem_cache *maple_queue_cache;
+ 
+ struct maple_device_specify {
+@@ -62,6 +61,9 @@ struct maple_device_specify {
+ 	int unit;
+ };
+ 
++static bool checked[4];
++static struct maple_device *baseunits[4];
++
+ /**
+  *  maple_driver_register - register a device driver
+  *  automatically makes the driver bus a maple bus
+@@ -309,11 +311,9 @@ static void maple_attach_driver(struct maple_device *mdev)
+ 		else
+ 			break;
+ 
+-	if (realscan) {
+-		printk(KERN_INFO "Maple device detected: %s\n",
+-			mdev->product_name);
+-		printk(KERN_INFO "Maple device: %s\n", mdev->product_licence);
+-	}
++	printk(KERN_INFO "Maple device detected: %s\n",
++		mdev->product_name);
++	printk(KERN_INFO "Maple device: %s\n", mdev->product_licence);
+ 
+ 	function = be32_to_cpu(mdev->devinfo.function);
+ 
+@@ -323,10 +323,9 @@ static void maple_attach_driver(struct maple_device *mdev)
+ 		mdev->driver = &maple_dummy_driver;
+ 		sprintf(mdev->dev.bus_id, "%d:0.port", mdev->port);
+ 	} else {
+-		if (realscan)
+-			printk(KERN_INFO
+-				"Maple bus at (%d, %d): Function 0x%lX\n",
+-				mdev->port, mdev->unit, function);
++		printk(KERN_INFO
++			"Maple bus at (%d, %d): Function 0x%lX\n",
++			mdev->port, mdev->unit, function);
+ 
+ 		matched =
+ 		    bus_for_each_drv(&maple_bus_type, NULL, mdev,
+@@ -334,9 +333,8 @@ static void maple_attach_driver(struct maple_device *mdev)
+ 
+ 		if (matched == 0) {
+ 			/* Driver does not exist yet */
+-			if (realscan)
+-				printk(KERN_INFO
+-					"No maple driver found.\n");
++			printk(KERN_INFO
++				"No maple driver found.\n");
+ 			mdev->driver = &maple_dummy_driver;
+ 		}
+ 		sprintf(mdev->dev.bus_id, "%d:0%d.%lX", mdev->port,
+@@ -472,9 +470,12 @@ static void maple_response_none(struct maple_device *mdev,
+ 		maple_detach_driver(mdev);
+ 		return;
+ 	}
+-	if (!started) {
+-		printk(KERN_INFO "No maple devices attached to port %d\n",
+-		       mdev->port);
++	if (!started || !fullscan) {
++		if (checked[mdev->port] == false) {
++			checked[mdev->port] = true;
++			printk(KERN_INFO "No maple devices attached"
++				" to port %d\n", mdev->port);
++		}
+ 		return;
+ 	}
+ 	maple_clean_submap(mdev);
+@@ -485,8 +486,14 @@ static void maple_response_devinfo(struct maple_device *mdev,
+ 				   char *recvbuf)
+ {
+ 	char submask;
+-	if ((!started) || (scanning == 2)) {
+-		maple_attach_driver(mdev);
++	if (!started || (scanning == 2) || !fullscan) {
++		if ((mdev->unit == 0) && (checked[mdev->port] == false)) {
++			checked[mdev->port] = true;
++			maple_attach_driver(mdev);
++		} else {
++			if (mdev->unit != 0)
++				maple_attach_driver(mdev);
++		}
+ 		return;
+ 	}
+ 	if (mdev->unit == 0) {
+@@ -505,6 +512,7 @@ static void maple_dma_handler(struct work_struct *work)
+ 	struct maple_device *dev;
+ 	char *recvbuf;
+ 	enum maple_code code;
++	int i;
+ 
+ 	if (!maple_dma_done())
+ 		return;
+@@ -557,6 +565,19 @@ static void maple_dma_handler(struct work_struct *work)
+ 		} else
+ 			scanning = 0;
+ 
++		if (!fullscan) {
++			fullscan = 1;
++			for (i = 0; i < MAPLE_PORTS; i++) {
++				if (checked[i] == false) {
++					fullscan = 0;
++					dev = baseunits[i];
++					dev->mq->command =
++						MAPLE_COMMAND_DEVINFO;
++					dev->mq->length = 0;
++					maple_add_packet(dev->mq);
++				}
++			}
++		}
+ 		if (started == 0)
+ 			started = 1;
+ 	}
+@@ -694,7 +715,9 @@ static int __init maple_bus_init(void)
+ 
+ 	/* setup maple ports */
+ 	for (i = 0; i < MAPLE_PORTS; i++) {
++		checked[i] = false;
+ 		mdev[i] = maple_alloc_dev(i, 0);
++		baseunits[i] = mdev[i];
+ 		if (!mdev[i]) {
+ 			while (i-- > 0)
+ 				maple_free_dev(mdev[i]);
+@@ -703,12 +726,9 @@ static int __init maple_bus_init(void)
+ 		mdev[i]->mq->command = MAPLE_COMMAND_DEVINFO;
+ 		mdev[i]->mq->length = 0;
+ 		maple_add_packet(mdev[i]->mq);
+-		/* delay aids hardware detection */
+-		mdelay(5);
+ 		subdevice_map[i] = 0;
+ 	}
+ 
+-	realscan = 1;
+ 	/* setup maplebus hardware */
+ 	maplebus_dma_reset();
+ 	/* initial detection */
+diff --git a/drivers/ssb/Kconfig b/drivers/ssb/Kconfig
+index 78fd331..adea792 100644
+--- a/drivers/ssb/Kconfig
++++ b/drivers/ssb/Kconfig
+@@ -35,6 +35,11 @@ config SSB_PCIHOST
+ 
+ 	  If unsure, say Y
+ 
++config SSB_B43_PCI_BRIDGE
++	bool
++	depends on SSB_PCIHOST
++	default n
++
+ config SSB_PCMCIAHOST_POSSIBLE
+ 	bool
+ 	depends on SSB && (PCMCIA = y || PCMCIA = SSB) && EXPERIMENTAL
+diff --git a/drivers/ssb/Makefile b/drivers/ssb/Makefile
+index e235144..de94c2e 100644
+--- a/drivers/ssb/Makefile
++++ b/drivers/ssb/Makefile
+@@ -14,6 +14,6 @@ ssb-$(CONFIG_SSB_DRIVER_PCICORE)	+= driver_pcicore.o
+ 
+ # b43 pci-ssb-bridge driver
+ # Not strictly a part of SSB, but kept here for convenience
+-ssb-$(CONFIG_SSB_PCIHOST)		+= b43_pci_bridge.o
++ssb-$(CONFIG_SSB_B43_PCI_BRIDGE)	+= b43_pci_bridge.o
+ 
+ obj-$(CONFIG_SSB)			+= ssb.o
+diff --git a/drivers/ssb/driver_pcicore.c b/drivers/ssb/driver_pcicore.c
+index 6d99a98..07ab48d 100644
+--- a/drivers/ssb/driver_pcicore.c
++++ b/drivers/ssb/driver_pcicore.c
+@@ -393,7 +393,7 @@ static int pcicore_is_in_hostmode(struct ssb_pcicore *pc)
+ 	    chipid_top != 0x5300)
+ 		return 0;
+ 
+-	if (bus->sprom.r1.boardflags_lo & SSB_PCICORE_BFL_NOPCI)
++	if (bus->sprom.boardflags_lo & SSB_PCICORE_BFL_NOPCI)
+ 		return 0;
+ 
+ 	/* The 200-pin BCM4712 package does not bond out PCI. Even when
+diff --git a/drivers/ssb/ssb_private.h b/drivers/ssb/ssb_private.h
+index a789364..21eca2b 100644
+--- a/drivers/ssb/ssb_private.h
++++ b/drivers/ssb/ssb_private.h
+@@ -120,10 +120,10 @@ extern int ssb_devices_thaw(struct ssb_bus *bus);
+ extern struct ssb_bus *ssb_pci_dev_to_bus(struct pci_dev *pdev);
+ 
+ /* b43_pci_bridge.c */
+-#ifdef CONFIG_SSB_PCIHOST
++#ifdef CONFIG_SSB_B43_PCI_BRIDGE
+ extern int __init b43_pci_ssb_bridge_init(void);
+ extern void __exit b43_pci_ssb_bridge_exit(void);
+-#else /* CONFIG_SSB_PCIHOST */
++#else /* CONFIG_SSB_B43_PCI_BRIDGE */
+ static inline int b43_pci_ssb_bridge_init(void)
+ {
+ 	return 0;
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index 33888bb..2c23bad 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -46,7 +46,7 @@ const struct file_operations ext4_dir_operations = {
+ #ifdef CONFIG_COMPAT
+ 	.compat_ioctl	= ext4_compat_ioctl,
+ #endif
+-	.fsync		= ext4_sync_file,	/* BKL held */
++	.fsync		= ext4_sync_file,
+ 	.release	= ext4_release_dir,
+ };
+ 
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index bc7081f..9ae6e67 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -148,6 +148,7 @@ static ext4_fsblk_t ext4_ext_find_goal(struct inode *inode,
+ {
+ 	struct ext4_inode_info *ei = EXT4_I(inode);
+ 	ext4_fsblk_t bg_start;
++	ext4_fsblk_t last_block;
+ 	ext4_grpblk_t colour;
+ 	int depth;
+ 
+@@ -169,8 +170,13 @@ static ext4_fsblk_t ext4_ext_find_goal(struct inode *inode,
+ 	/* OK. use inode's group */
+ 	bg_start = (ei->i_block_group * EXT4_BLOCKS_PER_GROUP(inode->i_sb)) +
+ 		le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_first_data_block);
+-	colour = (current->pid % 16) *
++	last_block = ext4_blocks_count(EXT4_SB(inode->i_sb)->s_es) - 1;
++
++	if (bg_start + EXT4_BLOCKS_PER_GROUP(inode->i_sb) <= last_block)
++		colour = (current->pid % 16) *
+ 			(EXT4_BLOCKS_PER_GROUP(inode->i_sb) / 16);
++	else
++		colour = (current->pid % 16) * ((last_block - bg_start) / 16);
+ 	return bg_start + colour + block;
+ }
+ 
+@@ -349,7 +355,7 @@ static void ext4_ext_show_leaf(struct inode *inode, struct ext4_ext_path *path)
+ #define ext4_ext_show_leaf(inode,path)
+ #endif
+ 
+-static void ext4_ext_drop_refs(struct ext4_ext_path *path)
++void ext4_ext_drop_refs(struct ext4_ext_path *path)
+ {
+ 	int depth = path->p_depth;
+ 	int i;
+@@ -2168,6 +2174,10 @@ static int ext4_ext_convert_to_initialized(handle_t *handle,
+ 	newblock = iblock - ee_block + ext_pblock(ex);
+ 	ex2 = ex;
+ 
++	err = ext4_ext_get_access(handle, inode, path + depth);
++	if (err)
++		goto out;
++
+ 	/* ex1: ee_block to iblock - 1 : uninitialized */
+ 	if (iblock > ee_block) {
+ 		ex1 = ex;
+@@ -2200,16 +2210,20 @@ static int ext4_ext_convert_to_initialized(handle_t *handle,
+ 		newdepth = ext_depth(inode);
+ 		if (newdepth != depth) {
+ 			depth = newdepth;
+-			path = ext4_ext_find_extent(inode, iblock, NULL);
++			ext4_ext_drop_refs(path);
++			path = ext4_ext_find_extent(inode, iblock, path);
+ 			if (IS_ERR(path)) {
+ 				err = PTR_ERR(path);
+-				path = NULL;
+ 				goto out;
+ 			}
+ 			eh = path[depth].p_hdr;
+ 			ex = path[depth].p_ext;
+ 			if (ex2 != &newex)
+ 				ex2 = ex;
++
++			err = ext4_ext_get_access(handle, inode, path + depth);
++			if (err)
++				goto out;
+ 		}
+ 		allocated = max_blocks;
+ 	}
+@@ -2230,9 +2244,6 @@ static int ext4_ext_convert_to_initialized(handle_t *handle,
+ 	ex2->ee_len = cpu_to_le16(allocated);
+ 	if (ex2 != ex)
+ 		goto insert;
+-	err = ext4_ext_get_access(handle, inode, path + depth);
+-	if (err)
+-		goto out;
+ 	/*
+ 	 * New (initialized) extent starts from the first block
+ 	 * in the current extent. i.e., ex2 == ex
+@@ -2276,9 +2287,22 @@ out:
+ }
+ 
+ /*
++ * Block allocation/map/preallocation routine for extent-based files
++ *
++ *
+  * Need to be called with
+  * down_read(&EXT4_I(inode)->i_data_sem) if not allocating file system block
+  * (ie, create is zero). Otherwise down_write(&EXT4_I(inode)->i_data_sem)
++ *
++ * return > 0, number of blocks already mapped/allocated
++ *          if create == 0 and these are pre-allocated blocks
++ *          	buffer head is unmapped
++ *          otherwise blocks are mapped
++ *
++ * return = 0, if the plain lookup failed (blocks have not been allocated)
++ *          buffer head is unmapped
++ *
++ * return < 0, error case.
+  */
+ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
+ 			ext4_lblk_t iblock,
+@@ -2623,7 +2647,7 @@ long ext4_fallocate(struct inode *inode, int mode, loff_t offset, loff_t len)
+ 	 * modify 1 super block, 1 block bitmap and 1 group descriptor.
+ 	 */
+ 	credits = EXT4_DATA_TRANS_BLOCKS(inode->i_sb) + 3;
+-	down_write((&EXT4_I(inode)->i_data_sem));
++	mutex_lock(&inode->i_mutex);
+ retry:
+ 	while (ret >= 0 && ret < max_blocks) {
+ 		block = block + ret;
+@@ -2634,16 +2658,17 @@ retry:
+ 			break;
+ 		}
+ 
+-		ret = ext4_ext_get_blocks(handle, inode, block,
++		ret = ext4_get_blocks_wrap(handle, inode, block,
+ 					  max_blocks, &map_bh,
+ 					  EXT4_CREATE_UNINITIALIZED_EXT, 0);
+-		WARN_ON(ret <= 0);
+ 		if (ret <= 0) {
+-			ext4_error(inode->i_sb, "ext4_fallocate",
+-				    "ext4_ext_get_blocks returned error: "
+-				    "inode#%lu, block=%u, max_blocks=%lu",
++#ifdef EXT4FS_DEBUG
++			WARN_ON(ret <= 0);
++			printk(KERN_ERR "%s: ext4_ext_get_blocks "
++				    "returned error inode#%lu, block=%u, "
++				    "max_blocks=%lu", __func__,
+ 				    inode->i_ino, block, max_blocks);
+-			ret = -EIO;
++#endif
+ 			ext4_mark_inode_dirty(handle, inode);
+ 			ret2 = ext4_journal_stop(handle);
+ 			break;
+@@ -2680,7 +2705,6 @@ retry:
+ 	if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
+ 		goto retry;
+ 
+-	up_write((&EXT4_I(inode)->i_data_sem));
+ 	/*
+ 	 * Time to update the file size.
+ 	 * Update only when preallocation was requested beyond the file size.
+@@ -2692,21 +2716,18 @@ retry:
+ 			 * if no error, we assume preallocation succeeded
+ 			 * completely
+ 			 */
+-			mutex_lock(&inode->i_mutex);
+ 			i_size_write(inode, offset + len);
+ 			EXT4_I(inode)->i_disksize = i_size_read(inode);
+-			mutex_unlock(&inode->i_mutex);
+ 		} else if (ret < 0 && nblocks) {
+ 			/* Handle partial allocation scenario */
+ 			loff_t newsize;
+ 
+-			mutex_lock(&inode->i_mutex);
+ 			newsize  = (nblocks << blkbits) + i_size_read(inode);
+ 			i_size_write(inode, EXT4_BLOCK_ALIGN(newsize, blkbits));
+ 			EXT4_I(inode)->i_disksize = i_size_read(inode);
+-			mutex_unlock(&inode->i_mutex);
+ 		}
+ 	}
+ 
++	mutex_unlock(&inode->i_mutex);
+ 	return ret > 0 ? ret2 : ret;
+ }
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index da18a74..8036b9b 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -702,7 +702,12 @@ got:
+ 	ei->i_dir_start_lookup = 0;
+ 	ei->i_disksize = 0;
+ 
+-	ei->i_flags = EXT4_I(dir)->i_flags & ~EXT4_INDEX_FL;
++	/*
++	 * Don't inherit the extent flag from the directory. We set the extent
++	 * flag on newly created directories and files only if the -o extents
++	 * mount option is specified.
++	 */
++	ei->i_flags = EXT4_I(dir)->i_flags & ~(EXT4_INDEX_FL|EXT4_EXTENTS_FL);
+ 	if (S_ISLNK(mode))
+ 		ei->i_flags &= ~(EXT4_IMMUTABLE_FL|EXT4_APPEND_FL);
+ 	/* dirsync only applies to directories */
+@@ -745,12 +750,15 @@ got:
+ 		goto fail_free_drop;
+ 	}
+ 	if (test_opt(sb, EXTENTS)) {
+-		EXT4_I(inode)->i_flags |= EXT4_EXTENTS_FL;
+-		ext4_ext_tree_init(handle, inode);
+-		err = ext4_update_incompat_feature(handle, sb,
+-						EXT4_FEATURE_INCOMPAT_EXTENTS);
+-		if (err)
+-			goto fail;
++		/* set extent flag only for directory and file */
++		if (S_ISDIR(mode) || S_ISREG(mode)) {
++			EXT4_I(inode)->i_flags |= EXT4_EXTENTS_FL;
++			ext4_ext_tree_init(handle, inode);
++			err = ext4_update_incompat_feature(handle, sb,
++					EXT4_FEATURE_INCOMPAT_EXTENTS);
++			if (err)
++				goto fail;
++		}
+ 	}
+ 
+ 	ext4_debug("allocating inode %lu\n", inode->i_ino);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 7dd9b50..945cbf6 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -403,6 +403,7 @@ static ext4_fsblk_t ext4_find_near(struct inode *inode, Indirect *ind)
+ 	__le32 *start = ind->bh ? (__le32*) ind->bh->b_data : ei->i_data;
+ 	__le32 *p;
+ 	ext4_fsblk_t bg_start;
++	ext4_fsblk_t last_block;
+ 	ext4_grpblk_t colour;
+ 
+ 	/* Try to find previous block */
+@@ -420,8 +421,13 @@ static ext4_fsblk_t ext4_find_near(struct inode *inode, Indirect *ind)
+ 	 * into the same cylinder group then.
+ 	 */
+ 	bg_start = ext4_group_first_block_no(inode->i_sb, ei->i_block_group);
+-	colour = (current->pid % 16) *
++	last_block = ext4_blocks_count(EXT4_SB(inode->i_sb)->s_es) - 1;
++
++	if (bg_start + EXT4_BLOCKS_PER_GROUP(inode->i_sb) <= last_block)
++		colour = (current->pid % 16) *
+ 			(EXT4_BLOCKS_PER_GROUP(inode->i_sb) / 16);
++	else
++		colour = (current->pid % 16) * ((last_block - bg_start) / 16);
+ 	return bg_start + colour;
+ }
+ 
+@@ -768,7 +774,6 @@ err_out:
+  *
+  * `handle' can be NULL if create == 0.
+  *
+- * The BKL may not be held on entry here.  Be sure to take it early.
+  * return > 0, # of blocks mapped or allocated.
+  * return = 0, if plain lookup failed.
+  * return < 0, error case.
+@@ -903,11 +908,38 @@ out:
+  */
+ #define DIO_CREDITS 25
+ 
++
++/*
++ *
++ *
++ * ext4_get_blocks_wrap() is the ext4 get_block() wrapper function.
++ * It does a lookup first and returns if the blocks are already mapped.
++ * Otherwise it takes the write lock of i_data_sem, allocates blocks,
++ * stores the allocated blocks in the result buffer head and marks it
++ * mapped.
++ *
++ * If the file is extent-based, it calls ext4_ext_get_blocks();
++ * otherwise it calls ext4_get_blocks_handle() to handle indirect-mapping
++ * based files.
++ *
++ * On success, it returns the number of blocks mapped or allocated.
++ * If create == 0 and the blocks are pre-allocated and uninitialized,
++ * the result buffer head is unmapped. If create == 1, it makes sure
++ * the buffer head is mapped.
++ *
++ * It returns 0 if a plain lookup failed (blocks have not been allocated);
++ * in that case the buffer head is unmapped.
++ *
++ * It returns the error in case of allocation failure.
++ */
+ int ext4_get_blocks_wrap(handle_t *handle, struct inode *inode, sector_t block,
+ 			unsigned long max_blocks, struct buffer_head *bh,
+ 			int create, int extend_disksize)
+ {
+ 	int retval;
++
++	clear_buffer_mapped(bh);
++
+ 	/*
+ 	 * Try to see if we can get  the block without requesting
+ 	 * for new file system block.
+@@ -921,12 +953,26 @@ int ext4_get_blocks_wrap(handle_t *handle, struct inode *inode, sector_t block,
+ 				inode, block, max_blocks, bh, 0, 0);
+ 	}
+ 	up_read((&EXT4_I(inode)->i_data_sem));
+-	if (!create || (retval > 0))
++
++	/* If it is only a block(s) look up */
++	if (!create)
++		return retval;
++
++	/*
++	 * Return if the blocks have already been allocated.
++	 *
++	 * Note that if blocks have been preallocated,
++	 * ext4_ext_get_blocks() returns with create = 0
++	 * and the buffer head unmapped.
++	 */
++	if (retval > 0 && buffer_mapped(bh))
+ 		return retval;
+ 
+ 	/*
+-	 * We need to allocate new blocks which will result
+-	 * in i_data update
++	 * New block allocation and/or writing to an uninitialized extent
++	 * may result in updating i_data, so we take
++	 * the write lock of i_data_sem and call get_blocks()
++	 * with the create == 1 flag.
+ 	 */
+ 	down_write((&EXT4_I(inode)->i_data_sem));
+ 	/*
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index dd0fcfc..ef97f19 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -627,21 +627,19 @@ static ext4_fsblk_t ext4_grp_offs_to_block(struct super_block *sb,
+ 	return block;
+ }
+ 
++static inline void *mb_correct_addr_and_bit(int *bit, void *addr)
++{
+ #if BITS_PER_LONG == 64
+-#define mb_correct_addr_and_bit(bit, addr)		\
+-{							\
+-	bit += ((unsigned long) addr & 7UL) << 3;	\
+-	addr = (void *) ((unsigned long) addr & ~7UL);	\
+-}
++	*bit += ((unsigned long) addr & 7UL) << 3;
++	addr = (void *) ((unsigned long) addr & ~7UL);
+ #elif BITS_PER_LONG == 32
+-#define mb_correct_addr_and_bit(bit, addr)		\
+-{							\
+-	bit += ((unsigned long) addr & 3UL) << 3;	\
+-	addr = (void *) ((unsigned long) addr & ~3UL);	\
+-}
++	*bit += ((unsigned long) addr & 3UL) << 3;
++	addr = (void *) ((unsigned long) addr & ~3UL);
+ #else
+ #error "how many bits you are?!"
+ #endif
++	return addr;
++}
+ 
+ static inline int mb_test_bit(int bit, void *addr)
+ {
+@@ -649,34 +647,54 @@ static inline int mb_test_bit(int bit, void *addr)
+ 	 * ext4_test_bit on architecture like powerpc
+ 	 * needs unsigned long aligned address
+ 	 */
+-	mb_correct_addr_and_bit(bit, addr);
++	addr = mb_correct_addr_and_bit(&bit, addr);
+ 	return ext4_test_bit(bit, addr);
+ }
+ 
+ static inline void mb_set_bit(int bit, void *addr)
+ {
+-	mb_correct_addr_and_bit(bit, addr);
++	addr = mb_correct_addr_and_bit(&bit, addr);
+ 	ext4_set_bit(bit, addr);
+ }
+ 
+ static inline void mb_set_bit_atomic(spinlock_t *lock, int bit, void *addr)
+ {
+-	mb_correct_addr_and_bit(bit, addr);
++	addr = mb_correct_addr_and_bit(&bit, addr);
+ 	ext4_set_bit_atomic(lock, bit, addr);
+ }
+ 
+ static inline void mb_clear_bit(int bit, void *addr)
+ {
+-	mb_correct_addr_and_bit(bit, addr);
++	addr = mb_correct_addr_and_bit(&bit, addr);
+ 	ext4_clear_bit(bit, addr);
+ }
+ 
+ static inline void mb_clear_bit_atomic(spinlock_t *lock, int bit, void *addr)
+ {
+-	mb_correct_addr_and_bit(bit, addr);
++	addr = mb_correct_addr_and_bit(&bit, addr);
+ 	ext4_clear_bit_atomic(lock, bit, addr);
+ }
+ 
++static inline int mb_find_next_zero_bit(void *addr, int max, int start)
++{
++	int fix = 0;
++	addr = mb_correct_addr_and_bit(&fix, addr);
++	max += fix;
++	start += fix;
++
++	return ext4_find_next_zero_bit(addr, max, start) - fix;
++}
++
++static inline int mb_find_next_bit(void *addr, int max, int start)
++{
++	int fix = 0;
++	addr = mb_correct_addr_and_bit(&fix, addr);
++	max += fix;
++	start += fix;
++
++	return ext4_find_next_bit(addr, max, start) - fix;
++}
++
+ static void *mb_find_buddy(struct ext4_buddy *e4b, int order, int *max)
+ {
+ 	char *bb;
+@@ -906,7 +924,7 @@ static void ext4_mb_mark_free_simple(struct super_block *sb,
+ 	unsigned short chunk;
+ 	unsigned short border;
+ 
+-	BUG_ON(len >= EXT4_BLOCKS_PER_GROUP(sb));
++	BUG_ON(len > EXT4_BLOCKS_PER_GROUP(sb));
+ 
+ 	border = 2 << sb->s_blocksize_bits;
+ 
+@@ -946,12 +964,12 @@ static void ext4_mb_generate_buddy(struct super_block *sb,
+ 
+ 	/* initialize buddy from bitmap which is aggregation
+ 	 * of on-disk bitmap and preallocations */
+-	i = ext4_find_next_zero_bit(bitmap, max, 0);
++	i = mb_find_next_zero_bit(bitmap, max, 0);
+ 	grp->bb_first_free = i;
+ 	while (i < max) {
+ 		fragments++;
+ 		first = i;
+-		i = ext4_find_next_bit(bitmap, max, i);
++		i = mb_find_next_bit(bitmap, max, i);
+ 		len = i - first;
+ 		free += len;
+ 		if (len > 1)
+@@ -959,7 +977,7 @@ static void ext4_mb_generate_buddy(struct super_block *sb,
+ 		else
+ 			grp->bb_counters[0]++;
+ 		if (i < max)
+-			i = ext4_find_next_zero_bit(bitmap, max, i);
++			i = mb_find_next_zero_bit(bitmap, max, i);
+ 	}
+ 	grp->bb_fragments = fragments;
+ 
+@@ -967,6 +985,10 @@ static void ext4_mb_generate_buddy(struct super_block *sb,
+ 		ext4_error(sb, __FUNCTION__,
+ 			"EXT4-fs: group %lu: %u blocks in bitmap, %u in gd\n",
+ 			group, free, grp->bb_free);
++		/*
++		 * If we intend to continue, we consider the group descriptor
++		 * corrupt and update bb_free using the bitmap value
++		 */
+ 		grp->bb_free = free;
+ 	}
+ 
+@@ -1778,7 +1800,7 @@ static void ext4_mb_simple_scan_group(struct ext4_allocation_context *ac,
+ 		buddy = mb_find_buddy(e4b, i, &max);
+ 		BUG_ON(buddy == NULL);
+ 
+-		k = ext4_find_next_zero_bit(buddy, max, 0);
++		k = mb_find_next_zero_bit(buddy, max, 0);
+ 		BUG_ON(k >= max);
+ 
+ 		ac->ac_found++;
+@@ -1818,11 +1840,11 @@ static void ext4_mb_complex_scan_group(struct ext4_allocation_context *ac,
+ 	i = e4b->bd_info->bb_first_free;
+ 
+ 	while (free && ac->ac_status == AC_STATUS_CONTINUE) {
+-		i = ext4_find_next_zero_bit(bitmap,
++		i = mb_find_next_zero_bit(bitmap,
+ 						EXT4_BLOCKS_PER_GROUP(sb), i);
+ 		if (i >= EXT4_BLOCKS_PER_GROUP(sb)) {
+ 			/*
+-			 * IF we corrupt the bitmap  we won't find any
++			 * If we have a corrupt bitmap, we won't find any
+ 			 * free blocks even though group info says we
+ 			 * we have free blocks
+ 			 */
+@@ -1838,6 +1860,12 @@ static void ext4_mb_complex_scan_group(struct ext4_allocation_context *ac,
+ 			ext4_error(sb, __FUNCTION__, "%d free blocks as per "
+ 					"group info. But got %d blocks\n",
+ 					free, ex.fe_len);
++			/*
++			 * The number of free blocks differs. This mostly
++			 * indicates that the bitmap is corrupt. So exit
++			 * without claiming the space.
++			 */
++			break;
+ 		}
+ 
+ 		ext4_mb_measure_extent(ac, &ex, e4b);
+@@ -3740,10 +3768,10 @@ static int ext4_mb_release_inode_pa(struct ext4_buddy *e4b,
+ 	}
+ 
+ 	while (bit < end) {
+-		bit = ext4_find_next_zero_bit(bitmap_bh->b_data, end, bit);
++		bit = mb_find_next_zero_bit(bitmap_bh->b_data, end, bit);
+ 		if (bit >= end)
+ 			break;
+-		next = ext4_find_next_bit(bitmap_bh->b_data, end, bit);
++		next = mb_find_next_bit(bitmap_bh->b_data, end, bit);
+ 		if (next > end)
+ 			next = end;
+ 		start = group * EXT4_BLOCKS_PER_GROUP(sb) + bit +
+@@ -3771,6 +3799,10 @@ static int ext4_mb_release_inode_pa(struct ext4_buddy *e4b,
+ 			(unsigned long) pa->pa_len);
+ 		ext4_error(sb, __FUNCTION__, "free %u, pa_free %u\n",
+ 						free, pa->pa_free);
++		/*
++		 * pa is already deleted so we use the value obtained
++		 * from the bitmap and continue.
++		 */
+ 	}
+ 	atomic_add(free, &sbi->s_mb_discarded);
+ 	if (ac)
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index 8c6c685..5c1e27d 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -43,6 +43,7 @@ static int finish_range(handle_t *handle, struct inode *inode,
+ 
+ 	if (IS_ERR(path)) {
+ 		retval = PTR_ERR(path);
++		path = NULL;
+ 		goto err_out;
+ 	}
+ 
+@@ -74,6 +75,10 @@ static int finish_range(handle_t *handle, struct inode *inode,
+ 	}
+ 	retval = ext4_ext_insert_extent(handle, inode, path, &newext);
+ err_out:
++	if (path) {
++		ext4_ext_drop_refs(path);
++		kfree(path);
++	}
+ 	lb->first_pblock = 0;
+ 	return retval;
+ }
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index a9347fb..28aa2ed 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1804,12 +1804,8 @@ retry:
+ 	inode->i_fop = &ext4_dir_operations;
+ 	inode->i_size = EXT4_I(inode)->i_disksize = inode->i_sb->s_blocksize;
+ 	dir_block = ext4_bread (handle, inode, 0, 1, &err);
+-	if (!dir_block) {
+-		ext4_dec_count(handle, inode); /* is this nlink == 0? */
+-		ext4_mark_inode_dirty(handle, inode);
+-		iput (inode);
+-		goto out_stop;
+-	}
++	if (!dir_block)
++		goto out_clear_inode;
+ 	BUFFER_TRACE(dir_block, "get_write_access");
+ 	ext4_journal_get_write_access(handle, dir_block);
+ 	de = (struct ext4_dir_entry_2 *) dir_block->b_data;
+@@ -1832,7 +1828,8 @@ retry:
+ 	ext4_mark_inode_dirty(handle, inode);
+ 	err = ext4_add_entry (handle, dentry, inode);
+ 	if (err) {
+-		inode->i_nlink = 0;
++out_clear_inode:
++		clear_nlink(inode);
+ 		ext4_mark_inode_dirty(handle, inode);
+ 		iput (inode);
+ 		goto out_stop;
+@@ -2164,7 +2161,7 @@ static int ext4_unlink(struct inode * dir, struct dentry *dentry)
+ 	dir->i_ctime = dir->i_mtime = ext4_current_time(dir);
+ 	ext4_update_dx_flag(dir);
+ 	ext4_mark_inode_dirty(handle, dir);
+-	ext4_dec_count(handle, inode);
++	drop_nlink(inode);
+ 	if (!inode->i_nlink)
+ 		ext4_orphan_add(handle, inode);
+ 	inode->i_ctime = ext4_current_time(inode);
+@@ -2214,7 +2211,7 @@ retry:
+ 		err = __page_symlink(inode, symname, l,
+ 				mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS);
+ 		if (err) {
+-			ext4_dec_count(handle, inode);
++			clear_nlink(inode);
+ 			ext4_mark_inode_dirty(handle, inode);
+ 			iput (inode);
+ 			goto out_stop;
+@@ -2223,7 +2220,6 @@ retry:
+ 		inode->i_op = &ext4_fast_symlink_inode_operations;
+ 		memcpy((char*)&EXT4_I(inode)->i_data,symname,l);
+ 		inode->i_size = l-1;
+-		EXT4_I(inode)->i_flags &= ~EXT4_EXTENTS_FL;
+ 	}
+ 	EXT4_I(inode)->i_disksize = inode->i_size;
+ 	err = ext4_add_nondir(handle, dentry, inode);
+@@ -2407,7 +2403,7 @@ static int ext4_rename (struct inode * old_dir, struct dentry *old_dentry,
+ 		ext4_dec_count(handle, old_dir);
+ 		if (new_inode) {
+ 			/* checked empty_dir above, can't have another parent,
+-			 * ext3_dec_count() won't work for many-linked dirs */
++			 * ext4_dec_count() won't work for many-linked dirs */
+ 			new_inode->i_nlink = 0;
+ 		} else {
+ 			ext4_inc_count(handle, new_dir);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 9477a2b..e29efa0 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1037,6 +1037,7 @@ int ext4_group_extend(struct super_block *sb, struct ext4_super_block *es,
+ 		ext4_warning(sb, __FUNCTION__,
+ 			     "multiple resizers run on filesystem!");
+ 		unlock_super(sb);
++		ext4_journal_stop(handle);
+ 		err = -EBUSY;
+ 		goto exit_put;
+ 	}
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 96ee899..91a1bd6 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -314,9 +314,12 @@ static int proc_pid_schedstat(struct task_struct *task, char *buffer)
+ static int lstats_show_proc(struct seq_file *m, void *v)
+ {
+ 	int i;
+-	struct task_struct *task = m->private;
+-	seq_puts(m, "Latency Top version : v0.1\n");
++	struct inode *inode = m->private;
++	struct task_struct *task = get_proc_task(inode);
+ 
++	if (!task)
++		return -ESRCH;
++	seq_puts(m, "Latency Top version : v0.1\n");
+ 	for (i = 0; i < 32; i++) {
+ 		if (task->latency_record[i].backtrace[0]) {
+ 			int q;
+@@ -341,32 +344,24 @@ static int lstats_show_proc(struct seq_file *m, void *v)
+ 		}
+ 
+ 	}
++	put_task_struct(task);
+ 	return 0;
+ }
+ 
+ static int lstats_open(struct inode *inode, struct file *file)
+ {
+-	int ret;
+-	struct seq_file *m;
+-	struct task_struct *task = get_proc_task(inode);
+-
+-	ret = single_open(file, lstats_show_proc, NULL);
+-	if (!ret) {
+-		m = file->private_data;
+-		m->private = task;
+-	}
+-	return ret;
++	return single_open(file, lstats_show_proc, inode);
+ }
+ 
+ static ssize_t lstats_write(struct file *file, const char __user *buf,
+ 			    size_t count, loff_t *offs)
+ {
+-	struct seq_file *m;
+-	struct task_struct *task;
++	struct task_struct *task = get_proc_task(file->f_dentry->d_inode);
+ 
+-	m = file->private_data;
+-	task = m->private;
++	if (!task)
++		return -ESRCH;
+ 	clear_all_latency_tracing(task);
++	put_task_struct(task);
+ 
+ 	return count;
+ }
+diff --git a/fs/xfs/linux-2.6/xfs_super.c b/fs/xfs/linux-2.6/xfs_super.c
+index 21dfc9d..8831d95 100644
+--- a/fs/xfs/linux-2.6/xfs_super.c
++++ b/fs/xfs/linux-2.6/xfs_super.c
+@@ -171,7 +171,7 @@ xfs_parseargs(
+ 	char			*this_char, *value, *eov;
+ 	int			dsunit, dswidth, vol_dsunit, vol_dswidth;
+ 	int			iosize;
+-	int			ikeep = 0;
++	int			dmapi_implies_ikeep = 1;
+ 
+ 	args->flags |= XFSMNT_BARRIER;
+ 	args->flags2 |= XFSMNT2_COMPAT_IOSIZE;
+@@ -302,10 +302,10 @@ xfs_parseargs(
+ 		} else if (!strcmp(this_char, MNTOPT_NOBARRIER)) {
+ 			args->flags &= ~XFSMNT_BARRIER;
+ 		} else if (!strcmp(this_char, MNTOPT_IKEEP)) {
+-			ikeep = 1;
+-			args->flags &= ~XFSMNT_IDELETE;
++			args->flags |= XFSMNT_IKEEP;
+ 		} else if (!strcmp(this_char, MNTOPT_NOIKEEP)) {
+-			args->flags |= XFSMNT_IDELETE;
++			dmapi_implies_ikeep = 0;
++			args->flags &= ~XFSMNT_IKEEP;
+ 		} else if (!strcmp(this_char, MNTOPT_LARGEIO)) {
+ 			args->flags2 &= ~XFSMNT2_COMPAT_IOSIZE;
+ 		} else if (!strcmp(this_char, MNTOPT_NOLARGEIO)) {
+@@ -410,8 +410,8 @@ xfs_parseargs(
+ 	 * Note that if "ikeep" or "noikeep" mount options are
+ 	 * supplied, then they are honored.
+ 	 */
+-	if (!(args->flags & XFSMNT_DMAPI) && !ikeep)
+-		args->flags |= XFSMNT_IDELETE;
++	if ((args->flags & XFSMNT_DMAPI) && dmapi_implies_ikeep)
++		args->flags |= XFSMNT_IKEEP;
+ 
+ 	if ((args->flags & XFSMNT_NOALIGN) != XFSMNT_NOALIGN) {
+ 		if (dsunit) {
+@@ -446,6 +446,7 @@ xfs_showargs(
+ {
+ 	static struct proc_xfs_info xfs_info_set[] = {
+ 		/* the few simple ones we can get from the mount struct */
++		{ XFS_MOUNT_IKEEP,		"," MNTOPT_IKEEP },
+ 		{ XFS_MOUNT_WSYNC,		"," MNTOPT_WSYNC },
+ 		{ XFS_MOUNT_INO64,		"," MNTOPT_INO64 },
+ 		{ XFS_MOUNT_NOALIGN,		"," MNTOPT_NOALIGN },
+@@ -461,7 +462,6 @@ xfs_showargs(
+ 	};
+ 	static struct proc_xfs_info xfs_info_unset[] = {
+ 		/* the few simple ones we can get from the mount struct */
+-		{ XFS_MOUNT_IDELETE,		"," MNTOPT_IKEEP },
+ 		{ XFS_MOUNT_COMPAT_IOSIZE,	"," MNTOPT_LARGEIO },
+ 		{ XFS_MOUNT_BARRIER,		"," MNTOPT_NOBARRIER },
+ 		{ XFS_MOUNT_SMALL_INUMS,	"," MNTOPT_64BITINODE },
+diff --git a/fs/xfs/xfs_bit.c b/fs/xfs/xfs_bit.c
+index 4822884..fab0b6d 100644
+--- a/fs/xfs/xfs_bit.c
++++ b/fs/xfs/xfs_bit.c
+@@ -25,6 +25,109 @@
+  * XFS bit manipulation routines, used in non-realtime code.
+  */
+ 
++#ifndef HAVE_ARCH_HIGHBIT
++/*
++ * Index of high bit number in byte, -1 for none set, 0..7 otherwise.
++ */
++static const char xfs_highbit[256] = {
++       -1, 0, 1, 1, 2, 2, 2, 2,			/* 00 .. 07 */
++	3, 3, 3, 3, 3, 3, 3, 3,			/* 08 .. 0f */
++	4, 4, 4, 4, 4, 4, 4, 4,			/* 10 .. 17 */
++	4, 4, 4, 4, 4, 4, 4, 4,			/* 18 .. 1f */
++	5, 5, 5, 5, 5, 5, 5, 5,			/* 20 .. 27 */
++	5, 5, 5, 5, 5, 5, 5, 5,			/* 28 .. 2f */
++	5, 5, 5, 5, 5, 5, 5, 5,			/* 30 .. 37 */
++	5, 5, 5, 5, 5, 5, 5, 5,			/* 38 .. 3f */
++	6, 6, 6, 6, 6, 6, 6, 6,			/* 40 .. 47 */
++	6, 6, 6, 6, 6, 6, 6, 6,			/* 48 .. 4f */
++	6, 6, 6, 6, 6, 6, 6, 6,			/* 50 .. 57 */
++	6, 6, 6, 6, 6, 6, 6, 6,			/* 58 .. 5f */
++	6, 6, 6, 6, 6, 6, 6, 6,			/* 60 .. 67 */
++	6, 6, 6, 6, 6, 6, 6, 6,			/* 68 .. 6f */
++	6, 6, 6, 6, 6, 6, 6, 6,			/* 70 .. 77 */
++	6, 6, 6, 6, 6, 6, 6, 6,			/* 78 .. 7f */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* 80 .. 87 */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* 88 .. 8f */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* 90 .. 97 */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* 98 .. 9f */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* a0 .. a7 */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* a8 .. af */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* b0 .. b7 */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* b8 .. bf */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* c0 .. c7 */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* c8 .. cf */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* d0 .. d7 */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* d8 .. df */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* e0 .. e7 */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* e8 .. ef */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* f0 .. f7 */
++	7, 7, 7, 7, 7, 7, 7, 7,			/* f8 .. ff */
++};
++#endif
++
++/*
++ * xfs_highbit32: get high bit set out of 32-bit argument, -1 if none set.
++ */
++inline int
++xfs_highbit32(
++	__uint32_t	v)
++{
++#ifdef HAVE_ARCH_HIGHBIT
++	return highbit32(v);
++#else
++	int		i;
++
++	if (v & 0xffff0000)
++		if (v & 0xff000000)
++			i = 24;
++		else
++			i = 16;
++	else if (v & 0x0000ffff)
++		if (v & 0x0000ff00)
++			i = 8;
++		else
++			i = 0;
++	else
++		return -1;
++	return i + xfs_highbit[(v >> i) & 0xff];
++#endif
++}
++
++/*
++ * xfs_lowbit64: get low bit set out of 64-bit argument, -1 if none set.
++ */
++int
++xfs_lowbit64(
++	__uint64_t	v)
++{
++	__uint32_t	w = (__uint32_t)v;
++	int		n = 0;
++
++	if (w) {	/* lower bits */
++		n = ffs(w);
++	} else {	/* upper bits */
++		w = (__uint32_t)(v >> 32);
++		if (w && (n = ffs(w)))
++			n += 32;
++	}
++	return n - 1;
++}
++
++/*
++ * xfs_highbit64: get high bit set out of 64-bit argument, -1 if none set.
++ */
++int
++xfs_highbit64(
++	__uint64_t	v)
++{
++	__uint32_t	h = (__uint32_t)(v >> 32);
++
++	if (h)
++		return xfs_highbit32(h) + 32;
++	return xfs_highbit32((__uint32_t)v);
++}
++
++
+ /*
+  * Return whether bitmap is empty.
+  * Size is number of words in the bitmap, which is padded to word boundary
+diff --git a/fs/xfs/xfs_bit.h b/fs/xfs/xfs_bit.h
+index 325a007..082641a 100644
+--- a/fs/xfs/xfs_bit.h
++++ b/fs/xfs/xfs_bit.h
+@@ -47,30 +47,13 @@ static inline __uint64_t xfs_mask64lo(int n)
+ }
+ 
+ /* Get high bit set out of 32-bit argument, -1 if none set */
+-static inline int xfs_highbit32(__uint32_t v)
+-{
+-	return fls(v) - 1;
+-}
+-
+-/* Get high bit set out of 64-bit argument, -1 if none set */
+-static inline int xfs_highbit64(__uint64_t v)
+-{
+-	return fls64(v) - 1;
+-}
+-
+-/* Get low bit set out of 32-bit argument, -1 if none set */
+-static inline int xfs_lowbit32(__uint32_t v)
+-{
+-	__uint32_t t = v;
+-	return (t) ? find_first_bit((unsigned long *)&t, 32) : -1;
+-}
++extern int xfs_highbit32(__uint32_t v);
+ 
+ /* Get low bit set out of 64-bit argument, -1 if none set */
+-static inline int xfs_lowbit64(__uint64_t v)
+-{
+-	__uint64_t t = v;
+-	return (t) ? find_first_bit((unsigned long *)&t, 64) : -1;
+-}
++extern int xfs_lowbit64(__uint64_t v);
++
++/* Get high bit set out of 64-bit argument, -1 if none set */
++extern int xfs_highbit64(__uint64_t);
+ 
+ /* Return whether bitmap is empty (1 == empty) */
+ extern int xfs_bitmap_empty(uint *map, uint size);
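
As a quick sanity check on the bit helpers added above, the table-driven xfs_highbit32() is expected to agree with an fls()-style reference. A throwaway userspace sketch, where the reference implementation and the sample values are purely illustrative:

#include <stdio.h>

/* fls(v) - 1 style reference for the table-driven xfs_highbit32() above;
 * __builtin_clz() is undefined for 0, so that case is handled explicitly.
 */
static int ref_highbit32(unsigned int v)
{
	return v ? 31 - __builtin_clz(v) : -1;
}

int main(void)
{
	static const unsigned int samples[] = {
		0x0, 0x1, 0x80, 0x00012345, 0x80000000
	};
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("highbit32(%#010x) = %d\n",
		       samples[i], ref_highbit32(samples[i]));
	/* Expected: -1, 0, 7, 16, 31 -- the same results the byte-table
	 * narrowing in xfs_bit.c should produce. */
	return 0;
}
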
+diff --git a/fs/xfs/xfs_clnt.h b/fs/xfs/xfs_clnt.h
+index d16c1b9..d5d1e60 100644
+--- a/fs/xfs/xfs_clnt.h
++++ b/fs/xfs/xfs_clnt.h
+@@ -86,7 +86,7 @@ struct xfs_mount_args {
+ #define XFSMNT_NOUUID		0x01000000	/* Ignore fs uuid */
+ #define XFSMNT_DMAPI		0x02000000	/* enable dmapi/xdsm */
+ #define XFSMNT_BARRIER		0x04000000	/* use write barriers */
+-#define XFSMNT_IDELETE		0x08000000	/* inode cluster delete */
++#define XFSMNT_IKEEP		0x08000000	/* keep empty inode clusters */
+ #define XFSMNT_SWALLOC		0x10000000	/* turn on stripe width
+ 						 * allocation */
+ #define XFSMNT_DIRSYNC		0x40000000	/* sync creat,link,unlink,rename
+diff --git a/fs/xfs/xfs_ialloc.c b/fs/xfs/xfs_ialloc.c
+index c5836b9..db9d5fa 100644
+--- a/fs/xfs/xfs_ialloc.c
++++ b/fs/xfs/xfs_ialloc.c
+@@ -1053,7 +1053,7 @@ xfs_difree(
+ 	/*
+ 	 * When an inode cluster is free, it becomes eligible for removal
+ 	 */
+-	if ((mp->m_flags & XFS_MOUNT_IDELETE) &&
++	if (!(mp->m_flags & XFS_MOUNT_IKEEP) &&
+ 	    (rec.ir_freecount == XFS_IALLOC_INODES(mp))) {
+ 
+ 		*delete = 1;
+diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
+index f7c620e..1d8a472 100644
+--- a/fs/xfs/xfs_mount.h
++++ b/fs/xfs/xfs_mount.h
+@@ -366,7 +366,7 @@ typedef struct xfs_mount {
+ #define XFS_MOUNT_SMALL_INUMS	(1ULL << 15)	/* users wants 32bit inodes */
+ #define XFS_MOUNT_NOUUID	(1ULL << 16)	/* ignore uuid during mount */
+ #define XFS_MOUNT_BARRIER	(1ULL << 17)
+-#define XFS_MOUNT_IDELETE	(1ULL << 18)	/* delete empty inode clusters*/
++#define XFS_MOUNT_IKEEP		(1ULL << 18)	/* keep empty inode clusters */
+ #define XFS_MOUNT_SWALLOC	(1ULL << 19)	/* turn on stripe width
+ 						 * allocation */
+ #define XFS_MOUNT_RDONLY	(1ULL << 20)	/* read-only fs */
+diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c
+index ca83ddf..47082c0 100644
+--- a/fs/xfs/xfs_rtalloc.c
++++ b/fs/xfs/xfs_rtalloc.c
+@@ -73,6 +73,18 @@ STATIC int xfs_rtmodify_summary(xfs_mount_t *, xfs_trans_t *, int,
+  */
+ 
+ /*
++ * xfs_lowbit32: get low bit set out of 32-bit argument, -1 if none set.
++ */
++STATIC int
++xfs_lowbit32(
++	__uint32_t	v)
++{
++	if (v)
++		return ffs(v) - 1;
++	return -1;
++}
++
++/*
+  * Allocate space to the bitmap or summary file, and zero it, for growfs.
+  */
+ STATIC int				/* error */
+@@ -432,7 +444,6 @@ xfs_rtallocate_extent_near(
+ 	}
+ 	bbno = XFS_BITTOBLOCK(mp, bno);
+ 	i = 0;
+-	ASSERT(minlen != 0);
+ 	log2len = xfs_highbit32(minlen);
+ 	/*
+ 	 * Loop over all bitmap blocks (bbno + i is current block).
+@@ -601,8 +612,6 @@ xfs_rtallocate_extent_size(
+ 	xfs_suminfo_t	sum;		/* summary information for extents */
+ 
+ 	ASSERT(minlen % prod == 0 && maxlen % prod == 0);
+-	ASSERT(maxlen != 0);
+-
+ 	/*
+ 	 * Loop over all the levels starting with maxlen.
+ 	 * At each level, look at all the bitmap blocks, to see if there
+@@ -660,9 +669,6 @@ xfs_rtallocate_extent_size(
+ 		*rtblock = NULLRTBLOCK;
+ 		return 0;
+ 	}
+-	ASSERT(minlen != 0);
+-	ASSERT(maxlen != 0);
+-
+ 	/*
+ 	 * Loop over sizes, from maxlen down to minlen.
+ 	 * This time, when we do the allocations, allow smaller ones
+@@ -1948,7 +1954,6 @@ xfs_growfs_rt(
+ 				  nsbp->sb_blocksize * nsbp->sb_rextsize);
+ 		nsbp->sb_rextents = nsbp->sb_rblocks;
+ 		do_div(nsbp->sb_rextents, nsbp->sb_rextsize);
+-		ASSERT(nsbp->sb_rextents != 0);
+ 		nsbp->sb_rextslog = xfs_highbit32(nsbp->sb_rextents);
+ 		nrsumlevels = nmp->m_rsumlevels = nsbp->sb_rextslog + 1;
+ 		nrsumsize =
+diff --git a/fs/xfs/xfs_vfsops.c b/fs/xfs/xfs_vfsops.c
+index 413587f..7321304 100644
+--- a/fs/xfs/xfs_vfsops.c
++++ b/fs/xfs/xfs_vfsops.c
+@@ -281,8 +281,8 @@ xfs_start_flags(
+ 		mp->m_readio_log = mp->m_writeio_log = ap->iosizelog;
+ 	}
+ 
+-	if (ap->flags & XFSMNT_IDELETE)
+-		mp->m_flags |= XFS_MOUNT_IDELETE;
++	if (ap->flags & XFSMNT_IKEEP)
++		mp->m_flags |= XFS_MOUNT_IKEEP;
+ 	if (ap->flags & XFSMNT_DIRSYNC)
+ 		mp->m_flags |= XFS_MOUNT_DIRSYNC;
+ 	if (ap->flags & XFSMNT_ATTR2)
+diff --git a/include/asm-arm/arch-pxa/entry-macro.S b/include/asm-arm/arch-pxa/entry-macro.S
+index b7e7308..c145bb0 100644
+--- a/include/asm-arm/arch-pxa/entry-macro.S
++++ b/include/asm-arm/arch-pxa/entry-macro.S
+@@ -35,7 +35,7 @@
+ 1004:
+ 		mrc	p6, 0, \irqstat, c6, c0, 0	@ ICIP2
+ 		mrc	p6, 0, \irqnr, c7, c0, 0	@ ICMR2
+-		ands	\irqstat, \irqstat, \irqnr
++		ands	\irqnr, \irqstat, \irqnr
+ 		beq	1003f
+ 		rsb	\irqstat, \irqnr, #0
+ 		and	\irqstat, \irqstat, \irqnr
+diff --git a/include/asm-arm/arch-pxa/pxa-regs.h b/include/asm-arm/arch-pxa/pxa-regs.h
+index ac175b4..2357a73 100644
+--- a/include/asm-arm/arch-pxa/pxa-regs.h
++++ b/include/asm-arm/arch-pxa/pxa-regs.h
+@@ -520,6 +520,9 @@
+ #define MCCR_FSRIE	(1 << 1)	/* FIFO Service Request Interrupt Enable */
+ 
+ #define GCR		__REG(0x4050000C)  /* Global Control Register */
++#ifdef CONFIG_PXA3xx
++#define GCR_CLKBPB	(1 << 31)	/* Internal clock enable */
++#endif
+ #define GCR_nDMAEN	(1 << 24)	/* non DMA Enable */
+ #define GCR_CDONE_IE	(1 << 19)	/* Command Done Interrupt Enable */
+ #define GCR_SDONE_IE	(1 << 18)	/* Status Done Interrupt Enable */
+diff --git a/include/asm-arm/kexec.h b/include/asm-arm/kexec.h
+index 1ee17b6..47fe34d 100644
+--- a/include/asm-arm/kexec.h
++++ b/include/asm-arm/kexec.h
+@@ -8,7 +8,7 @@
+ /* Maximum address we can reach in physical address mode */
+ #define KEXEC_DESTINATION_MEMORY_LIMIT (-1UL)
+ /* Maximum address we can use for the control code buffer */
+-#define KEXEC_CONTROL_MEMORY_LIMIT TASK_SIZE
++#define KEXEC_CONTROL_MEMORY_LIMIT (-1UL)
+ 
+ #define KEXEC_CONTROL_CODE_SIZE	4096
+ 
+diff --git a/include/asm-arm/unaligned.h b/include/asm-arm/unaligned.h
+index 8431f6e..5db03cf 100644
+--- a/include/asm-arm/unaligned.h
++++ b/include/asm-arm/unaligned.h
+@@ -40,16 +40,16 @@ extern int __bug_unaligned_x(const void *ptr);
+  */
+ 
+ #define __get_unaligned_2_le(__p)					\
+-	(__p[0] | __p[1] << 8)
++	(unsigned int)(__p[0] | __p[1] << 8)
+ 
+ #define __get_unaligned_2_be(__p)					\
+-	(__p[0] << 8 | __p[1])
++	(unsigned int)(__p[0] << 8 | __p[1])
+ 
+ #define __get_unaligned_4_le(__p)					\
+-	(__p[0] | __p[1] << 8 | __p[2] << 16 | __p[3] << 24)
++	(unsigned int)(__p[0] | __p[1] << 8 | __p[2] << 16 | __p[3] << 24)
+ 
+ #define __get_unaligned_4_be(__p)					\
+-	(__p[0] << 24 | __p[1] << 16 | __p[2] << 8 | __p[3])
++	(unsigned int)(__p[0] << 24 | __p[1] << 16 | __p[2] << 8 | __p[3])
+ 
+ #define __get_unaligned_8_le(__p)					\
+ 	((unsigned long long)__get_unaligned_4_le((__p+4)) << 32 |	\
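
One practical effect of the new (unsigned int) casts above, whatever the original motivation, is that a 32-bit load with bit 31 set is no longer sign-extended when it feeds the 64-bit getters. A throwaway userspace check, with arbitrary byte values:

#include <stdio.h>

int main(void)
{
	const unsigned char p[4] = { 0x78, 0x56, 0x34, 0xd2 };
	unsigned int v = p[0] | p[1] << 8 | p[2] << 16 | (unsigned int)p[3] << 24;

	/* Widening the signed int result sign-extends; widening the
	 * (unsigned int) result, as the macros now do, does not. */
	unsigned long long without_cast = (long long)(int)v;
	unsigned long long with_cast = (unsigned long long)v;

	printf("%016llx\n%016llx\n", without_cast, with_cast);
	/* ffffffffd2345678 vs 00000000d2345678 */
	return 0;
}
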
+diff --git a/include/asm-avr32/pgtable.h b/include/asm-avr32/pgtable.h
+index 018f6e2..3ae7b54 100644
+--- a/include/asm-avr32/pgtable.h
++++ b/include/asm-avr32/pgtable.h
+@@ -157,6 +157,7 @@ extern struct page *empty_zero_page;
+ #define _PAGE_S(x)	_PAGE_NORMAL(x)
+ 
+ #define PAGE_COPY	_PAGE_P(PAGE_WRITE | PAGE_READ)
++#define PAGE_SHARED	_PAGE_S(PAGE_WRITE | PAGE_READ)
+ 
+ #ifndef __ASSEMBLY__
+ /*
+diff --git a/include/asm-blackfin/gptimers.h b/include/asm-blackfin/gptimers.h
+index 8265ea4..4f318f1 100644
+--- a/include/asm-blackfin/gptimers.h
++++ b/include/asm-blackfin/gptimers.h
+@@ -1,12 +1,11 @@
+ /*
+- * include/asm/bf5xx_timers.h
+- *
+- * This file contains the major Data structures and constants
+- * used for General Purpose Timer Implementation in BF5xx
++ * gptimers.h - Blackfin General Purpose Timer structs/defines/prototypes
+  *
++ * Copyright (c) 2005-2008 Analog Devices Inc.
+  * Copyright (C) 2005 John DeHority
+  * Copyright (C) 2006 Hella Aglaia GmbH (awe at aglaia-gmbh.de)
+  *
++ * Licensed under the GPL-2.
+  */
+ 
+ #ifndef _BLACKFIN_TIMERS_H_
+diff --git a/include/asm-blackfin/irq.h b/include/asm-blackfin/irq.h
+index 65480da..86b6783 100644
+--- a/include/asm-blackfin/irq.h
++++ b/include/asm-blackfin/irq.h
+@@ -67,4 +67,6 @@ static __inline__ int irq_canonicalize(int irq)
+ #define NO_IRQ ((unsigned int)(-1))
+ #endif
+ 
++#define SIC_SYSIRQ(irq)	(irq - (IRQ_CORETMR + 1))
++
+ #endif				/* _BFIN_IRQ_H_ */
+diff --git a/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h
+index 15dbc21..c0694ec 100644
+--- a/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf527/bfin_serial_5xx.h
+@@ -23,7 +23,6 @@
+ #define UART_GET_DLH(uart)	bfin_read16(((uart)->port.membase + OFFSET_DLH))
+ #define UART_GET_IIR(uart)      bfin_read16(((uart)->port.membase + OFFSET_IIR))
+ #define UART_GET_LCR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LCR))
+-#define UART_GET_LSR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LSR))
+ #define UART_GET_GCTL(uart)     bfin_read16(((uart)->port.membase + OFFSET_GCTL))
+ 
+ #define UART_PUT_CHAR(uart, v)   bfin_write16(((uart)->port.membase + OFFSET_THR), v)
+@@ -58,6 +57,7 @@
+ struct bfin_serial_port {
+ 	struct uart_port port;
+ 	unsigned int old_status;
++	unsigned int lsr;
+ #ifdef CONFIG_SERIAL_BFIN_DMA
+ 	int tx_done;
+ 	int tx_count;
+@@ -67,15 +67,31 @@ struct bfin_serial_port {
+ 	unsigned int tx_dma_channel;
+ 	unsigned int rx_dma_channel;
+ 	struct work_struct tx_dma_workqueue;
+-#else
+-	struct work_struct cts_workqueue;
+ #endif
+ #ifdef CONFIG_SERIAL_BFIN_CTSRTS
++	struct work_struct cts_workqueue;
+ 	int cts_pin;
+ 	int rts_pin;
+ #endif
+ };
+ 
++/* The hardware clears the LSR bits upon read, so we need to cache
++ * some of the more fun bits in software so they don't get lost
++ * when checking the LSR in other code paths (TX).
++ */
++static inline unsigned int UART_GET_LSR(struct bfin_serial_port *uart)
++{
++	unsigned int lsr = bfin_read16(uart->port.membase + OFFSET_LSR);
++	uart->lsr |= (lsr & (BI|FE|PE|OE));
++	return lsr | uart->lsr;
++}
++
++static inline void UART_CLEAR_LSR(struct bfin_serial_port *uart)
++{
++	uart->lsr = 0;
++	bfin_write16(uart->port.membase + OFFSET_LSR, -1);
++}
++
+ struct bfin_serial_port bfin_serial_ports[NR_PORTS];
+ struct bfin_serial_res {
+ 	unsigned long uart_base_addr;
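
The cached-LSR helpers added above deal with a status register whose error bits vanish on the first read. The same pattern can be sketched in plain C, with made-up bit values and a fake register standing in for the Blackfin hardware:

#include <stdio.h>

#define BI 0x10	/* break   -- example values, not the Blackfin masks */
#define FE 0x08	/* framing */
#define PE 0x04	/* parity  */
#define OE 0x02	/* overrun */

static unsigned int hw_lsr;	/* stand-in for the clear-on-read register */
static unsigned int cached_lsr;	/* software copy of the sticky error bits  */

static unsigned int read_hw_lsr(void)
{
	unsigned int v = hw_lsr;

	hw_lsr = 0;		/* hardware clears the bits on read */
	return v;
}

static unsigned int get_lsr(void)	/* mirrors UART_GET_LSR() above */
{
	unsigned int lsr = read_hw_lsr();

	cached_lsr |= lsr & (BI | FE | PE | OE);
	return lsr | cached_lsr;
}

static void clear_lsr(void)		/* mirrors UART_CLEAR_LSR() above */
{
	cached_lsr = 0;
	(void)read_hw_lsr();
}

int main(void)
{
	hw_lsr = OE;			/* an overrun error is latched       */
	(void)get_lsr();		/* the RX path reads (and caches) it */
	printf("%#x\n", get_lsr());	/* the TX path still sees OE: 0x2    */
	clear_lsr();
	printf("%#x\n", get_lsr());	/* 0 once explicitly cleared         */
	return 0;
}
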
+diff --git a/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h
+index 7871d43..b6f513b 100644
+--- a/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf533/bfin_serial_5xx.h
+@@ -23,7 +23,6 @@
+ #define UART_GET_DLH(uart)	bfin_read16(((uart)->port.membase + OFFSET_DLH))
+ #define UART_GET_IIR(uart)      bfin_read16(((uart)->port.membase + OFFSET_IIR))
+ #define UART_GET_LCR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LCR))
+-#define UART_GET_LSR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LSR))
+ #define UART_GET_GCTL(uart)     bfin_read16(((uart)->port.membase + OFFSET_GCTL))
+ 
+ #define UART_PUT_CHAR(uart,v)   bfin_write16(((uart)->port.membase + OFFSET_THR),v)
+@@ -46,6 +45,7 @@
+ struct bfin_serial_port {
+         struct uart_port        port;
+         unsigned int            old_status;
++	unsigned int lsr;
+ #ifdef CONFIG_SERIAL_BFIN_DMA
+ 	int			tx_done;
+ 	int			tx_count;
+@@ -56,14 +56,34 @@ struct bfin_serial_port {
+ 	unsigned int		rx_dma_channel;
+ 	struct work_struct	tx_dma_workqueue;
+ #else
+-	struct work_struct 	cts_workqueue;
++# if ANOMALY_05000230
++	unsigned int anomaly_threshold;
++# endif
+ #endif
+ #ifdef CONFIG_SERIAL_BFIN_CTSRTS
++	struct work_struct 	cts_workqueue;
+ 	int			cts_pin;
+ 	int			rts_pin;
+ #endif
+ };
+ 
++/* The hardware clears the LSR bits upon read, so we need to cache
++ * some of the more fun bits in software so they don't get lost
++ * when checking the LSR in other code paths (TX).
++ */
++static inline unsigned int UART_GET_LSR(struct bfin_serial_port *uart)
++{
++	unsigned int lsr = bfin_read16(uart->port.membase + OFFSET_LSR);
++	uart->lsr |= (lsr & (BI|FE|PE|OE));
++	return lsr | uart->lsr;
++}
++
++static inline void UART_CLEAR_LSR(struct bfin_serial_port *uart)
++{
++	uart->lsr = 0;
++	bfin_write16(uart->port.membase + OFFSET_LSR, -1);
++}
++
+ struct bfin_serial_port bfin_serial_ports[NR_PORTS];
+ struct bfin_serial_res {
+ 	unsigned long	uart_base_addr;
+diff --git a/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h
+index 86e45c3..8fc672d 100644
+--- a/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf537/bfin_serial_5xx.h
+@@ -23,7 +23,6 @@
+ #define UART_GET_DLH(uart)	bfin_read16(((uart)->port.membase + OFFSET_DLH))
+ #define UART_GET_IIR(uart)      bfin_read16(((uart)->port.membase + OFFSET_IIR))
+ #define UART_GET_LCR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LCR))
+-#define UART_GET_LSR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LSR))
+ #define UART_GET_GCTL(uart)     bfin_read16(((uart)->port.membase + OFFSET_GCTL))
+ 
+ #define UART_PUT_CHAR(uart,v)   bfin_write16(((uart)->port.membase + OFFSET_THR),v)
+@@ -58,6 +57,7 @@
+ struct bfin_serial_port {
+         struct uart_port        port;
+         unsigned int            old_status;
++	unsigned int lsr;
+ #ifdef CONFIG_SERIAL_BFIN_DMA
+ 	int			tx_done;
+ 	int			tx_count;
+@@ -67,15 +67,31 @@ struct bfin_serial_port {
+ 	unsigned int		tx_dma_channel;
+ 	unsigned int		rx_dma_channel;
+ 	struct work_struct	tx_dma_workqueue;
+-#else
+-	struct work_struct 	cts_workqueue;
+ #endif
+ #ifdef CONFIG_SERIAL_BFIN_CTSRTS
++	struct work_struct 	cts_workqueue;
+ 	int		cts_pin;
+ 	int 		rts_pin;
+ #endif
+ };
+ 
++/* The hardware clears the LSR bits upon read, so we need to cache
++ * some of the more fun bits in software so they don't get lost
++ * when checking the LSR in other code paths (TX).
++ */
++static inline unsigned int UART_GET_LSR(struct bfin_serial_port *uart)
++{
++	unsigned int lsr = bfin_read16(uart->port.membase + OFFSET_LSR);
++	uart->lsr |= (lsr & (BI|FE|PE|OE));
++	return lsr | uart->lsr;
++}
++
++static inline void UART_CLEAR_LSR(struct bfin_serial_port *uart)
++{
++	uart->lsr = 0;
++	bfin_write16(uart->port.membase + OFFSET_LSR, -1);
++}
++
+ struct bfin_serial_port bfin_serial_ports[NR_PORTS];
+ struct bfin_serial_res {
+ 	unsigned long	uart_base_addr;
+diff --git a/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h
+index 3770aa3..7e6339f 100644
+--- a/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf548/bfin_serial_5xx.h
+@@ -24,6 +24,8 @@
+ #define UART_GET_LCR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LCR))
+ #define UART_GET_LSR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LSR))
+ #define UART_GET_GCTL(uart)     bfin_read16(((uart)->port.membase + OFFSET_GCTL))
++#define UART_GET_MSR(uart)      bfin_read16(((uart)->port.membase + OFFSET_MSR))
++#define UART_GET_MCR(uart)      bfin_read16(((uart)->port.membase + OFFSET_MCR))
+ 
+ #define UART_PUT_CHAR(uart,v)   bfin_write16(((uart)->port.membase + OFFSET_THR),v)
+ #define UART_PUT_DLL(uart,v)    bfin_write16(((uart)->port.membase + OFFSET_DLL),v)
+@@ -32,7 +34,9 @@
+ #define UART_PUT_DLH(uart,v)    bfin_write16(((uart)->port.membase + OFFSET_DLH),v)
+ #define UART_PUT_LSR(uart,v)	bfin_write16(((uart)->port.membase + OFFSET_LSR),v)
+ #define UART_PUT_LCR(uart,v)    bfin_write16(((uart)->port.membase + OFFSET_LCR),v)
++#define UART_CLEAR_LSR(uart)    bfin_write16(((uart)->port.membase + OFFSET_LSR), -1)
+ #define UART_PUT_GCTL(uart,v)   bfin_write16(((uart)->port.membase + OFFSET_GCTL),v)
++#define UART_PUT_MCR(uart,v)    bfin_write16(((uart)->port.membase + OFFSET_MCR),v)
+ 
+ #if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS)
+ # define CONFIG_SERIAL_BFIN_CTSRTS
+@@ -68,10 +72,9 @@ struct bfin_serial_port {
+ 	unsigned int		tx_dma_channel;
+ 	unsigned int		rx_dma_channel;
+ 	struct work_struct	tx_dma_workqueue;
+-#else
+-	struct work_struct 	cts_workqueue;
+ #endif
+ #ifdef CONFIG_SERIAL_BFIN_CTSRTS
++	struct work_struct 	cts_workqueue;
+ 	int		cts_pin;
+ 	int 		rts_pin;
+ #endif
+diff --git a/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h b/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h
+index 7871d43..b6f513b 100644
+--- a/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h
++++ b/include/asm-blackfin/mach-bf561/bfin_serial_5xx.h
+@@ -23,7 +23,6 @@
+ #define UART_GET_DLH(uart)	bfin_read16(((uart)->port.membase + OFFSET_DLH))
+ #define UART_GET_IIR(uart)      bfin_read16(((uart)->port.membase + OFFSET_IIR))
+ #define UART_GET_LCR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LCR))
+-#define UART_GET_LSR(uart)      bfin_read16(((uart)->port.membase + OFFSET_LSR))
+ #define UART_GET_GCTL(uart)     bfin_read16(((uart)->port.membase + OFFSET_GCTL))
+ 
+ #define UART_PUT_CHAR(uart,v)   bfin_write16(((uart)->port.membase + OFFSET_THR),v)
+@@ -46,6 +45,7 @@
+ struct bfin_serial_port {
+         struct uart_port        port;
+         unsigned int            old_status;
++	unsigned int lsr;
+ #ifdef CONFIG_SERIAL_BFIN_DMA
+ 	int			tx_done;
+ 	int			tx_count;
+@@ -56,14 +56,34 @@ struct bfin_serial_port {
+ 	unsigned int		rx_dma_channel;
+ 	struct work_struct	tx_dma_workqueue;
+ #else
+-	struct work_struct 	cts_workqueue;
++# if ANOMALY_05000230
++	unsigned int anomaly_threshold;
++# endif
+ #endif
+ #ifdef CONFIG_SERIAL_BFIN_CTSRTS
++	struct work_struct 	cts_workqueue;
+ 	int			cts_pin;
+ 	int			rts_pin;
+ #endif
+ };
+ 
++/* The hardware clears the LSR bits upon read, so we need to cache
++ * some of the more fun bits in software so they don't get lost
++ * when checking the LSR in other code paths (TX).
++ */
++static inline unsigned int UART_GET_LSR(struct bfin_serial_port *uart)
++{
++	unsigned int lsr = bfin_read16(uart->port.membase + OFFSET_LSR);
++	uart->lsr |= (lsr & (BI|FE|PE|OE));
++	return lsr | uart->lsr;
++}
++
++static inline void UART_CLEAR_LSR(struct bfin_serial_port *uart)
++{
++	uart->lsr = 0;
++	bfin_write16(uart->port.membase + OFFSET_LSR, -1);
++}
++
+ struct bfin_serial_port bfin_serial_ports[NR_PORTS];
+ struct bfin_serial_res {
+ 	unsigned long	uart_base_addr;
+diff --git a/include/asm-blackfin/mach-bf561/blackfin.h b/include/asm-blackfin/mach-bf561/blackfin.h
+index 362617f..3a16df2 100644
+--- a/include/asm-blackfin/mach-bf561/blackfin.h
++++ b/include/asm-blackfin/mach-bf561/blackfin.h
+@@ -49,7 +49,8 @@
+ #define bfin_read_FIO_INEN() bfin_read_FIO0_INEN()
+ #define bfin_write_FIO_INEN(val) bfin_write_FIO0_INEN(val)
+ 
+-
++#define SIC_IWR0 SICA_IWR0
++#define SIC_IWR1 SICA_IWR1
+ #define SIC_IAR0 SICA_IAR0
+ #define bfin_write_SIC_IMASK0 bfin_write_SICA_IMASK0
+ #define bfin_write_SIC_IMASK1 bfin_write_SICA_IMASK1
+diff --git a/include/asm-blackfin/mach-bf561/cdefBF561.h b/include/asm-blackfin/mach-bf561/cdefBF561.h
+index d667816..1bc8d2f 100644
+--- a/include/asm-blackfin/mach-bf561/cdefBF561.h
++++ b/include/asm-blackfin/mach-bf561/cdefBF561.h
+@@ -559,6 +559,7 @@ static __inline__ void bfin_write_VR_CTL(unsigned int val)
+ #define bfin_write_PPI0_CONTROL(val)         bfin_write16(PPI0_CONTROL,val)
+ #define bfin_read_PPI0_STATUS()              bfin_read16(PPI0_STATUS)
+ #define bfin_write_PPI0_STATUS(val)          bfin_write16(PPI0_STATUS,val)
++#define bfin_clear_PPI0_STATUS()             bfin_read_PPI0_STATUS()
+ #define bfin_read_PPI0_COUNT()               bfin_read16(PPI0_COUNT)
+ #define bfin_write_PPI0_COUNT(val)           bfin_write16(PPI0_COUNT,val)
+ #define bfin_read_PPI0_DELAY()               bfin_read16(PPI0_DELAY)
+@@ -570,6 +571,7 @@ static __inline__ void bfin_write_VR_CTL(unsigned int val)
+ #define bfin_write_PPI1_CONTROL(val)         bfin_write16(PPI1_CONTROL,val)
+ #define bfin_read_PPI1_STATUS()              bfin_read16(PPI1_STATUS)
+ #define bfin_write_PPI1_STATUS(val)          bfin_write16(PPI1_STATUS,val)
++#define bfin_clear_PPI1_STATUS()             bfin_read_PPI1_STATUS()
+ #define bfin_read_PPI1_COUNT()               bfin_read16(PPI1_COUNT)
+ #define bfin_write_PPI1_COUNT(val)           bfin_write16(PPI1_COUNT,val)
+ #define bfin_read_PPI1_DELAY()               bfin_read16(PPI1_DELAY)
+diff --git a/include/asm-sh/cpu-sh3/cache.h b/include/asm-sh/cpu-sh3/cache.h
+index 56bd838..bee2d81 100644
+--- a/include/asm-sh/cpu-sh3/cache.h
++++ b/include/asm-sh/cpu-sh3/cache.h
+@@ -35,7 +35,7 @@
+     defined(CONFIG_CPU_SUBTYPE_SH7710) || \
+     defined(CONFIG_CPU_SUBTYPE_SH7720) || \
+     defined(CONFIG_CPU_SUBTYPE_SH7721)
+-#define CCR3	0xa40000b4
++#define CCR3_REG	0xa40000b4
+ #define CCR_CACHE_16KB  0x00010000
+ #define CCR_CACHE_32KB	0x00020000
+ #endif
+diff --git a/include/asm-sh/entry-macros.S b/include/asm-sh/entry-macros.S
+index 500030e..2dab0b8 100644
+--- a/include/asm-sh/entry-macros.S
++++ b/include/asm-sh/entry-macros.S
+@@ -12,7 +12,7 @@
+ 	not	r11, r11
+ 	stc	sr, r10
+ 	and	r11, r10
+-#ifdef CONFIG_HAS_SR_RB
++#ifdef CONFIG_CPU_HAS_SR_RB
+ 	stc	k_g_imask, r11
+ 	or	r11, r10
+ #endif
+@@ -20,7 +20,7 @@
+ 	.endm
+ 
+ 	.macro	get_current_thread_info, ti, tmp
+-#ifdef CONFIG_HAS_SR_RB
++#ifdef CONFIG_CPU_HAS_SR_RB
+ 	stc	r7_bank, \ti
+ #else
+ 	mov	#((THREAD_SIZE - 1) >> 10) ^ 0xff, \tmp
+diff --git a/include/asm-sh/sci.h b/include/asm-sh/sci.h
+deleted file mode 100644
+index 52e7366..0000000
+--- a/include/asm-sh/sci.h
++++ /dev/null
+@@ -1,34 +0,0 @@
+-#ifndef __ASM_SH_SCI_H
+-#define __ASM_SH_SCI_H
+-
+-#include <linux/serial_core.h>
+-
+-/*
+- * Generic header for SuperH SCI(F)
+- *
+- * Do not place SH-specific parts in here, sh64 and h8300 depend on this too.
+- */
+-
+-/* Offsets into the sci_port->irqs array */
+-enum {
+-	SCIx_ERI_IRQ,
+-	SCIx_RXI_IRQ,
+-	SCIx_TXI_IRQ,
+-	SCIx_BRI_IRQ,
+-	SCIx_NR_IRQS,
+-};
+-
+-/*
+- * Platform device specific platform_data struct
+- */
+-struct plat_sci_port {
+-	void __iomem	*membase;		/* io cookie */
+-	unsigned long	mapbase;		/* resource base */
+-	unsigned int	irqs[SCIx_NR_IRQS];	/* ERI, RXI, TXI, BRI */
+-	unsigned int	type;			/* SCI / SCIF / IRDA */
+-	upf_t		flags;			/* UPF_* flags */
+-};
+-
+-int early_sci_setup(struct uart_port *port);
+-
+-#endif /* __ASM_SH_SCI_H */
+diff --git a/include/asm-x86/futex.h b/include/asm-x86/futex.h
+index cd9f894..c9952ea 100644
+--- a/include/asm-x86/futex.h
++++ b/include/asm-x86/futex.h
+@@ -102,6 +102,13 @@ futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
+ static inline int
+ futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+ {
++
++#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_BSWAP)
++	/* Real i386 machines have no cmpxchg instruction */
++	if (boot_cpu_data.x86 == 3)
++		return -ENOSYS;
++#endif
++
+ 	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ 		return -EFAULT;
+ 
+diff --git a/include/asm-x86/lguest.h b/include/asm-x86/lguest.h
+index 4d9367b..9b17571 100644
+--- a/include/asm-x86/lguest.h
++++ b/include/asm-x86/lguest.h
+@@ -23,6 +23,17 @@
+ /* Found in switcher.S */
+ extern unsigned long default_idt_entries[];
+ 
++/* Declarations for definitions in lguest_guest.S */
++extern char lguest_noirq_start[], lguest_noirq_end[];
++extern const char lgstart_cli[], lgend_cli[];
++extern const char lgstart_sti[], lgend_sti[];
++extern const char lgstart_popf[], lgend_popf[];
++extern const char lgstart_pushf[], lgend_pushf[];
++extern const char lgstart_iret[], lgend_iret[];
++
++extern void lguest_iret(void);
++extern void lguest_init(void);
++
+ struct lguest_regs
+ {
+ 	/* Manually saved part. */
+diff --git a/include/asm-x86/nops.h b/include/asm-x86/nops.h
+index fec025c..e3b2bce 100644
+--- a/include/asm-x86/nops.h
++++ b/include/asm-x86/nops.h
+@@ -3,17 +3,29 @@
+ 
+ /* Define nops for use with alternative() */
+ 
+-/* generic versions from gas */
+-#define GENERIC_NOP1	".byte 0x90\n"
+-#define GENERIC_NOP2    	".byte 0x89,0xf6\n"
+-#define GENERIC_NOP3        ".byte 0x8d,0x76,0x00\n"
+-#define GENERIC_NOP4        ".byte 0x8d,0x74,0x26,0x00\n"
+-#define GENERIC_NOP5        GENERIC_NOP1 GENERIC_NOP4
+-#define GENERIC_NOP6	".byte 0x8d,0xb6,0x00,0x00,0x00,0x00\n"
+-#define GENERIC_NOP7	".byte 0x8d,0xb4,0x26,0x00,0x00,0x00,0x00\n"
+-#define GENERIC_NOP8	GENERIC_NOP1 GENERIC_NOP7
++/* generic versions from gas
++   1: nop
++   2: movl %esi,%esi
++   3: leal 0x00(%esi),%esi
++   4: leal 0x00(,%esi,1),%esi
++   6: leal 0x00000000(%esi),%esi
++   7: leal 0x00000000(,%esi,1),%esi
++*/
++#define GENERIC_NOP1 ".byte 0x90\n"
++#define GENERIC_NOP2 ".byte 0x89,0xf6\n"
++#define GENERIC_NOP3 ".byte 0x8d,0x76,0x00\n"
++#define GENERIC_NOP4 ".byte 0x8d,0x74,0x26,0x00\n"
++#define GENERIC_NOP5 GENERIC_NOP1 GENERIC_NOP4
++#define GENERIC_NOP6 ".byte 0x8d,0xb6,0x00,0x00,0x00,0x00\n"
++#define GENERIC_NOP7 ".byte 0x8d,0xb4,0x26,0x00,0x00,0x00,0x00\n"
++#define GENERIC_NOP8 GENERIC_NOP1 GENERIC_NOP7
+ 
+-/* Opteron 64bit nops */
++/* Opteron 64bit nops
++   1: nop
++   2: osp nop
++   3: osp osp nop
++   4: osp osp osp nop
++*/
+ #define K8_NOP1 GENERIC_NOP1
+ #define K8_NOP2	".byte 0x66,0x90\n"
+ #define K8_NOP3	".byte 0x66,0x66,0x90\n"
+@@ -23,19 +35,35 @@
+ #define K8_NOP7	K8_NOP4 K8_NOP3
+ #define K8_NOP8	K8_NOP4 K8_NOP4
+ 
+-/* K7 nops */
+-/* uses eax dependencies (arbitary choice) */
+-#define K7_NOP1  GENERIC_NOP1
++/* K7 nops
++   uses eax dependencies (arbitrary choice)
++   1: nop
++   2: movl %eax,%eax
++   3: leal (,%eax,1),%eax
++   4: leal 0x00(,%eax,1),%eax
++   6: leal 0x00000000(%eax),%eax
++   7: leal 0x00000000(,%eax,1),%eax
++*/
++#define K7_NOP1	GENERIC_NOP1
+ #define K7_NOP2	".byte 0x8b,0xc0\n"
+ #define K7_NOP3	".byte 0x8d,0x04,0x20\n"
+ #define K7_NOP4	".byte 0x8d,0x44,0x20,0x00\n"
+ #define K7_NOP5	K7_NOP4 ASM_NOP1
+ #define K7_NOP6	".byte 0x8d,0x80,0,0,0,0\n"
+-#define K7_NOP7        ".byte 0x8D,0x04,0x05,0,0,0,0\n"
+-#define K7_NOP8        K7_NOP7 ASM_NOP1
++#define K7_NOP7	".byte 0x8D,0x04,0x05,0,0,0,0\n"
++#define K7_NOP8	K7_NOP7 ASM_NOP1
+ 
+-/* P6 nops */
+-/* uses eax dependencies (Intel-recommended choice) */
++/* P6 nops
++   uses eax dependencies (Intel-recommended choice)
++   1: nop
++   2: osp nop
++   3: nopl (%eax)
++   4: nopl 0x00(%eax)
++   5: nopl 0x00(%eax,%eax,1)
++   6: osp nopl 0x00(%eax,%eax,1)
++   7: nopl 0x00000000(%eax)
++   8: nopl 0x00000000(%eax,%eax,1)
++*/
+ #define P6_NOP1	GENERIC_NOP1
+ #define P6_NOP2	".byte 0x66,0x90\n"
+ #define P6_NOP3	".byte 0x0f,0x1f,0x00\n"
+@@ -63,9 +91,7 @@
+ #define ASM_NOP6 K7_NOP6
+ #define ASM_NOP7 K7_NOP7
+ #define ASM_NOP8 K7_NOP8
+-#elif defined(CONFIG_M686) || defined(CONFIG_MPENTIUMII) || \
+-      defined(CONFIG_MPENTIUMIII) || defined(CONFIG_MPENTIUMM) || \
+-      defined(CONFIG_MCORE2) || defined(CONFIG_PENTIUM4)
++#elif defined(CONFIG_X86_P6_NOP)
+ #define ASM_NOP1 P6_NOP1
+ #define ASM_NOP2 P6_NOP2
+ #define ASM_NOP3 P6_NOP3
+diff --git a/include/asm-x86/page_64.h b/include/asm-x86/page_64.h
+index f7393bc..1435460 100644
+--- a/include/asm-x86/page_64.h
++++ b/include/asm-x86/page_64.h
+@@ -47,8 +47,12 @@
+ #define __PHYSICAL_MASK_SHIFT	46
+ #define __VIRTUAL_MASK_SHIFT	48
+ 
+-#define KERNEL_TEXT_SIZE  (40*1024*1024)
+-#define KERNEL_TEXT_START _AC(0xffffffff80000000, UL)
++/*
++ * Kernel image size is limited to 128 MB (see level2_kernel_pgt in
++ * arch/x86/kernel/head_64.S), and it is mapped here:
++ */
++#define KERNEL_IMAGE_SIZE	(128*1024*1024)
++#define KERNEL_IMAGE_START	_AC(0xffffffff80000000, UL)
+ 
+ #ifndef __ASSEMBLY__
+ void clear_page(void *page);
+diff --git a/include/asm-x86/pgtable_32.h b/include/asm-x86/pgtable_32.h
+index a842c72..b478efa 100644
+--- a/include/asm-x86/pgtable_32.h
++++ b/include/asm-x86/pgtable_32.h
+@@ -91,7 +91,9 @@ extern unsigned long pg0[];
+ /* To avoid harmful races, pmd_none(x) should check only the lower when PAE */
+ #define pmd_none(x)	(!(unsigned long)pmd_val(x))
+ #define pmd_present(x)	(pmd_val(x) & _PAGE_PRESENT)
+-#define	pmd_bad(x)	((pmd_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
++#define	pmd_bad(x)	((pmd_val(x) \
++			  & ~(PAGE_MASK | _PAGE_USER | _PAGE_PSE | _PAGE_NX)) \
++			 != _KERNPG_TABLE)
+ 
+ 
+ #define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
+diff --git a/include/asm-x86/pgtable_64.h b/include/asm-x86/pgtable_64.h
+index 0a0b77b..0a92583 100644
+--- a/include/asm-x86/pgtable_64.h
++++ b/include/asm-x86/pgtable_64.h
+@@ -153,12 +153,14 @@ static inline unsigned long pgd_bad(pgd_t pgd)
+ 
+ static inline unsigned long pud_bad(pud_t pud)
+ {
+-	return pud_val(pud) & ~(PTE_MASK | _KERNPG_TABLE | _PAGE_USER);
++	return pud_val(pud) &
++		~(PTE_MASK | _KERNPG_TABLE | _PAGE_USER | _PAGE_PSE | _PAGE_NX);
+ }
+ 
+ static inline unsigned long pmd_bad(pmd_t pmd)
+ {
+-	return pmd_val(pmd) & ~(PTE_MASK | _KERNPG_TABLE | _PAGE_USER);
++	return pmd_val(pmd) &
++		~(PTE_MASK | _KERNPG_TABLE | _PAGE_USER | _PAGE_PSE | _PAGE_NX);
+ }
+ 
+ #define pte_none(x)	(!pte_val(x))
+diff --git a/include/asm-x86/ptrace-abi.h b/include/asm-x86/ptrace-abi.h
+index 81a8ee4..f224eb3 100644
+--- a/include/asm-x86/ptrace-abi.h
++++ b/include/asm-x86/ptrace-abi.h
+@@ -89,13 +89,13 @@
+ */
+ struct ptrace_bts_config {
+ 	/* requested or actual size of BTS buffer in bytes */
+-	u32 size;
++	__u32 size;
+ 	/* bitmask of below flags */
+-	u32 flags;
++	__u32 flags;
+ 	/* buffer overflow signal */
+-	u32 signal;
++	__u32 signal;
+ 	/* actual size of bts_struct in bytes */
+-	u32 bts_size;
++	__u32 bts_size;
+ };
+ #endif
+ 
+diff --git a/include/linux/connector.h b/include/linux/connector.h
+index da6dd95..96a89d3 100644
+--- a/include/linux/connector.h
++++ b/include/linux/connector.h
+@@ -170,7 +170,5 @@ int cn_cb_equal(struct cb_id *, struct cb_id *);
+ 
+ void cn_queue_wrapper(struct work_struct *work);
+ 
+-extern int cn_already_initialized;
+-
+ #endif				/* __KERNEL__ */
+ #endif				/* __CONNECTOR_H */
+diff --git a/include/linux/elfcore-compat.h b/include/linux/elfcore-compat.h
+index 532d13a..0a90e1c 100644
+--- a/include/linux/elfcore-compat.h
++++ b/include/linux/elfcore-compat.h
+@@ -45,8 +45,8 @@ struct compat_elf_prpsinfo
+ 	char				pr_zomb;
+ 	char				pr_nice;
+ 	compat_ulong_t			pr_flag;
+-	compat_uid_t			pr_uid;
+-	compat_gid_t			pr_gid;
++	__compat_uid_t			pr_uid;
++	__compat_gid_t			pr_gid;
+ 	compat_pid_t			pr_pid, pr_ppid, pr_pgrp, pr_sid;
+ 	char				pr_fname[16];
+ 	char				pr_psargs[ELF_PRARGSZ];
+diff --git a/include/linux/ext4_fs_extents.h b/include/linux/ext4_fs_extents.h
+index 697da4b..1285c58 100644
+--- a/include/linux/ext4_fs_extents.h
++++ b/include/linux/ext4_fs_extents.h
+@@ -227,5 +227,6 @@ extern int ext4_ext_search_left(struct inode *, struct ext4_ext_path *,
+ 						ext4_lblk_t *, ext4_fsblk_t *);
+ extern int ext4_ext_search_right(struct inode *, struct ext4_ext_path *,
+ 						ext4_lblk_t *, ext4_fsblk_t *);
++extern void ext4_ext_drop_refs(struct ext4_ext_path *);
+ #endif /* _LINUX_EXT4_EXTENTS */
+ 
+diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
+index 2961ec7..4982998 100644
+--- a/include/linux/hardirq.h
++++ b/include/linux/hardirq.h
+@@ -109,6 +109,14 @@ static inline void account_system_vtime(struct task_struct *tsk)
+ }
+ #endif
+ 
++#if defined(CONFIG_PREEMPT_RCU) && defined(CONFIG_NO_HZ)
++extern void rcu_irq_enter(void);
++extern void rcu_irq_exit(void);
++#else
++# define rcu_irq_enter() do { } while (0)
++# define rcu_irq_exit() do { } while (0)
++#endif /* CONFIG_PREEMPT_RCU */
++
+ /*
+  * It is safe to do non-atomic ops on ->hardirq_context,
+  * because NMI handlers may not preempt and the ops are
+@@ -117,6 +125,7 @@ static inline void account_system_vtime(struct task_struct *tsk)
+  */
+ #define __irq_enter()					\
+ 	do {						\
++		rcu_irq_enter();			\
+ 		account_system_vtime(current);		\
+ 		add_preempt_count(HARDIRQ_OFFSET);	\
+ 		trace_hardirq_enter();			\
+@@ -135,6 +144,7 @@ extern void irq_enter(void);
+ 		trace_hardirq_exit();			\
+ 		account_system_vtime(current);		\
+ 		sub_preempt_count(HARDIRQ_OFFSET);	\
++		rcu_irq_exit();				\
+ 	} while (0)
+ 
+ /*
+diff --git a/include/linux/maple.h b/include/linux/maple.h
+index 3f01e2b..d31e36e 100644
+--- a/include/linux/maple.h
++++ b/include/linux/maple.h
+@@ -64,7 +64,6 @@ struct maple_driver {
+ 	int (*connect) (struct maple_device * dev);
+ 	void (*disconnect) (struct maple_device * dev);
+ 	struct device_driver drv;
+-	int registered;
+ };
+ 
+ void maple_getcond_callback(struct maple_device *dev,
+diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
+index b74b615..f0680c2 100644
+--- a/include/linux/netfilter.h
++++ b/include/linux/netfilter.h
+@@ -31,7 +31,7 @@
+ #define NF_VERDICT_QMASK 0xffff0000
+ #define NF_VERDICT_QBITS 16
+ 
+-#define NF_QUEUE_NR(x) (((x << NF_VERDICT_QBITS) & NF_VERDICT_QMASK) | NF_QUEUE)
++#define NF_QUEUE_NR(x) ((((x) << NF_VERDICT_BITS) & NF_VERDICT_QMASK) | NF_QUEUE)
+ 
+ /* only for userspace compatibility */
+ #ifndef __KERNEL__
+diff --git a/include/linux/rcuclassic.h b/include/linux/rcuclassic.h
+index 4d66242..b3dccd6 100644
+--- a/include/linux/rcuclassic.h
++++ b/include/linux/rcuclassic.h
+@@ -160,5 +160,8 @@ extern void rcu_restart_cpu(int cpu);
+ extern long rcu_batches_completed(void);
+ extern long rcu_batches_completed_bh(void);
+ 
++#define rcu_enter_nohz()	do { } while (0)
++#define rcu_exit_nohz()		do { } while (0)
++
+ #endif /* __KERNEL__ */
+ #endif /* __LINUX_RCUCLASSIC_H */
+diff --git a/include/linux/rcupreempt.h b/include/linux/rcupreempt.h
+index 60c2a03..01152ed 100644
+--- a/include/linux/rcupreempt.h
++++ b/include/linux/rcupreempt.h
+@@ -82,5 +82,27 @@ extern struct rcupreempt_trace *rcupreempt_trace_cpu(int cpu);
+ 
+ struct softirq_action;
+ 
++#ifdef CONFIG_NO_HZ
++DECLARE_PER_CPU(long, dynticks_progress_counter);
++
++static inline void rcu_enter_nohz(void)
++{
++	__get_cpu_var(dynticks_progress_counter)++;
++	WARN_ON(__get_cpu_var(dynticks_progress_counter) & 0x1);
++	mb();
++}
++
++static inline void rcu_exit_nohz(void)
++{
++	mb();
++	__get_cpu_var(dynticks_progress_counter)++;
++	WARN_ON(!(__get_cpu_var(dynticks_progress_counter) & 0x1));
++}
++
++#else /* CONFIG_NO_HZ */
++#define rcu_enter_nohz()	do { } while (0)
++#define rcu_exit_nohz()		do { } while (0)
++#endif /* CONFIG_NO_HZ */
++
+ #endif /* __KERNEL__ */
+ #endif /* __LINUX_RCUPREEMPT_H */
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index e217d18..2c9621f 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -242,6 +242,7 @@ struct task_struct;
+ 
+ extern void sched_init(void);
+ extern void sched_init_smp(void);
++extern asmlinkage void schedule_tail(struct task_struct *prev);
+ extern void init_idle(struct task_struct *idle, int cpu);
+ extern void init_idle_bootup_task(struct task_struct *idle);
+ 
+@@ -1189,7 +1190,7 @@ struct task_struct {
+ 	int softirq_context;
+ #endif
+ #ifdef CONFIG_LOCKDEP
+-# define MAX_LOCK_DEPTH 30UL
++# define MAX_LOCK_DEPTH 48UL
+ 	u64 curr_chain_key;
+ 	int lockdep_depth;
+ 	struct held_lock held_locks[MAX_LOCK_DEPTH];
+diff --git a/include/linux/serial_sci.h b/include/linux/serial_sci.h
+new file mode 100644
+index 0000000..893cc53
+--- /dev/null
++++ b/include/linux/serial_sci.h
+@@ -0,0 +1,32 @@
++#ifndef __LINUX_SERIAL_SCI_H
++#define __LINUX_SERIAL_SCI_H
++
++#include <linux/serial_core.h>
++
++/*
++ * Generic header for SuperH SCI(F) (used by sh/sh64/h8300 and related parts)
++ */
++
++/* Offsets into the sci_port->irqs array */
++enum {
++	SCIx_ERI_IRQ,
++	SCIx_RXI_IRQ,
++	SCIx_TXI_IRQ,
++	SCIx_BRI_IRQ,
++	SCIx_NR_IRQS,
++};
++
++/*
++ * Platform device specific platform_data struct
++ */
++struct plat_sci_port {
++	void __iomem	*membase;		/* io cookie */
++	unsigned long	mapbase;		/* resource base */
++	unsigned int	irqs[SCIx_NR_IRQS];	/* ERI, RXI, TXI, BRI */
++	unsigned int	type;			/* SCI / SCIF / IRDA */
++	upf_t		flags;			/* UPF_* flags */
++};
++
++int early_sci_setup(struct uart_port *port);
++
++#endif /* __LINUX_SERIAL_SCI_H */
+diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
+index 75370ec..9f1b4b4 100644
+--- a/include/linux/vmstat.h
++++ b/include/linux/vmstat.h
+@@ -246,8 +246,7 @@ static inline void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
+ static inline void __dec_zone_page_state(struct page *page,
+ 			enum zone_stat_item item)
+ {
+-	atomic_long_dec(&page_zone(page)->vm_stat[item]);
+-	atomic_long_dec(&vm_stat[item]);
++	__dec_zone_state(page_zone(page), item);
+ }
+ 
+ /*
+diff --git a/include/net/sctp/user.h b/include/net/sctp/user.h
+index 9462d6a..9619b9d 100644
+--- a/include/net/sctp/user.h
++++ b/include/net/sctp/user.h
+@@ -411,6 +411,7 @@ struct sctp_event_subscribe {
+ 	__u8 sctp_shutdown_event;
+ 	__u8 sctp_partial_delivery_event;
+ 	__u8 sctp_adaptation_layer_event;
++	__u8 sctp_authentication_event;
+ };
+ 
+ /*
+@@ -587,7 +588,7 @@ struct sctp_authchunk {
+  * endpoint requires the peer to use.
+ */
+ struct sctp_hmacalgo {
+-	__u16		shmac_num_idents;
++	__u32		shmac_num_idents;
+ 	__u16		shmac_idents[];
+ };
+ 
+@@ -600,7 +601,7 @@ struct sctp_hmacalgo {
+ struct sctp_authkey {
+ 	sctp_assoc_t	sca_assoc_id;
+ 	__u16		sca_keynumber;
+-	__u16		sca_keylen;
++	__u16		sca_keylength;
+ 	__u8		sca_key[];
+ };
+ 
+@@ -693,8 +694,9 @@ struct sctp_status {
+  * the peer requires to be received authenticated only.
+  */
+ struct sctp_authchunks {
+-	sctp_assoc_t            gauth_assoc_id;
+-	uint8_t                 gauth_chunks[];
++	sctp_assoc_t	gauth_assoc_id;
++	__u32		gauth_number_of_chunks;
++	uint8_t		gauth_chunks[];
+ };
+ 
+ /*
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 2eeea9a..10c4930 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -170,7 +170,9 @@ void audit_panic(const char *message)
+ 			printk(KERN_ERR "audit: %s\n", message);
+ 		break;
+ 	case AUDIT_FAIL_PANIC:
+-		panic("audit: %s\n", message);
++		/* test audit_pid since printk is always lossy, why bother? */
++		if (audit_pid)
++			panic("audit: %s\n", message);
+ 		break;
+ 	}
+ }
+@@ -352,6 +354,7 @@ static int kauditd_thread(void *dummy)
+ 				if (err < 0) {
+ 					BUG_ON(err != -ECONNREFUSED); /* Shoudn't happen */
+ 					printk(KERN_ERR "audit: *NO* daemon at audit_pid=%d\n", audit_pid);
++					audit_log_lost("auditd disappeared\n");
+ 					audit_pid = 0;
+ 				}
+ 			} else {
+@@ -1350,17 +1353,19 @@ void audit_log_end(struct audit_buffer *ab)
+ 	if (!audit_rate_check()) {
+ 		audit_log_lost("rate limit exceeded");
+ 	} else {
++		struct nlmsghdr *nlh = nlmsg_hdr(ab->skb);
+ 		if (audit_pid) {
+-			struct nlmsghdr *nlh = nlmsg_hdr(ab->skb);
+ 			nlh->nlmsg_len = ab->skb->len - NLMSG_SPACE(0);
+ 			skb_queue_tail(&audit_skb_queue, ab->skb);
+ 			ab->skb = NULL;
+ 			wake_up_interruptible(&kauditd_wait);
+-		} else if (printk_ratelimit()) {
+-			struct nlmsghdr *nlh = nlmsg_hdr(ab->skb);
+-			printk(KERN_NOTICE "type=%d %s\n", nlh->nlmsg_type, ab->skb->data + NLMSG_SPACE(0));
+-		} else {
+-			audit_log_lost("printk limit exceeded\n");
++		} else if (nlh->nlmsg_type != AUDIT_EOE) {
++			if (printk_ratelimit()) {
++				printk(KERN_NOTICE "type=%d %s\n",
++					nlh->nlmsg_type,
++					ab->skb->data + NLMSG_SPACE(0));
++			} else
++				audit_log_lost("printk limit exceeded\n");
+ 		}
+ 	}
+ 	audit_buffer_free(ab);
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 2087d6d..782262e 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1070,7 +1070,7 @@ static int audit_log_single_execve_arg(struct audit_context *context,
+ 		 * so we can be sure nothing was lost.
+ 		 */
+ 		if ((i == 0) && (too_long))
+-			audit_log_format(*ab, "a%d_len=%ld ", arg_num,
++			audit_log_format(*ab, "a%d_len=%zu ", arg_num,
+ 					 has_cntl ? 2*len : len);
+ 
+ 		/*
+diff --git a/kernel/lockdep.c b/kernel/lockdep.c
+index 3574379..81a4e4a 100644
+--- a/kernel/lockdep.c
++++ b/kernel/lockdep.c
+@@ -779,6 +779,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
+ 	 * parallel walking of the hash-list safe:
+ 	 */
+ 	list_add_tail_rcu(&class->hash_entry, hash_head);
++	/*
++	 * Add it to the global list of classes:
++	 */
++	list_add_tail_rcu(&class->lock_entry, &all_lock_classes);
+ 
+ 	if (verbose(class)) {
+ 		graph_unlock();
+@@ -2282,10 +2286,6 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
+ 			return 0;
+ 		break;
+ 	case LOCK_USED:
+-		/*
+-		 * Add it to the global list of classes:
+-		 */
+-		list_add_tail_rcu(&this->class->lock_entry, &all_lock_classes);
+ 		debug_atomic_dec(&nr_unused_locks);
+ 		break;
+ 	default:
+diff --git a/kernel/printk.c b/kernel/printk.c
+index bee3610..9adc2a4 100644
+--- a/kernel/printk.c
++++ b/kernel/printk.c
+@@ -666,7 +666,7 @@ asmlinkage int vprintk(const char *fmt, va_list args)
+ 	}
+ 	/* Emit the output into the temporary buffer */
+ 	printed_len += vscnprintf(printk_buf + printed_len,
+-				  sizeof(printk_buf), fmt, args);
++				  sizeof(printk_buf) - printed_len, fmt, args);
+ 
+ 	/*
+ 	 * Copy the output into log_buf.  If the caller didn't provide
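
The vprintk() change above is the usual fix for appending into a fixed buffer: the size handed to vscnprintf() has to shrink by what is already in the buffer. A minimal userspace version of the pattern, with an invented buffer and helper:

#include <stdarg.h>
#include <stdio.h>

static char buf[64];
static size_t used;

static void append(const char *fmt, ...)
{
	va_list args;
	int n;

	va_start(args, fmt);
	/* Passing sizeof(buf) here instead of the remaining space is the
	 * bug the hunk above fixes. */
	n = vsnprintf(buf + used, sizeof(buf) - used, fmt, args);
	va_end(args);

	if (n > 0)
		used += (size_t)n < sizeof(buf) - used
			? (size_t)n : sizeof(buf) - used - 1;
}

int main(void)
{
	append("<%d>", 6);
	append("hello %s\n", "world");
	printf("%s", buf);	/* "<6>hello world" */
	return 0;
}
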
+diff --git a/kernel/rcupreempt.c b/kernel/rcupreempt.c
+index 987cfb7..e951701 100644
+--- a/kernel/rcupreempt.c
++++ b/kernel/rcupreempt.c
+@@ -23,6 +23,10 @@
+  *		to Suparna Bhattacharya for pushing me completely away
+  *		from atomic instructions on the read side.
+  *
++ *  - Added handling of Dynamic Ticks
++ *      Copyright 2007 - Paul E. McKenney <paulmck at us.ibm.com>
++ *                     - Steven Rostedt <srostedt at redhat.com>
++ *
+  * Papers:  http://www.rdrop.com/users/paulmck/RCU
+  *
+  * Design Document: http://lwn.net/Articles/253651/
+@@ -409,6 +413,212 @@ static void __rcu_advance_callbacks(struct rcu_data *rdp)
+ 	}
+ }
+ 
++#ifdef CONFIG_NO_HZ
++
++DEFINE_PER_CPU(long, dynticks_progress_counter) = 1;
++static DEFINE_PER_CPU(long, rcu_dyntick_snapshot);
++static DEFINE_PER_CPU(int, rcu_update_flag);
++
++/**
++ * rcu_irq_enter - Called from Hard irq handlers and NMI/SMI.
++ *
++ * If the CPU was idle with dynamic ticks active, this updates the
++ * dynticks_progress_counter to let the RCU handling know that the
++ * CPU is active.
++ */
++void rcu_irq_enter(void)
++{
++	int cpu = smp_processor_id();
++
++	if (per_cpu(rcu_update_flag, cpu))
++		per_cpu(rcu_update_flag, cpu)++;
++
++	/*
++	 * Only update if we are coming from a stopped ticks mode
++	 * (dynticks_progress_counter is even).
++	 */
++	if (!in_interrupt() &&
++	    (per_cpu(dynticks_progress_counter, cpu) & 0x1) == 0) {
++		/*
++		 * The following might seem like we could have a race
++		 * with NMI/SMIs. But this really isn't a problem.
++		 * Here we do a read/modify/write, and the race happens
++		 * when an NMI/SMI comes in after the read and before
++		 * the write. But NMI/SMIs will increment this counter
++		 * twice before returning, so the zero bit will not
++		 * be corrupted by the NMI/SMI which is the most important
++		 * part.
++		 *
++		 * The only thing is that we would bring back the counter
++		 * to a position that it was in during the NMI/SMI.
++		 * But the zero bit would be set, so the rest of the
++		 * counter would again be ignored.
++		 *
++		 * On return from the IRQ, the counter may have the zero
++		 * bit be 0 and the counter the same as the return from
++		 * the NMI/SMI. If the state machine was so unlucky as to
++		 * see that, it still doesn't matter, since all
++		 * RCU read-side critical sections on this CPU would
++		 * have already completed.
++		 */
++		per_cpu(dynticks_progress_counter, cpu)++;
++		/*
++		 * The following memory barrier ensures that any
++		 * rcu_read_lock() primitives in the irq handler
++		 * are seen by other CPUs to follow the above
++		 * increment to dynticks_progress_counter. This is
++		 * required in order for other CPUs to correctly
++		 * determine when it is safe to advance the RCU
++		 * grace-period state machine.
++		 */
++		smp_mb(); /* see above block comment. */
++		/*
++		 * Since we can't determine the dynamic tick mode from
++		 * the dynticks_progress_counter after this routine,
++		 * we use a second flag to acknowledge that we came
++		 * from an idle state with ticks stopped.
++		 */
++		per_cpu(rcu_update_flag, cpu)++;
++		/*
++		 * If we take an NMI/SMI now, they will also increment
++		 * the rcu_update_flag, and will not update the
++		 * dynticks_progress_counter on exit. That is for
++		 * this IRQ to do.
++		 */
++	}
++}
++
++/**
++ * rcu_irq_exit - Called from exiting Hard irq context.
++ *
++ * If the CPU was idle with dynamic ticks active, update the
++ * dynticks_progress_counter to let the RCU handling be
++ * aware that the CPU is going back to idle with no ticks.
++ */
++void rcu_irq_exit(void)
++{
++	int cpu = smp_processor_id();
++
++	/*
++	 * rcu_update_flag is set if we interrupted the CPU
++	 * when it was idle with ticks stopped.
++	 * Once this occurs, we keep track of interrupt nesting
++	 * because an NMI/SMI could also come in, and we still
++	 * only want the IRQ that started the increment of the
++	 * dynticks_progress_counter to be the one that modifies
++	 * it on exit.
++	 */
++	if (per_cpu(rcu_update_flag, cpu)) {
++		if (--per_cpu(rcu_update_flag, cpu))
++			return;
++
++		/* This must match the interrupt nesting */
++		WARN_ON(in_interrupt());
++
++		/*
++		 * If an NMI/SMI happens now we are still
++		 * protected by the dynticks_progress_counter being odd.
++		 */
++
++		/*
++		 * The following memory barrier ensures that any
++		 * rcu_read_unlock() primitives in the irq handler
++		 * are seen by other CPUs to precede the following
++		 * increment to dynticks_progress_counter. This
++		 * is required in order for other CPUs to determine
++		 * when it is safe to advance the RCU grace-period
++		 * state machine.
++		 */
++		smp_mb(); /* see above block comment. */
++		per_cpu(dynticks_progress_counter, cpu)++;
++		WARN_ON(per_cpu(dynticks_progress_counter, cpu) & 0x1);
++	}
++}
++
++static void dyntick_save_progress_counter(int cpu)
++{
++	per_cpu(rcu_dyntick_snapshot, cpu) =
++		per_cpu(dynticks_progress_counter, cpu);
++}
++
++static inline int
++rcu_try_flip_waitack_needed(int cpu)
++{
++	long curr;
++	long snap;
++
++	curr = per_cpu(dynticks_progress_counter, cpu);
++	snap = per_cpu(rcu_dyntick_snapshot, cpu);
++	smp_mb(); /* force ordering with cpu entering/leaving dynticks. */
++
++	/*
++	 * If the CPU remained in dynticks mode for the entire time
++	 * and didn't take any interrupts, NMIs, SMIs, or whatever,
++	 * then it cannot be in the middle of an rcu_read_lock(), so
++	 * the next rcu_read_lock() it executes must use the new value
++	 * of the counter.  So we can safely pretend that this CPU
++	 * already acknowledged the counter.
++	 */
++
++	if ((curr == snap) && ((curr & 0x1) == 0))
++		return 0;
++
++	/*
++	 * If the CPU passed through or entered a dynticks idle phase with
++	 * no active irq handlers, then, as above, we can safely pretend
++	 * that this CPU already acknowledged the counter.
++	 */
++
++	if ((curr - snap) > 2 || (snap & 0x1) == 0)
++		return 0;
++
++	/* We need this CPU to explicitly acknowledge the counter flip. */
++
++	return 1;
++}
++
++static inline int
++rcu_try_flip_waitmb_needed(int cpu)
++{
++	long curr;
++	long snap;
++
++	curr = per_cpu(dynticks_progress_counter, cpu);
++	snap = per_cpu(rcu_dyntick_snapshot, cpu);
++	smp_mb(); /* force ordering with cpu entering/leaving dynticks. */
++
++	/*
++	 * If the CPU remained in dynticks mode for the entire time
++	 * and didn't take any interrupts, NMIs, SMIs, or whatever,
++	 * then it cannot have executed an RCU read-side critical section
++	 * during that time, so there is no need for it to execute a
++	 * memory barrier.
++	 */
++
++	if ((curr == snap) && ((curr & 0x1) == 0))
++		return 0;
++
++	/*
++	 * If the CPU either entered or exited an outermost interrupt,
++	 * SMI, NMI, or whatever handler, then we know that it executed
++	 * a memory barrier when doing so.  So we don't need another one.
++	 */
++	if (curr != snap)
++		return 0;
++
++	/* We need the CPU to execute a memory barrier. */
++
++	return 1;
++}
++
++#else /* !CONFIG_NO_HZ */
++
++# define dyntick_save_progress_counter(cpu)	do { } while (0)
++# define rcu_try_flip_waitack_needed(cpu)	(1)
++# define rcu_try_flip_waitmb_needed(cpu)	(1)
++
++#endif /* CONFIG_NO_HZ */
++
+ /*
+  * Get here when RCU is idle.  Decide whether we need to
+  * move out of idle state, and return non-zero if so.
+@@ -447,8 +657,10 @@ rcu_try_flip_idle(void)
+ 
+ 	/* Now ask each CPU for acknowledgement of the flip. */
+ 
+-	for_each_cpu_mask(cpu, rcu_cpu_online_map)
++	for_each_cpu_mask(cpu, rcu_cpu_online_map) {
+ 		per_cpu(rcu_flip_flag, cpu) = rcu_flipped;
++		dyntick_save_progress_counter(cpu);
++	}
+ 
+ 	return 1;
+ }
+@@ -464,7 +676,8 @@ rcu_try_flip_waitack(void)
+ 
+ 	RCU_TRACE_ME(rcupreempt_trace_try_flip_a1);
+ 	for_each_cpu_mask(cpu, rcu_cpu_online_map)
+-		if (per_cpu(rcu_flip_flag, cpu) != rcu_flip_seen) {
++		if (rcu_try_flip_waitack_needed(cpu) &&
++		    per_cpu(rcu_flip_flag, cpu) != rcu_flip_seen) {
+ 			RCU_TRACE_ME(rcupreempt_trace_try_flip_ae1);
+ 			return 0;
+ 		}
+@@ -509,8 +722,10 @@ rcu_try_flip_waitzero(void)
+ 	smp_mb();  /*  ^^^^^^^^^^^^ */
+ 
+ 	/* Call for a memory barrier from each CPU. */
+-	for_each_cpu_mask(cpu, rcu_cpu_online_map)
++	for_each_cpu_mask(cpu, rcu_cpu_online_map) {
+ 		per_cpu(rcu_mb_flag, cpu) = rcu_mb_needed;
++		dyntick_save_progress_counter(cpu);
++	}
+ 
+ 	RCU_TRACE_ME(rcupreempt_trace_try_flip_z2);
+ 	return 1;
+@@ -528,7 +743,8 @@ rcu_try_flip_waitmb(void)
+ 
+ 	RCU_TRACE_ME(rcupreempt_trace_try_flip_m1);
+ 	for_each_cpu_mask(cpu, rcu_cpu_online_map)
+-		if (per_cpu(rcu_mb_flag, cpu) != rcu_mb_done) {
++		if (rcu_try_flip_waitmb_needed(cpu) &&
++		    per_cpu(rcu_mb_flag, cpu) != rcu_mb_done) {
+ 			RCU_TRACE_ME(rcupreempt_trace_try_flip_me1);
+ 			return 0;
+ 		}
+@@ -702,8 +918,9 @@ void rcu_offline_cpu(int cpu)
+ 	 * fix.
+ 	 */
+ 
++	local_irq_save(flags);
+ 	rdp = RCU_DATA_ME();
+-	spin_lock_irqsave(&rdp->lock, flags);
++	spin_lock(&rdp->lock);
+ 	*rdp->nexttail = list;
+ 	if (list)
+ 		rdp->nexttail = tail;
+@@ -735,9 +952,11 @@ static void rcu_process_callbacks(struct softirq_action *unused)
+ {
+ 	unsigned long flags;
+ 	struct rcu_head *next, *list;
+-	struct rcu_data *rdp = RCU_DATA_ME();
++	struct rcu_data *rdp;
+ 
+-	spin_lock_irqsave(&rdp->lock, flags);
++	local_irq_save(flags);
++	rdp = RCU_DATA_ME();
++	spin_lock(&rdp->lock);
+ 	list = rdp->donelist;
+ 	if (list == NULL) {
+ 		spin_unlock_irqrestore(&rdp->lock, flags);
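
Stripped of the NMI/SMI handling, the dynticks hooks added above come down to a per-CPU counter that is even while the CPU sleeps with ticks stopped and odd while it runs, so the grace-period machinery can skip idle CPUs. A toy userspace model of just that invariant; the names and the simplified check below are illustrative, not the kernel's full waitack/waitmb logic:

#include <stdio.h>

struct fake_cpu {
	long dynticks;	/* even: idle with ticks stopped, odd: active */
	long snapshot;	/* value recorded when a grace period starts  */
};

/* rcu_enter_nohz()/rcu_exit_nohz() each do one more increment, flipping
 * the parity; an irq that wakes a stopped-tick CPU does the same. */
static void enter_nohz(struct fake_cpu *c)
{
	c->dynticks++;			/* odd -> even */
}

static void irq_enter_idle(struct fake_cpu *c)
{
	if ((c->dynticks & 1) == 0)	/* interrupting a stopped-tick CPU */
		c->dynticks++;		/* even -> odd */
}

static void take_snapshot(struct fake_cpu *c)
{
	c->snapshot = c->dynticks;
}

/* Simplified: a CPU whose counter is even and unchanged since the snapshot
 * sat in dynticks idle the whole time, so it cannot be inside an RCU
 * read-side critical section and need not acknowledge the counter flip. */
static int ack_needed(const struct fake_cpu *c)
{
	return !(c->dynticks == c->snapshot && (c->dynticks & 1) == 0);
}

int main(void)
{
	struct fake_cpu cpu = { .dynticks = 1 };	/* boots active (odd) */

	take_snapshot(&cpu);
	printf("busy cpu needs ack:  %d\n", ack_needed(&cpu));	/* 1 */

	enter_nohz(&cpu);				/* ticks stopped */
	take_snapshot(&cpu);
	printf("idle cpu needs ack:  %d\n", ack_needed(&cpu));	/* 0 */

	irq_enter_idle(&cpu);				/* an irq arrives */
	printf("after irq needs ack: %d\n", ack_needed(&cpu));	/* 1 */
	return 0;
}
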
+diff --git a/kernel/sched.c b/kernel/sched.c
+index b387a8d..f06950c 100644
+--- a/kernel/sched.c
++++ b/kernel/sched.c
+@@ -668,6 +668,8 @@ const_debug unsigned int sysctl_sched_nr_migrate = 32;
+  */
+ unsigned int sysctl_sched_rt_period = 1000000;
+ 
++static __read_mostly int scheduler_running;
++
+ /*
+  * part of the period that we allow rt tasks to run in us.
+  * default: 0.95s
+@@ -689,14 +691,16 @@ unsigned long long cpu_clock(int cpu)
+ 	unsigned long flags;
+ 	struct rq *rq;
+ 
+-	local_irq_save(flags);
+-	rq = cpu_rq(cpu);
+ 	/*
+ 	 * Only call sched_clock() if the scheduler has already been
+ 	 * initialized (some code might call cpu_clock() very early):
+ 	 */
+-	if (rq->idle)
+-		update_rq_clock(rq);
++	if (unlikely(!scheduler_running))
++		return 0;
++
++	local_irq_save(flags);
++	rq = cpu_rq(cpu);
++	update_rq_clock(rq);
+ 	now = rq->clock;
+ 	local_irq_restore(flags);
+ 
+@@ -3885,7 +3889,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev)
+ asmlinkage void __sched schedule(void)
+ {
+ 	struct task_struct *prev, *next;
+-	long *switch_count;
++	unsigned long *switch_count;
+ 	struct rq *rq;
+ 	int cpu;
+ 
+@@ -7284,6 +7288,8 @@ void __init sched_init(void)
+ 	 * During early bootup we pretend to be a normal task:
+ 	 */
+ 	current->sched_class = &fair_sched_class;
++
++	scheduler_running = 1;
+ }
+ 
+ #ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
+diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
+index 6c091d6..c8e6492 100644
+--- a/kernel/sched_fair.c
++++ b/kernel/sched_fair.c
+@@ -202,17 +202,12 @@ static struct sched_entity *__pick_next_entity(struct cfs_rq *cfs_rq)
+ 
+ static inline struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq)
+ {
+-	struct rb_node **link = &cfs_rq->tasks_timeline.rb_node;
+-	struct sched_entity *se = NULL;
+-	struct rb_node *parent;
++	struct rb_node *last = rb_last(&cfs_rq->tasks_timeline);
+ 
+-	while (*link) {
+-		parent = *link;
+-		se = rb_entry(parent, struct sched_entity, run_node);
+-		link = &parent->rb_right;
+-	}
++	if (!last)
++		return NULL;
+ 
+-	return se;
++	return rb_entry(last, struct sched_entity, run_node);
+ }
+ 
+ /**************************************************************
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 5b3aea5..31e9f2a 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -313,6 +313,7 @@ void irq_exit(void)
+ 	/* Make sure that timer wheel updates are propagated */
+ 	if (!in_interrupt() && idle_cpu(smp_processor_id()) && !need_resched())
+ 		tick_nohz_stop_sched_tick();
++	rcu_irq_exit();
+ #endif
+ 	preempt_enable_no_resched();
+ }
+diff --git a/kernel/softlockup.c b/kernel/softlockup.c
+index 7c2da88..01b6522 100644
+--- a/kernel/softlockup.c
++++ b/kernel/softlockup.c
+@@ -216,26 +216,27 @@ static int watchdog(void *__bind_cpu)
+ 	/* initialize timestamp */
+ 	touch_softlockup_watchdog();
+ 
++	set_current_state(TASK_INTERRUPTIBLE);
+ 	/*
+ 	 * Run briefly once per second to reset the softlockup timestamp.
+ 	 * If this gets delayed for more than 60 seconds then the
+ 	 * debug-printout triggers in softlockup_tick().
+ 	 */
+ 	while (!kthread_should_stop()) {
+-		set_current_state(TASK_INTERRUPTIBLE);
+ 		touch_softlockup_watchdog();
+ 		schedule();
+ 
+ 		if (kthread_should_stop())
+ 			break;
+ 
+-		if (this_cpu != check_cpu)
+-			continue;
+-
+-		if (sysctl_hung_task_timeout_secs)
+-			check_hung_uninterruptible_tasks(this_cpu);
++		if (this_cpu == check_cpu) {
++			if (sysctl_hung_task_timeout_secs)
++				check_hung_uninterruptible_tasks(this_cpu);
++		}
+ 
++		set_current_state(TASK_INTERRUPTIBLE);
+ 	}
++	__set_current_state(TASK_RUNNING);
+ 
+ 	return 0;
+ }
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index fa9bb73..2968298 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -282,6 +282,7 @@ void tick_nohz_stop_sched_tick(void)
+ 			ts->idle_tick = ts->sched_timer.expires;
+ 			ts->tick_stopped = 1;
+ 			ts->idle_jiffies = last_jiffies;
++			rcu_enter_nohz();
+ 		}
+ 
+ 		/*
+@@ -375,6 +376,8 @@ void tick_nohz_restart_sched_tick(void)
+ 		return;
+ 	}
+ 
++	rcu_exit_nohz();
++
+ 	/* Update jiffies first */
+ 	select_nohz_load_balancer(0);
+ 	now = ktime_get();
+diff --git a/net/8021q/vlanproc.c b/net/8021q/vlanproc.c
+index a0ec479..146cfb0 100644
+--- a/net/8021q/vlanproc.c
++++ b/net/8021q/vlanproc.c
+@@ -161,11 +161,10 @@ int __init vlan_proc_init(void)
+ 	if (!proc_vlan_dir)
+ 		goto err;
+ 
+-	proc_vlan_conf = create_proc_entry(name_conf, S_IFREG|S_IRUSR|S_IWUSR,
+-					   proc_vlan_dir);
++	proc_vlan_conf = proc_create(name_conf, S_IFREG|S_IRUSR|S_IWUSR,
++				     proc_vlan_dir, &vlan_fops);
+ 	if (!proc_vlan_conf)
+ 		goto err;
+-	proc_vlan_conf->proc_fops = &vlan_fops;
+ 	return 0;
+ 
+ err:
+@@ -182,13 +181,11 @@ int vlan_proc_add_dev(struct net_device *vlandev)
+ {
+ 	struct vlan_dev_info *dev_info = vlan_dev_info(vlandev);
+ 
+-	dev_info->dent = create_proc_entry(vlandev->name,
+-					   S_IFREG|S_IRUSR|S_IWUSR,
+-					   proc_vlan_dir);
++	dev_info->dent = proc_create(vlandev->name, S_IFREG|S_IRUSR|S_IWUSR,
++				     proc_vlan_dir, &vlandev_fops);
+ 	if (!dev_info->dent)
+ 		return -ENOBUFS;
+ 
+-	dev_info->dent->proc_fops = &vlandev_fops;
+ 	dev_info->dent->data = vlandev;
+ 	return 0;
+ }
+diff --git a/net/appletalk/atalk_proc.c b/net/appletalk/atalk_proc.c
+index 8e8dcfd..162199a 100644
+--- a/net/appletalk/atalk_proc.c
++++ b/net/appletalk/atalk_proc.c
+@@ -283,25 +283,24 @@ int __init atalk_proc_init(void)
+ 		goto out;
+ 	atalk_proc_dir->owner = THIS_MODULE;
+ 
+-	p = create_proc_entry("interface", S_IRUGO, atalk_proc_dir);
++	p = proc_create("interface", S_IRUGO, atalk_proc_dir,
++			&atalk_seq_interface_fops);
+ 	if (!p)
+ 		goto out_interface;
+-	p->proc_fops = &atalk_seq_interface_fops;
+ 
+-	p = create_proc_entry("route", S_IRUGO, atalk_proc_dir);
++	p = proc_create("route", S_IRUGO, atalk_proc_dir,
++			&atalk_seq_route_fops);
+ 	if (!p)
+ 		goto out_route;
+-	p->proc_fops = &atalk_seq_route_fops;
+ 
+-	p = create_proc_entry("socket", S_IRUGO, atalk_proc_dir);
++	p = proc_create("socket", S_IRUGO, atalk_proc_dir,
++			&atalk_seq_socket_fops);
+ 	if (!p)
+ 		goto out_socket;
+-	p->proc_fops = &atalk_seq_socket_fops;
+ 
+-	p = create_proc_entry("arp", S_IRUGO, atalk_proc_dir);
++	p = proc_create("arp", S_IRUGO, atalk_proc_dir, &atalk_seq_arp_fops);
+ 	if (!p)
+ 		goto out_arp;
+-	p->proc_fops = &atalk_seq_arp_fops;
+ 
+ 	rc = 0;
+ out:
+diff --git a/net/atm/br2684.c b/net/atm/br2684.c
+index 574d9a9..1b22806 100644
+--- a/net/atm/br2684.c
++++ b/net/atm/br2684.c
+@@ -742,9 +742,9 @@ static int __init br2684_init(void)
+ {
+ #ifdef CONFIG_PROC_FS
+ 	struct proc_dir_entry *p;
+-	if ((p = create_proc_entry("br2684", 0, atm_proc_root)) == NULL)
++	p = proc_create("br2684", 0, atm_proc_root, &br2684_proc_ops);
++	if (p == NULL)
+ 		return -ENOMEM;
+-	p->proc_fops = &br2684_proc_ops;
+ #endif
+ 	register_atm_ioctl(&br2684_ioctl_ops);
+ 	return 0;
+diff --git a/net/atm/clip.c b/net/atm/clip.c
+index 86b885e..d30167c 100644
+--- a/net/atm/clip.c
++++ b/net/atm/clip.c
+@@ -962,9 +962,7 @@ static int __init atm_clip_init(void)
+ 	{
+ 		struct proc_dir_entry *p;
+ 
+-		p = create_proc_entry("arp", S_IRUGO, atm_proc_root);
+-		if (p)
+-			p->proc_fops = &arp_seq_fops;
++		p = proc_create("arp", S_IRUGO, atm_proc_root, &arp_seq_fops);
+ 	}
+ #endif
+ 
+diff --git a/net/atm/lec.c b/net/atm/lec.c
+index 1a8c4c6..0e450d1 100644
+--- a/net/atm/lec.c
++++ b/net/atm/lec.c
+@@ -1249,9 +1249,7 @@ static int __init lane_module_init(void)
+ #ifdef CONFIG_PROC_FS
+ 	struct proc_dir_entry *p;
+ 
+-	p = create_proc_entry("lec", S_IRUGO, atm_proc_root);
+-	if (p)
+-		p->proc_fops = &lec_seq_fops;
++	p = proc_create("lec", S_IRUGO, atm_proc_root, &lec_seq_fops);
+ #endif
+ 
+ 	register_atm_ioctl(&lane_ioctl_ops);
+diff --git a/net/atm/mpoa_proc.c b/net/atm/mpoa_proc.c
+index 91f3ffc..4990541 100644
+--- a/net/atm/mpoa_proc.c
++++ b/net/atm/mpoa_proc.c
+@@ -276,12 +276,11 @@ int mpc_proc_init(void)
+ {
+ 	struct proc_dir_entry *p;
+ 
+-	p = create_proc_entry(STAT_FILE_NAME, 0, atm_proc_root);
++	p = proc_create(STAT_FILE_NAME, 0, atm_proc_root, &mpc_file_operations);
+ 	if (!p) {
+ 		printk(KERN_ERR "Unable to initialize /proc/atm/%s\n", STAT_FILE_NAME);
+ 		return -ENOMEM;
+ 	}
+-	p->proc_fops = &mpc_file_operations;
+ 	p->owner = THIS_MODULE;
+ 	return 0;
+ }
+diff --git a/net/atm/proc.c b/net/atm/proc.c
+index 4912511..e9693ae 100644
+--- a/net/atm/proc.c
++++ b/net/atm/proc.c
+@@ -435,11 +435,11 @@ int atm_proc_dev_register(struct atm_dev *dev)
+ 		goto err_out;
+ 	sprintf(dev->proc_name,"%s:%d",dev->type, dev->number);
+ 
+-	dev->proc_entry = create_proc_entry(dev->proc_name, 0, atm_proc_root);
++	dev->proc_entry = proc_create(dev->proc_name, 0, atm_proc_root,
++				      &proc_atm_dev_ops);
+ 	if (!dev->proc_entry)
+ 		goto err_free_name;
+ 	dev->proc_entry->data = dev;
+-	dev->proc_entry->proc_fops = &proc_atm_dev_ops;
+ 	dev->proc_entry->owner = THIS_MODULE;
+ 	return 0;
+ err_free_name:
+@@ -492,10 +492,10 @@ int __init atm_proc_init(void)
+ 	for (e = atm_proc_ents; e->name; e++) {
+ 		struct proc_dir_entry *dirent;
+ 
+-		dirent = create_proc_entry(e->name, S_IRUGO, atm_proc_root);
++		dirent = proc_create(e->name, S_IRUGO,
++				     atm_proc_root, e->proc_fops);
+ 		if (!dirent)
+ 			goto err_out_remove;
+-		dirent->proc_fops = e->proc_fops;
+ 		dirent->owner = THIS_MODULE;
+ 		e->dirent = dirent;
+ 	}
+diff --git a/net/bluetooth/l2cap.c b/net/bluetooth/l2cap.c
+index a8811c0..7c5459c 100644
+--- a/net/bluetooth/l2cap.c
++++ b/net/bluetooth/l2cap.c
+@@ -417,6 +417,8 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err)
+ 		l2cap_sock_kill(sk);
+ 	}
+ 
++	del_timer_sync(&conn->info_timer);
++
+ 	hcon->l2cap_data = NULL;
+ 	kfree(conn);
+ }
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 2328acb..aef0153 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1389,10 +1389,10 @@ void neigh_table_init_no_netlink(struct neigh_table *tbl)
+ 		panic("cannot create neighbour cache statistics");
+ 
+ #ifdef CONFIG_PROC_FS
+-	tbl->pde = create_proc_entry(tbl->id, 0, init_net.proc_net_stat);
++	tbl->pde = proc_create(tbl->id, 0, init_net.proc_net_stat,
++			       &neigh_stat_seq_fops);
+ 	if (!tbl->pde)
+ 		panic("cannot create neighbour proc dir entry");
+-	tbl->pde->proc_fops = &neigh_stat_seq_fops;
+ 	tbl->pde->data = tbl;
+ #endif
+ 
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index bfcdfae..20e63b3 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -3570,14 +3570,14 @@ static int pktgen_add_device(struct pktgen_thread *t, const char *ifname)
+ 	if (err)
+ 		goto out1;
+ 
+-	pkt_dev->entry = create_proc_entry(ifname, 0600, pg_proc_dir);
++	pkt_dev->entry = proc_create(ifname, 0600,
++				     pg_proc_dir, &pktgen_if_fops);
+ 	if (!pkt_dev->entry) {
+ 		printk(KERN_ERR "pktgen: cannot create %s/%s procfs entry.\n",
+ 		       PG_PROC_DIR, ifname);
+ 		err = -EINVAL;
+ 		goto out2;
+ 	}
+-	pkt_dev->entry->proc_fops = &pktgen_if_fops;
+ 	pkt_dev->entry->data = pkt_dev;
+ #ifdef CONFIG_XFRM
+ 	pkt_dev->ipsmode = XFRM_MODE_TRANSPORT;
+@@ -3628,7 +3628,7 @@ static int __init pktgen_create_thread(int cpu)
+ 	kthread_bind(p, cpu);
+ 	t->tsk = p;
+ 
+-	pe = create_proc_entry(t->tsk->comm, 0600, pg_proc_dir);
++	pe = proc_create(t->tsk->comm, 0600, pg_proc_dir, &pktgen_thread_fops);
+ 	if (!pe) {
+ 		printk(KERN_ERR "pktgen: cannot create %s/%s procfs entry.\n",
+ 		       PG_PROC_DIR, t->tsk->comm);
+@@ -3638,7 +3638,6 @@ static int __init pktgen_create_thread(int cpu)
+ 		return -EINVAL;
+ 	}
+ 
+-	pe->proc_fops = &pktgen_thread_fops;
+ 	pe->data = t;
+ 
+ 	wake_up_process(p);
+@@ -3709,7 +3708,7 @@ static int __init pg_init(void)
+ 		return -ENODEV;
+ 	pg_proc_dir->owner = THIS_MODULE;
+ 
+-	pe = create_proc_entry(PGCTRL, 0600, pg_proc_dir);
++	pe = proc_create(PGCTRL, 0600, pg_proc_dir, &pktgen_fops);
+ 	if (pe == NULL) {
+ 		printk(KERN_ERR "pktgen: ERROR: cannot create %s "
+ 		       "procfs entry.\n", PGCTRL);
+@@ -3717,7 +3716,6 @@ static int __init pg_init(void)
+ 		return -EINVAL;
+ 	}
+ 
+-	pe->proc_fops = &pktgen_fops;
+ 	pe->data = NULL;
+ 
+ 	/* Register us to receive netdevice events */
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index f282b26..87490f7 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -752,6 +752,7 @@ int devinet_ioctl(unsigned int cmd, void __user *arg)
+ 			inet_del_ifa(in_dev, ifap, 0);
+ 			ifa->ifa_broadcast = 0;
+ 			ifa->ifa_anycast = 0;
++			ifa->ifa_scope = 0;
+ 		}
+ 
+ 		ifa->ifa_address = ifa->ifa_local = sin->sin_addr.s_addr;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 906cb1a..e7821ba 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -266,20 +266,24 @@ static struct ip_tunnel * ipgre_tunnel_locate(struct ip_tunnel_parm *parms, int
+ 	if (!dev)
+ 	  return NULL;
+ 
++	if (strchr(name, '%')) {
++		if (dev_alloc_name(dev, name) < 0)
++			goto failed_free;
++	}
++
+ 	dev->init = ipgre_tunnel_init;
+ 	nt = netdev_priv(dev);
+ 	nt->parms = *parms;
+ 
+-	if (register_netdevice(dev) < 0) {
+-		free_netdev(dev);
+-		goto failed;
+-	}
++	if (register_netdevice(dev) < 0)
++		goto failed_free;
+ 
+ 	dev_hold(dev);
+ 	ipgre_tunnel_link(nt);
+ 	return nt;
+ 
+-failed:
++failed_free:
++	free_netdev(dev);
+ 	return NULL;
+ }
+ 
+diff --git a/net/ipv4/ipcomp.c b/net/ipv4/ipcomp.c
+index ae1f45f..58b60b2 100644
+--- a/net/ipv4/ipcomp.c
++++ b/net/ipv4/ipcomp.c
+@@ -108,8 +108,11 @@ static int ipcomp_compress(struct xfrm_state *x, struct sk_buff *skb)
+ 	const int cpu = get_cpu();
+ 	u8 *scratch = *per_cpu_ptr(ipcomp_scratches, cpu);
+ 	struct crypto_comp *tfm = *per_cpu_ptr(ipcd->tfms, cpu);
+-	int err = crypto_comp_compress(tfm, start, plen, scratch, &dlen);
++	int err;
+ 
++	local_bh_disable();
++	err = crypto_comp_compress(tfm, start, plen, scratch, &dlen);
++	local_bh_enable();
+ 	if (err)
+ 		goto out;
+ 
+diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
+index e77e3b8..dbaed69 100644
+--- a/net/ipv4/ipip.c
++++ b/net/ipv4/ipip.c
+@@ -228,20 +228,24 @@ static struct ip_tunnel * ipip_tunnel_locate(struct ip_tunnel_parm *parms, int c
+ 	if (dev == NULL)
+ 		return NULL;
+ 
++	if (strchr(name, '%')) {
++		if (dev_alloc_name(dev, name) < 0)
++			goto failed_free;
++	}
++
+ 	nt = netdev_priv(dev);
+ 	dev->init = ipip_tunnel_init;
+ 	nt->parms = *parms;
+ 
+-	if (register_netdevice(dev) < 0) {
+-		free_netdev(dev);
+-		goto failed;
+-	}
++	if (register_netdevice(dev) < 0)
++		goto failed_free;
+ 
+ 	dev_hold(dev);
+ 	ipip_tunnel_link(nt);
+ 	return nt;
+ 
+-failed:
++failed_free:
++	free_netdev(dev);
+ 	return NULL;
+ }
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 525787b..7b5e8e1 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -542,12 +542,11 @@ static __init int ip_rt_proc_init(struct net *net)
+ 	if (!pde)
+ 		goto err1;
+ 
+-	pde = create_proc_entry("rt_cache", S_IRUGO, net->proc_net_stat);
++	pde = proc_create("rt_cache", S_IRUGO,
++			  net->proc_net_stat, &rt_cpu_seq_fops);
+ 	if (!pde)
+ 		goto err2;
+ 
+-	pde->proc_fops = &rt_cpu_seq_fops;
+-
+ #ifdef CONFIG_NET_CLS_ROUTE
+ 	pde = create_proc_read_entry("rt_acct", 0, net->proc_net,
+ 			ip_rt_acct_read, NULL);
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index e40213d..101e0e7 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -1557,6 +1557,7 @@ addrconf_prefix_route(struct in6_addr *pfx, int plen, struct net_device *dev,
+ 		.fc_expires = expires,
+ 		.fc_dst_len = plen,
+ 		.fc_flags = RTF_UP | flags,
++		.fc_nlinfo.nl_net = &init_net,
+ 	};
+ 
+ 	ipv6_addr_copy(&cfg.fc_dst, pfx);
+@@ -1583,6 +1584,7 @@ static void addrconf_add_mroute(struct net_device *dev)
+ 		.fc_ifindex = dev->ifindex,
+ 		.fc_dst_len = 8,
+ 		.fc_flags = RTF_UP,
++		.fc_nlinfo.nl_net = &init_net,
+ 	};
+ 
+ 	ipv6_addr_set(&cfg.fc_dst, htonl(0xFF000000), 0, 0, 0);
+@@ -1599,6 +1601,7 @@ static void sit_route_add(struct net_device *dev)
+ 		.fc_ifindex = dev->ifindex,
+ 		.fc_dst_len = 96,
+ 		.fc_flags = RTF_UP | RTF_NONEXTHOP,
++		.fc_nlinfo.nl_net = &init_net,
+ 	};
+ 
+ 	/* prefix length - 96 bits "::d.d.d.d" */
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 2a124e9..78f4388 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -238,17 +238,24 @@ static struct ip6_tnl *ip6_tnl_create(struct ip6_tnl_parm *p)
+ 	if (dev == NULL)
+ 		goto failed;
+ 
++	if (strchr(name, '%')) {
++		if (dev_alloc_name(dev, name) < 0)
++			goto failed_free;
++	}
++
+ 	t = netdev_priv(dev);
+ 	dev->init = ip6_tnl_dev_init;
+ 	t->parms = *p;
+ 
+-	if ((err = register_netdevice(dev)) < 0) {
+-		free_netdev(dev);
+-		goto failed;
+-	}
++	if ((err = register_netdevice(dev)) < 0)
++		goto failed_free;
++
+ 	dev_hold(dev);
+ 	ip6_tnl_link(t);
+ 	return t;
++
++failed_free:
++	free_netdev(dev);
+ failed:
+ 	return NULL;
+ }
+diff --git a/net/ipv6/ipcomp6.c b/net/ipv6/ipcomp6.c
+index b900395..e3dcfa2 100644
+--- a/net/ipv6/ipcomp6.c
++++ b/net/ipv6/ipcomp6.c
+@@ -146,7 +146,9 @@ static int ipcomp6_output(struct xfrm_state *x, struct sk_buff *skb)
+ 	scratch = *per_cpu_ptr(ipcomp6_scratches, cpu);
+ 	tfm = *per_cpu_ptr(ipcd->tfms, cpu);
+ 
++	local_bh_disable();
+ 	err = crypto_comp_compress(tfm, start, plen, scratch, &dlen);
++	local_bh_enable();
+ 	if (err || (dlen + sizeof(*ipch)) >= plen) {
+ 		put_cpu();
+ 		goto out_ok;
+diff --git a/net/ipv6/proc.c b/net/ipv6/proc.c
+index 35e502a..199ef37 100644
+--- a/net/ipv6/proc.c
++++ b/net/ipv6/proc.c
+@@ -217,12 +217,12 @@ int snmp6_register_dev(struct inet6_dev *idev)
+ 	if (!proc_net_devsnmp6)
+ 		return -ENOENT;
+ 
+-	p = create_proc_entry(idev->dev->name, S_IRUGO, proc_net_devsnmp6);
++	p = proc_create(idev->dev->name, S_IRUGO,
++			proc_net_devsnmp6, &snmp6_seq_fops);
+ 	if (!p)
+ 		return -ENOMEM;
+ 
+ 	p->data = idev;
+-	p->proc_fops = &snmp6_seq_fops;
+ 
+ 	idev->stats.proc_dir_entry = p;
+ 	return 0;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 6e7b56e..e8b241c 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1719,6 +1719,8 @@ static void rtmsg_to_fib6_config(struct in6_rtmsg *rtmsg,
+ 	cfg->fc_src_len = rtmsg->rtmsg_src_len;
+ 	cfg->fc_flags = rtmsg->rtmsg_flags;
+ 
++	cfg->fc_nlinfo.nl_net = &init_net;
++
+ 	ipv6_addr_copy(&cfg->fc_dst, &rtmsg->rtmsg_dst);
+ 	ipv6_addr_copy(&cfg->fc_src, &rtmsg->rtmsg_src);
+ 	ipv6_addr_copy(&cfg->fc_gateway, &rtmsg->rtmsg_gateway);
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index dde7801..1656c00 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -171,6 +171,11 @@ static struct ip_tunnel * ipip6_tunnel_locate(struct ip_tunnel_parm *parms, int
+ 	if (dev == NULL)
+ 		return NULL;
+ 
++	if (strchr(name, '%')) {
++		if (dev_alloc_name(dev, name) < 0)
++			goto failed_free;
++	}
++
+ 	nt = netdev_priv(dev);
+ 	dev->init = ipip6_tunnel_init;
+ 	nt->parms = *parms;
+@@ -178,16 +183,16 @@ static struct ip_tunnel * ipip6_tunnel_locate(struct ip_tunnel_parm *parms, int
+ 	if (parms->i_flags & SIT_ISATAP)
+ 		dev->priv_flags |= IFF_ISATAP;
+ 
+-	if (register_netdevice(dev) < 0) {
+-		free_netdev(dev);
+-		goto failed;
+-	}
++	if (register_netdevice(dev) < 0)
++		goto failed_free;
+ 
+ 	dev_hold(dev);
+ 
+ 	ipip6_tunnel_link(nt);
+ 	return nt;
+ 
++failed_free:
++	free_netdev(dev);
+ failed:
+ 	return NULL;
+ }
+diff --git a/net/ipv6/sysctl_net_ipv6.c b/net/ipv6/sysctl_net_ipv6.c
+index 408691b..d6d3e68 100644
+--- a/net/ipv6/sysctl_net_ipv6.c
++++ b/net/ipv6/sysctl_net_ipv6.c
+@@ -102,9 +102,6 @@ static int ipv6_sysctl_net_init(struct net *net)
+ 	net->ipv6.sysctl.table = register_net_sysctl_table(net, net_ipv6_ctl_path,
+ 							   ipv6_table);
+ 	if (!net->ipv6.sysctl.table)
+-		return -ENOMEM;
+-
+-	if (!net->ipv6.sysctl.table)
+ 		goto out_ipv6_icmp_table;
+ 
+ 	err = 0;
+diff --git a/net/ipx/ipx_proc.c b/net/ipx/ipx_proc.c
+index d483a00..5ed97ad 100644
+--- a/net/ipx/ipx_proc.c
++++ b/net/ipx/ipx_proc.c
+@@ -358,22 +358,19 @@ int __init ipx_proc_init(void)
+ 
+ 	if (!ipx_proc_dir)
+ 		goto out;
+-	p = create_proc_entry("interface", S_IRUGO, ipx_proc_dir);
++	p = proc_create("interface", S_IRUGO,
++			ipx_proc_dir, &ipx_seq_interface_fops);
+ 	if (!p)
+ 		goto out_interface;
+ 
+-	p->proc_fops = &ipx_seq_interface_fops;
+-	p = create_proc_entry("route", S_IRUGO, ipx_proc_dir);
++	p = proc_create("route", S_IRUGO, ipx_proc_dir, &ipx_seq_route_fops);
+ 	if (!p)
+ 		goto out_route;
+ 
+-	p->proc_fops = &ipx_seq_route_fops;
+-	p = create_proc_entry("socket", S_IRUGO, ipx_proc_dir);
++	p = proc_create("socket", S_IRUGO, ipx_proc_dir, &ipx_seq_socket_fops);
+ 	if (!p)
+ 		goto out_socket;
+ 
+-	p->proc_fops = &ipx_seq_socket_fops;
+-
+ 	rc = 0;
+ out:
+ 	return rc;
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 1c85392..8b5f486 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -3807,17 +3807,16 @@ static int pfkey_init_proc(void)
+ {
+ 	struct proc_dir_entry *e;
+ 
+-	e = create_proc_entry("pfkey", 0, init_net.proc_net);
++	e = proc_net_fops_create(&init_net, "pfkey", 0, &pfkey_proc_ops);
+ 	if (e == NULL)
+ 		return -ENOMEM;
+ 
+-	e->proc_fops = &pfkey_proc_ops;
+ 	return 0;
+ }
+ 
+ static void pfkey_exit_proc(void)
+ {
+-	remove_proc_entry("net/pfkey", NULL);
++	proc_net_remove(&init_net, "pfkey");
+ }
+ #else
+ static inline int pfkey_init_proc(void)
+diff --git a/net/llc/llc_proc.c b/net/llc/llc_proc.c
+index cb34bc0..48212c0 100644
+--- a/net/llc/llc_proc.c
++++ b/net/llc/llc_proc.c
+@@ -239,18 +239,14 @@ int __init llc_proc_init(void)
+ 		goto out;
+ 	llc_proc_dir->owner = THIS_MODULE;
+ 
+-	p = create_proc_entry("socket", S_IRUGO, llc_proc_dir);
++	p = proc_create("socket", S_IRUGO, llc_proc_dir, &llc_seq_socket_fops);
+ 	if (!p)
+ 		goto out_socket;
+ 
+-	p->proc_fops = &llc_seq_socket_fops;
+-
+-	p = create_proc_entry("core", S_IRUGO, llc_proc_dir);
++	p = proc_create("core", S_IRUGO, llc_proc_dir, &llc_seq_core_fops);
+ 	if (!p)
+ 		goto out_core;
+ 
+-	p->proc_fops = &llc_seq_core_fops;
+-
+ 	rc = 0;
+ out:
+ 	return rc;
+diff --git a/net/mac80211/ieee80211_sta.c b/net/mac80211/ieee80211_sta.c
+index 2019b4f..9aeed53 100644
+--- a/net/mac80211/ieee80211_sta.c
++++ b/net/mac80211/ieee80211_sta.c
+@@ -1116,9 +1116,10 @@ static void ieee80211_sta_process_addba_request(struct net_device *dev,
+ 	/* prepare reordering buffer */
+ 	tid_agg_rx->reorder_buf =
+ 		kmalloc(buf_size * sizeof(struct sk_buf *), GFP_ATOMIC);
+-	if ((!tid_agg_rx->reorder_buf) && net_ratelimit()) {
+-		printk(KERN_ERR "can not allocate reordering buffer "
+-						"to tid %d\n", tid);
++	if (!tid_agg_rx->reorder_buf) {
++		if (net_ratelimit())
++			printk(KERN_ERR "can not allocate reordering buffer "
++			       "to tid %d\n", tid);
+ 		goto end;
+ 	}
+ 	memset(tid_agg_rx->reorder_buf, 0,
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 327e847..b77eb56 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -256,13 +256,19 @@ __nf_conntrack_find(const struct nf_conntrack_tuple *tuple)
+ 	struct hlist_node *n;
+ 	unsigned int hash = hash_conntrack(tuple);
+ 
++	/* Disable BHs the entire time since we normally need to disable them
++	 * at least once for the stats anyway.
++	 */
++	local_bh_disable();
+ 	hlist_for_each_entry_rcu(h, n, &nf_conntrack_hash[hash], hnode) {
+ 		if (nf_ct_tuple_equal(tuple, &h->tuple)) {
+ 			NF_CT_STAT_INC(found);
++			local_bh_enable();
+ 			return h;
+ 		}
+ 		NF_CT_STAT_INC(searched);
+ 	}
++	local_bh_enable();
+ 
+ 	return NULL;
+ }
+@@ -400,17 +406,20 @@ nf_conntrack_tuple_taken(const struct nf_conntrack_tuple *tuple,
+ 	struct hlist_node *n;
+ 	unsigned int hash = hash_conntrack(tuple);
+ 
+-	rcu_read_lock();
++	/* Disable BHs the entire time since we need to disable them at
++	 * least once for the stats anyway.
++	 */
++	rcu_read_lock_bh();
+ 	hlist_for_each_entry_rcu(h, n, &nf_conntrack_hash[hash], hnode) {
+ 		if (nf_ct_tuplehash_to_ctrack(h) != ignored_conntrack &&
+ 		    nf_ct_tuple_equal(tuple, &h->tuple)) {
+ 			NF_CT_STAT_INC(found);
+-			rcu_read_unlock();
++			rcu_read_unlock_bh();
+ 			return 1;
+ 		}
+ 		NF_CT_STAT_INC(searched);
+ 	}
+-	rcu_read_unlock();
++	rcu_read_unlock_bh();
+ 
+ 	return 0;
+ }
+diff --git a/net/netfilter/xt_conntrack.c b/net/netfilter/xt_conntrack.c
+index 8533085..0c50b28 100644
+--- a/net/netfilter/xt_conntrack.c
++++ b/net/netfilter/xt_conntrack.c
+@@ -122,7 +122,7 @@ conntrack_addrcmp(const union nf_inet_addr *kaddr,
+                   const union nf_inet_addr *umask, unsigned int l3proto)
+ {
+ 	if (l3proto == AF_INET)
+-		return (kaddr->ip & umask->ip) == uaddr->ip;
++		return ((kaddr->ip ^ uaddr->ip) & umask->ip) == 0;
+ 	else if (l3proto == AF_INET6)
+ 		return ipv6_masked_addr_cmp(&kaddr->in6, &umask->in6,
+ 		       &uaddr->in6) == 0;
+@@ -231,7 +231,7 @@ conntrack_mt(const struct sk_buff *skb, const struct net_device *in,
+ 			if (test_bit(IPS_DST_NAT_BIT, &ct->status))
+ 				statebit |= XT_CONNTRACK_STATE_DNAT;
+ 		}
+-		if ((info->state_mask & statebit) ^
++		if (!!(info->state_mask & statebit) ^
+ 		    !(info->invert_flags & XT_CONNTRACK_STATE))
+ 			return false;
+ 	}
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index 8bb79f2..675a5c3 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -838,11 +838,11 @@ int sctp_auth_set_key(struct sctp_endpoint *ep,
+ 	}
+ 
+ 	/* Create a new key data based on the info passed in */
+-	key = sctp_auth_create_key(auth_key->sca_keylen, GFP_KERNEL);
++	key = sctp_auth_create_key(auth_key->sca_keylength, GFP_KERNEL);
+ 	if (!key)
+ 		goto nomem;
+ 
+-	memcpy(key->data, &auth_key->sca_key[0], auth_key->sca_keylen);
++	memcpy(key->data, &auth_key->sca_key[0], auth_key->sca_keylength);
+ 
+ 	/* If we are replacing, remove the old keys data from the
+ 	 * key id.  If we are adding new key id, add it to the
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index 4d7ec96..87f9405 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -966,7 +966,7 @@ static struct inet6_protocol sctpv6_protocol = {
+ 	.flags        = INET6_PROTO_NOPOLICY | INET6_PROTO_FINAL,
+ };
+ 
+-static struct sctp_af sctp_ipv6_specific = {
++static struct sctp_af sctp_af_inet6 = {
+ 	.sa_family	   = AF_INET6,
+ 	.sctp_xmit	   = sctp_v6_xmit,
+ 	.setsockopt	   = ipv6_setsockopt,
+@@ -998,7 +998,7 @@ static struct sctp_af sctp_ipv6_specific = {
+ #endif
+ };
+ 
+-static struct sctp_pf sctp_pf_inet6_specific = {
++static struct sctp_pf sctp_pf_inet6 = {
+ 	.event_msgname = sctp_inet6_event_msgname,
+ 	.skb_msgname   = sctp_inet6_skb_msgname,
+ 	.af_supported  = sctp_inet6_af_supported,
+@@ -1008,7 +1008,7 @@ static struct sctp_pf sctp_pf_inet6_specific = {
+ 	.supported_addrs = sctp_inet6_supported_addrs,
+ 	.create_accept_sk = sctp_v6_create_accept_sk,
+ 	.addr_v4map    = sctp_v6_addr_v4map,
+-	.af            = &sctp_ipv6_specific,
++	.af            = &sctp_af_inet6,
+ };
+ 
+ /* Initialize IPv6 support and register with socket layer.  */
+@@ -1017,10 +1017,10 @@ int sctp_v6_init(void)
+ 	int rc;
+ 
+ 	/* Register the SCTP specific PF_INET6 functions. */
+-	sctp_register_pf(&sctp_pf_inet6_specific, PF_INET6);
++	sctp_register_pf(&sctp_pf_inet6, PF_INET6);
+ 
+ 	/* Register the SCTP specific AF_INET6 functions. */
+-	sctp_register_af(&sctp_ipv6_specific);
++	sctp_register_af(&sctp_af_inet6);
+ 
+ 	rc = proto_register(&sctpv6_prot, 1);
+ 	if (rc)
+@@ -1051,7 +1051,7 @@ void sctp_v6_exit(void)
+ 	inet6_unregister_protosw(&sctpv6_seqpacket_protosw);
+ 	inet6_unregister_protosw(&sctpv6_stream_protosw);
+ 	proto_unregister(&sctpv6_prot);
+-	list_del(&sctp_ipv6_specific.list);
++	list_del(&sctp_af_inet6.list);
+ }
+ 
+ /* Unregister with inet6 layer. */
+diff --git a/net/sctp/objcnt.c b/net/sctp/objcnt.c
+index 14e294e..cfeb07e 100644
+--- a/net/sctp/objcnt.c
++++ b/net/sctp/objcnt.c
+@@ -132,12 +132,11 @@ void sctp_dbg_objcnt_init(void)
+ {
+ 	struct proc_dir_entry *ent;
+ 
+-	ent = create_proc_entry("sctp_dbg_objcnt", 0, proc_net_sctp);
++	ent = proc_create("sctp_dbg_objcnt", 0,
++			  proc_net_sctp, &sctp_objcnt_ops);
+ 	if (!ent)
+ 		printk(KERN_WARNING
+ 			"sctp_dbg_objcnt: Unable to create /proc entry.\n");
+-	else
+-		ent->proc_fops = &sctp_objcnt_ops;
+ }
+ 
+ /* Cleanup the objcount entry in the proc filesystem.  */
+diff --git a/net/sctp/proc.c b/net/sctp/proc.c
+index 69bb5a6..9e214da 100644
+--- a/net/sctp/proc.c
++++ b/net/sctp/proc.c
+@@ -108,12 +108,10 @@ int __init sctp_snmp_proc_init(void)
+ {
+ 	struct proc_dir_entry *p;
+ 
+-	p = create_proc_entry("snmp", S_IRUGO, proc_net_sctp);
++	p = proc_create("snmp", S_IRUGO, proc_net_sctp, &sctp_snmp_seq_fops);
+ 	if (!p)
+ 		return -ENOMEM;
+ 
+-	p->proc_fops = &sctp_snmp_seq_fops;
+-
+ 	return 0;
+ }
+ 
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 22a1657..688546d 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -832,7 +832,7 @@ static inline int sctp_v4_xmit(struct sk_buff *skb,
+ 	return ip_queue_xmit(skb, ipfragok);
+ }
+ 
+-static struct sctp_af sctp_ipv4_specific;
++static struct sctp_af sctp_af_inet;
+ 
+ static struct sctp_pf sctp_pf_inet = {
+ 	.event_msgname = sctp_inet_event_msgname,
+@@ -844,7 +844,7 @@ static struct sctp_pf sctp_pf_inet = {
+ 	.supported_addrs = sctp_inet_supported_addrs,
+ 	.create_accept_sk = sctp_v4_create_accept_sk,
+ 	.addr_v4map	= sctp_v4_addr_v4map,
+-	.af            = &sctp_ipv4_specific,
++	.af            = &sctp_af_inet
+ };
+ 
+ /* Notifier for inetaddr addition/deletion events.  */
+@@ -906,7 +906,7 @@ static struct net_protocol sctp_protocol = {
+ };
+ 
+ /* IPv4 address related functions.  */
+-static struct sctp_af sctp_ipv4_specific = {
++static struct sctp_af sctp_af_inet = {
+ 	.sa_family	   = AF_INET,
+ 	.sctp_xmit	   = sctp_v4_xmit,
+ 	.setsockopt	   = ip_setsockopt,
+@@ -1192,7 +1192,7 @@ SCTP_STATIC __init int sctp_init(void)
+ 	sctp_sysctl_register();
+ 
+ 	INIT_LIST_HEAD(&sctp_address_families);
+-	sctp_register_af(&sctp_ipv4_specific);
++	sctp_register_af(&sctp_af_inet);
+ 
+ 	status = proto_register(&sctp_prot, 1);
+ 	if (status)
+@@ -1249,7 +1249,7 @@ err_v6_init:
+ 	proto_unregister(&sctp_prot);
+ err_proto_register:
+ 	sctp_sysctl_unregister();
+-	list_del(&sctp_ipv4_specific.list);
++	list_del(&sctp_af_inet.list);
+ 	free_pages((unsigned long)sctp_port_hashtable,
+ 		   get_order(sctp_port_hashsize *
+ 			     sizeof(struct sctp_bind_hashbucket)));
+@@ -1299,7 +1299,7 @@ SCTP_STATIC __exit void sctp_exit(void)
+ 	inet_unregister_protosw(&sctp_seqpacket_protosw);
+ 
+ 	sctp_sysctl_unregister();
+-	list_del(&sctp_ipv4_specific.list);
++	list_del(&sctp_af_inet.list);
+ 
+ 	free_pages((unsigned long)sctp_assoc_hashtable,
+ 		   get_order(sctp_assoc_hashsize *
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 44797ad..9398926 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -1964,7 +1964,7 @@ static int sctp_setsockopt_disable_fragments(struct sock *sk,
+ static int sctp_setsockopt_events(struct sock *sk, char __user *optval,
+ 					int optlen)
+ {
+-	if (optlen != sizeof(struct sctp_event_subscribe))
++	if (optlen > sizeof(struct sctp_event_subscribe))
+ 		return -EINVAL;
+ 	if (copy_from_user(&sctp_sk(sk)->subscribe, optval, optlen))
+ 		return -EFAULT;
+@@ -5070,6 +5070,7 @@ static int sctp_getsockopt_peer_auth_chunks(struct sock *sk, int len,
+ 	struct sctp_authchunks val;
+ 	struct sctp_association *asoc;
+ 	struct sctp_chunks_param *ch;
++	u32    num_chunks;
+ 	char __user *to;
+ 
+ 	if (len <= sizeof(struct sctp_authchunks))
+@@ -5086,12 +5087,15 @@ static int sctp_getsockopt_peer_auth_chunks(struct sock *sk, int len,
+ 	ch = asoc->peer.peer_chunks;
+ 
+ 	/* See if the user provided enough room for all the data */
+-	if (len < ntohs(ch->param_hdr.length))
++	num_chunks = ntohs(ch->param_hdr.length) - sizeof(sctp_paramhdr_t);
++	if (len < num_chunks)
+ 		return -EINVAL;
+ 
+-	len = ntohs(ch->param_hdr.length);
++	len = num_chunks;
+ 	if (put_user(len, optlen))
+ 		return -EFAULT;
++	if (put_user(num_chunks, &p->gauth_number_of_chunks))
++		return -EFAULT;
+ 	if (copy_to_user(to, ch->chunks, len))
+ 		return -EFAULT;
+ 
+@@ -5105,6 +5109,7 @@ static int sctp_getsockopt_local_auth_chunks(struct sock *sk, int len,
+ 	struct sctp_authchunks val;
+ 	struct sctp_association *asoc;
+ 	struct sctp_chunks_param *ch;
++	u32    num_chunks;
+ 	char __user *to;
+ 
+ 	if (len <= sizeof(struct sctp_authchunks))
+@@ -5123,12 +5128,15 @@ static int sctp_getsockopt_local_auth_chunks(struct sock *sk, int len,
+ 	else
+ 		ch = sctp_sk(sk)->ep->auth_chunk_list;
+ 
+-	if (len < ntohs(ch->param_hdr.length))
++	num_chunks = ntohs(ch->param_hdr.length) - sizeof(sctp_paramhdr_t);
++	if (len < num_chunks)
+ 		return -EINVAL;
+ 
+-	len = ntohs(ch->param_hdr.length);
++	len = num_chunks;
+ 	if (put_user(len, optlen))
+ 		return -EFAULT;
++	if (put_user(num_chunks, &p->gauth_number_of_chunks))
++		return -EFAULT;
+ 	if (copy_to_user(to, ch->chunks, len))
+ 		return -EFAULT;
+ 
+diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c
+index e27b11f..b43f1f1 100644
+--- a/net/sctp/ulpevent.c
++++ b/net/sctp/ulpevent.c
+@@ -206,7 +206,7 @@ struct sctp_ulpevent  *sctp_ulpevent_make_assoc_change(
+ 	 * This field is the total length of the notification data, including
+ 	 * the notification header.
+ 	 */
+-	sac->sac_length = sizeof(struct sctp_assoc_change);
++	sac->sac_length = skb->len;
+ 
+ 	/* Socket Extensions for SCTP
+ 	 * 5.3.1.1 SCTP_ASSOC_CHANGE
+diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
+index 636c8e0..b5f2786 100644
+--- a/net/sunrpc/cache.c
++++ b/net/sunrpc/cache.c
+@@ -316,31 +316,29 @@ static int create_cache_proc_entries(struct cache_detail *cd)
+ 	cd->proc_ent->owner = cd->owner;
+ 	cd->channel_ent = cd->content_ent = NULL;
+ 
+-	p = create_proc_entry("flush", S_IFREG|S_IRUSR|S_IWUSR, cd->proc_ent);
++	p = proc_create("flush", S_IFREG|S_IRUSR|S_IWUSR,
++			cd->proc_ent, &cache_flush_operations);
+ 	cd->flush_ent = p;
+ 	if (p == NULL)
+ 		goto out_nomem;
+-	p->proc_fops = &cache_flush_operations;
+ 	p->owner = cd->owner;
+ 	p->data = cd;
+ 
+ 	if (cd->cache_request || cd->cache_parse) {
+-		p = create_proc_entry("channel", S_IFREG|S_IRUSR|S_IWUSR,
+-				      cd->proc_ent);
++		p = proc_create("channel", S_IFREG|S_IRUSR|S_IWUSR,
++				cd->proc_ent, &cache_file_operations);
+ 		cd->channel_ent = p;
+ 		if (p == NULL)
+ 			goto out_nomem;
+-		p->proc_fops = &cache_file_operations;
+ 		p->owner = cd->owner;
+ 		p->data = cd;
+ 	}
+ 	if (cd->cache_show) {
+-		p = create_proc_entry("content", S_IFREG|S_IRUSR|S_IWUSR,
+-				      cd->proc_ent);
++		p = proc_create("content", S_IFREG|S_IRUSR|S_IWUSR,
++				cd->proc_ent, &content_file_operations);
+ 		cd->content_ent = p;
+ 		if (p == NULL)
+ 			goto out_nomem;
+-		p->proc_fops = &content_file_operations;
+ 		p->owner = cd->owner;
+ 		p->data = cd;
+ 	}
+diff --git a/net/sunrpc/stats.c b/net/sunrpc/stats.c
+index 5a16875..c6061a4 100644
+--- a/net/sunrpc/stats.c
++++ b/net/sunrpc/stats.c
+@@ -229,9 +229,8 @@ do_register(const char *name, void *data, const struct file_operations *fops)
+ 	rpc_proc_init();
+ 	dprintk("RPC:       registering /proc/net/rpc/%s\n", name);
+ 
+-	ent = create_proc_entry(name, 0, proc_net_rpc);
++	ent = proc_create(name, 0, proc_net_rpc, fops);
+ 	if (ent) {
+-		ent->proc_fops = fops;
+ 		ent->data = data;
+ 	}
+ 	return ent;
+diff --git a/net/tipc/cluster.c b/net/tipc/cluster.c
+index 95b3739..4bb3404 100644
+--- a/net/tipc/cluster.c
++++ b/net/tipc/cluster.c
+@@ -142,7 +142,7 @@ void tipc_cltr_attach_node(struct cluster *c_ptr, struct node *n_ptr)
+ 		max_n_num = tipc_highest_allowed_slave;
+ 	assert(n_num > 0);
+ 	assert(n_num <= max_n_num);
+-	assert(c_ptr->nodes[n_num] == 0);
++	assert(c_ptr->nodes[n_num] == NULL);
+ 	c_ptr->nodes[n_num] = n_ptr;
+ 	if (n_num > c_ptr->highest_node)
+ 		c_ptr->highest_node = n_num;
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 1b17fec..cefa998 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -3251,7 +3251,7 @@ static void link_print(struct link *l_ptr, struct print_buf *buf,
+ 		if ((mod(msg_seqno(buf_msg(l_ptr->last_out)) -
+ 			 msg_seqno(buf_msg(l_ptr->first_out)))
+ 		     != (l_ptr->out_queue_size - 1))
+-		    || (l_ptr->last_out->next != 0)) {
++		    || (l_ptr->last_out->next != NULL)) {
+ 			tipc_printf(buf, "\nSend queue inconsistency\n");
+ 			tipc_printf(buf, "first_out= %x ", l_ptr->first_out);
+ 			tipc_printf(buf, "next_out= %x ", l_ptr->next_out);
+diff --git a/net/tipc/ref.c b/net/tipc/ref.c
+index 6704a58..c38744c 100644
+--- a/net/tipc/ref.c
++++ b/net/tipc/ref.c
+@@ -148,7 +148,7 @@ u32 tipc_ref_acquire(void *object, spinlock_t **lock)
+ 		reference = (next_plus_upper & ~index_mask) + index;
+ 		entry->data.reference = reference;
+ 		entry->object = object;
+-		if (lock != 0)
++		if (lock != NULL)
+ 			*lock = &entry->lock;
+ 		spin_unlock_bh(&entry->lock);
+ 	}
+diff --git a/net/tipc/zone.c b/net/tipc/zone.c
+index 114e173..3506f85 100644
+--- a/net/tipc/zone.c
++++ b/net/tipc/zone.c
+@@ -82,7 +82,7 @@ void tipc_zone_attach_cluster(struct _zone *z_ptr, struct cluster *c_ptr)
+ 
+ 	assert(c_ptr->addr);
+ 	assert(c_num <= tipc_max_clusters);
+-	assert(z_ptr->clusters[c_num] == 0);
++	assert(z_ptr->clusters[c_num] == NULL);
+ 	z_ptr->clusters[c_num] = c_ptr;
+ }
+ 
+diff --git a/net/wanrouter/wanproc.c b/net/wanrouter/wanproc.c
+index f2e54c3..5bebe40 100644
+--- a/net/wanrouter/wanproc.c
++++ b/net/wanrouter/wanproc.c
+@@ -292,14 +292,12 @@ int __init wanrouter_proc_init(void)
+ 	if (!proc_router)
+ 		goto fail;
+ 
+-	p = create_proc_entry("config", S_IRUGO, proc_router);
++	p = proc_create("config", S_IRUGO, proc_router, &config_fops);
+ 	if (!p)
+ 		goto fail_config;
+-	p->proc_fops = &config_fops;
+-	p = create_proc_entry("status", S_IRUGO, proc_router);
++	p = proc_create("status", S_IRUGO, proc_router, &status_fops);
+ 	if (!p)
+ 		goto fail_stat;
+-	p->proc_fops = &status_fops;
+ 	return 0;
+ fail_stat:
+ 	remove_proc_entry("config", proc_router);
+@@ -329,10 +327,10 @@ int wanrouter_proc_add(struct wan_device* wandev)
+ 	if (wandev->magic != ROUTER_MAGIC)
+ 		return -EINVAL;
+ 
+-	wandev->dent = create_proc_entry(wandev->name, S_IRUGO, proc_router);
++	wandev->dent = proc_create(wandev->name, S_IRUGO,
++				   proc_router, &wandev_fops);
+ 	if (!wandev->dent)
+ 		return -ENOMEM;
+-	wandev->dent->proc_fops	= &wandev_fops;
+ 	wandev->dent->data	= wandev;
+ 	return 0;
+ }
+diff --git a/net/x25/x25_proc.c b/net/x25/x25_proc.c
+index 3f52b09..1afa44d 100644
+--- a/net/x25/x25_proc.c
++++ b/net/x25/x25_proc.c
+@@ -312,20 +312,18 @@ int __init x25_proc_init(void)
+ 	if (!x25_proc_dir)
+ 		goto out;
+ 
+-	p = create_proc_entry("route", S_IRUGO, x25_proc_dir);
++	p = proc_create("route", S_IRUGO, x25_proc_dir, &x25_seq_route_fops);
+ 	if (!p)
+ 		goto out_route;
+-	p->proc_fops = &x25_seq_route_fops;
+ 
+-	p = create_proc_entry("socket", S_IRUGO, x25_proc_dir);
++	p = proc_create("socket", S_IRUGO, x25_proc_dir, &x25_seq_socket_fops);
+ 	if (!p)
+ 		goto out_socket;
+-	p->proc_fops = &x25_seq_socket_fops;
+ 
+-	p = create_proc_entry("forward", S_IRUGO, x25_proc_dir);
++	p = proc_create("forward", S_IRUGO, x25_proc_dir,
++			&x25_seq_forward_fops);
+ 	if (!p)
+ 		goto out_forward;
+-	p->proc_fops = &x25_seq_forward_fops;
+ 	rc = 0;
+ 
+ out:
+diff --git a/sound/isa/sb/sb8_main.c b/sound/isa/sb/sb8_main.c
+index 6304c3a..fe03bb8 100644
+--- a/sound/isa/sb/sb8_main.c
++++ b/sound/isa/sb/sb8_main.c
+@@ -277,7 +277,7 @@ static int snd_sb8_capture_prepare(struct snd_pcm_substream *substream)
+ 	} else {
+ 		snd_sbdsp_command(chip, 256 - runtime->rate_den);
+ 	}
+-	if (chip->capture_format != SB_DSP_OUTPUT) {
++	if (chip->capture_format != SB_DSP_INPUT) {
+ 		count--;
+ 		snd_sbdsp_command(chip, SB_DSP_BLOCK_SIZE);
+ 		snd_sbdsp_command(chip, count & 0xff);
+diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c
+index 19f0884..c864928 100644
+--- a/sound/pci/hda/patch_analog.c
++++ b/sound/pci/hda/patch_analog.c
+@@ -1778,9 +1778,9 @@ static hda_nid_t ad1988_capsrc_nids[3] = {
+ static struct hda_input_mux ad1988_6stack_capture_source = {
+ 	.num_items = 5,
+ 	.items = {
+-		{ "Front Mic", 0x0 },
+-		{ "Line", 0x1 },
+-		{ "Mic", 0x4 },
++		{ "Front Mic", 0x1 },	/* port-B */
++		{ "Line", 0x2 },	/* port-C */
++		{ "Mic", 0x4 },		/* port-E */
+ 		{ "CD", 0x5 },
+ 		{ "Mix", 0x9 },
+ 	},
+@@ -1789,7 +1789,7 @@ static struct hda_input_mux ad1988_6stack_capture_source = {
+ static struct hda_input_mux ad1988_laptop_capture_source = {
+ 	.num_items = 3,
+ 	.items = {
+-		{ "Mic/Line", 0x0 },
++		{ "Mic/Line", 0x1 },	/* port-B */
+ 		{ "CD", 0x5 },
+ 		{ "Mix", 0x9 },
+ 	},
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index f7cd3a8..7206b30 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -1230,6 +1230,11 @@ static struct hda_verb cxt5047_toshiba_init_verbs[] = {
+ static struct hda_verb cxt5047_hp_init_verbs[] = {
+ 	/* pin sensing on HP jack */
+ 	{0x13, AC_VERB_SET_UNSOLICITED_ENABLE, AC_USRSP_EN | CONEXANT_HP_EVENT},
++	/* 0x13 is actually shared by both HP and speaker;
++	 * setting the connection to 0 (=0x19) makes the master volume control
++	 * working mysteriouslly...
++	 */
++	{0x13, AC_VERB_SET_CONNECT_SEL, 0x0},
+ 	/* Record selector: Ext Mic */
+ 	{0x12, AC_VERB_SET_CONNECT_SEL,0x03},
+ 	{0x19, AC_VERB_SET_AMP_GAIN_MUTE,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 777f8c0..33282f9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3973,8 +3973,8 @@ static struct snd_kcontrol_new alc260_fujitsu_mixer[] = {
+ 	ALC_PIN_MODE("Mic/Line Jack Mode", 0x12, ALC_PIN_DIR_IN),
+ 	HDA_CODEC_VOLUME("Beep Playback Volume", 0x07, 0x05, HDA_INPUT),
+ 	HDA_CODEC_MUTE("Beep Playback Switch", 0x07, 0x05, HDA_INPUT),
+-	HDA_CODEC_VOLUME("Internal Speaker Playback Volume", 0x09, 0x0, HDA_OUTPUT),
+-	HDA_BIND_MUTE("Internal Speaker Playback Switch", 0x09, 2, HDA_INPUT),
++	HDA_CODEC_VOLUME("Speaker Playback Volume", 0x09, 0x0, HDA_OUTPUT),
++	HDA_BIND_MUTE("Speaker Playback Switch", 0x09, 2, HDA_INPUT),
+ 	{ } /* end */
+ };
+ 
+@@ -4005,9 +4005,9 @@ static struct snd_kcontrol_new alc260_acer_mixer[] = {
+ 	HDA_CODEC_VOLUME("Master Playback Volume", 0x08, 0x0, HDA_OUTPUT),
+ 	HDA_BIND_MUTE("Master Playback Switch", 0x08, 2, HDA_INPUT),
+ 	ALC_PIN_MODE("Headphone Jack Mode", 0x0f, ALC_PIN_DIR_INOUT),
+-	HDA_CODEC_VOLUME_MONO("Mono Speaker Playback Volume", 0x0a, 1, 0x0,
++	HDA_CODEC_VOLUME_MONO("Speaker Playback Volume", 0x0a, 1, 0x0,
+ 			      HDA_OUTPUT),
+-	HDA_BIND_MUTE_MONO("Mono Speaker Playback Switch", 0x0a, 1, 2,
++	HDA_BIND_MUTE_MONO("Speaker Playback Switch", 0x0a, 1, 2,
+ 			   HDA_INPUT),
+ 	HDA_CODEC_VOLUME("CD Playback Volume", 0x07, 0x04, HDA_INPUT),
+ 	HDA_CODEC_MUTE("CD Playback Switch", 0x07, 0x04, HDA_INPUT),
+@@ -7639,6 +7639,7 @@ static struct snd_pci_quirk alc883_cfg_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3bfc, "Lenovo NB0763", ALC883_LENOVO_NB0763),
+ 	SND_PCI_QUIRK(0x17aa, 0x3bfd, "Lenovo NB0763", ALC883_LENOVO_NB0763),
+ 	SND_PCI_QUIRK(0x17c0, 0x4071, "MEDION MD2", ALC883_MEDION_MD2),
++	SND_PCI_QUIRK(0x17f2, 0x5000, "Albatron KI690-AM2", ALC883_6ST_DIG),
+ 	SND_PCI_QUIRK(0x1991, 0x5625, "Haier W66", ALC883_HAIER_W66),
+ 	SND_PCI_QUIRK(0x8086, 0xd601, "D102GGC", ALC883_3ST_6ch),
+ 	{}
+@@ -8102,7 +8103,7 @@ static struct snd_kcontrol_new alc262_base_mixer[] = {
+ 	HDA_CODEC_MUTE("Front Mic Playback Switch", 0x0b, 0x01, HDA_INPUT),
+ 	HDA_CODEC_VOLUME("Front Mic Boost", 0x19, 0, HDA_INPUT),
+ 	/* HDA_CODEC_VOLUME("PC Beep Playback Volume", 0x0b, 0x05, HDA_INPUT),
+-	   HDA_CODEC_MUTE("PC Beelp Playback Switch", 0x0b, 0x05, HDA_INPUT), */
++	   HDA_CODEC_MUTE("PC Beep Playback Switch", 0x0b, 0x05, HDA_INPUT), */
+ 	HDA_CODEC_VOLUME("Headphone Playback Volume", 0x0D, 0x0, HDA_OUTPUT),
+ 	HDA_CODEC_MUTE("Headphone Playback Switch", 0x15, 0x0, HDA_OUTPUT),
+ 	HDA_CODEC_VOLUME_MONO("Mono Playback Volume", 0x0e, 2, 0x0, HDA_OUTPUT),
+@@ -8124,7 +8125,7 @@ static struct snd_kcontrol_new alc262_hippo1_mixer[] = {
+ 	HDA_CODEC_MUTE("Front Mic Playback Switch", 0x0b, 0x01, HDA_INPUT),
+ 	HDA_CODEC_VOLUME("Front Mic Boost", 0x19, 0, HDA_INPUT),
+ 	/* HDA_CODEC_VOLUME("PC Beep Playback Volume", 0x0b, 0x05, HDA_INPUT),
+-	   HDA_CODEC_MUTE("PC Beelp Playback Switch", 0x0b, 0x05, HDA_INPUT), */
++	   HDA_CODEC_MUTE("PC Beep Playback Switch", 0x0b, 0x05, HDA_INPUT), */
+ 	/*HDA_CODEC_VOLUME("Headphone Playback Volume", 0x0D, 0x0, HDA_OUTPUT),*/
+ 	HDA_CODEC_MUTE("Headphone Playback Switch", 0x1b, 0x0, HDA_OUTPUT),
+ 	{ } /* end */
+@@ -9238,6 +9239,7 @@ static struct snd_pci_quirk alc262_cfg_tbl[] = {
+ 	SND_PCI_QUIRK(0x104d, 0x900e, "Sony ASSAMD", ALC262_SONY_ASSAMD),
+ 	SND_PCI_QUIRK(0x104d, 0x9015, "Sony 0x9015", ALC262_SONY_ASSAMD),
+ 	SND_PCI_QUIRK(0x10cf, 0x1397, "Fujitsu", ALC262_FUJITSU),
++	SND_PCI_QUIRK(0x10cf, 0x142d, "Fujitsu Lifebook E8410", ALC262_FUJITSU),
+ 	SND_PCI_QUIRK(0x144d, 0xc032, "Samsung Q1 Ultra", ALC262_ULTRA),
+ 	SND_PCI_QUIRK(0x17ff, 0x0560, "Benq ED8", ALC262_BENQ_ED8),
+ 	SND_PCI_QUIRK(0x17ff, 0x058d, "Benq T31-16", ALC262_BENQ_T31),
+@@ -12993,8 +12995,8 @@ static struct snd_kcontrol_new alc662_lenovo_101e_mixer[] = {
+ static struct snd_kcontrol_new alc662_eeepc_p701_mixer[] = {
+ 	HDA_CODEC_MUTE("Speaker Playback Switch", 0x14, 0x0, HDA_OUTPUT),
+ 
+-	HDA_CODEC_VOLUME("LineOut Playback Volume", 0x02, 0x0, HDA_OUTPUT),
+-	HDA_CODEC_MUTE("LineOut Playback Switch", 0x1b, 0x0, HDA_OUTPUT),
++	HDA_CODEC_VOLUME("Line-Out Playback Volume", 0x02, 0x0, HDA_OUTPUT),
++	HDA_CODEC_MUTE("Line-Out Playback Switch", 0x1b, 0x0, HDA_OUTPUT),
+ 
+ 	HDA_CODEC_VOLUME("e-Mic Boost", 0x18, 0, HDA_INPUT),
+ 	HDA_CODEC_VOLUME("e-Mic Playback Volume", 0x0b, 0x0, HDA_INPUT),
+@@ -13007,8 +13009,8 @@ static struct snd_kcontrol_new alc662_eeepc_p701_mixer[] = {
+ };
+ 
+ static struct snd_kcontrol_new alc662_eeepc_ep20_mixer[] = {
+-	HDA_CODEC_VOLUME("LineOut Playback Volume", 0x02, 0x0, HDA_OUTPUT),
+-	HDA_CODEC_MUTE("LineOut Playback Switch", 0x14, 0x0, HDA_OUTPUT),
++	HDA_CODEC_VOLUME("Line-Out Playback Volume", 0x02, 0x0, HDA_OUTPUT),
++	HDA_CODEC_MUTE("Line-Out Playback Switch", 0x14, 0x0, HDA_OUTPUT),
+ 	HDA_CODEC_VOLUME("Surround Playback Volume", 0x03, 0x0, HDA_OUTPUT),
+ 	HDA_BIND_MUTE("Surround Playback Switch", 0x03, 2, HDA_INPUT),
+ 	HDA_CODEC_VOLUME_MONO("Center Playback Volume", 0x04, 1, 0x0, HDA_OUTPUT),
+diff --git a/sound/pci/ice1712/phase.c b/sound/pci/ice1712/phase.c
+index 9ab4a9f..5a158b7 100644
+--- a/sound/pci/ice1712/phase.c
++++ b/sound/pci/ice1712/phase.c
+@@ -51,7 +51,7 @@
+ struct phase28_spec {
+ 	unsigned short master[2];
+ 	unsigned short vol[8];
+-} phase28;
++};
+ 
+ /* WM8770 registers */
+ #define WM_DAC_ATTEN		0x00	/* DAC1-8 analog attenuation */
+diff --git a/sound/pci/ice1712/revo.c b/sound/pci/ice1712/revo.c
+index ddd5fc8..301bf92 100644
+--- a/sound/pci/ice1712/revo.c
++++ b/sound/pci/ice1712/revo.c
+@@ -36,7 +36,7 @@
+ struct revo51_spec {
+ 	struct snd_i2c_device *dev;
+ 	struct snd_pt2258 *pt2258;
+-} revo51;
++};
+ 
+ static void revo_i2s_mclk_changed(struct snd_ice1712 *ice)
+ {
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index 061072c..c52abd0 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -1708,6 +1708,12 @@ static struct ac97_pcm ac97_pcm_defs[] __devinitdata = {
+ };
+ 
+ static struct ac97_quirk ac97_quirks[] __devinitdata = {
++        {
++		.subvendor = 0x0e11,
++		.subdevice = 0x000e,
++		.name = "Compaq Deskpro EN",	/* AD1885 */
++		.type = AC97_TUNE_HP_ONLY
++        },
+ 	{
+ 		.subvendor = 0x0e11,
+ 		.subdevice = 0x008a,
+@@ -1740,6 +1746,12 @@ static struct ac97_quirk ac97_quirks[] __devinitdata = {
+ 	},
+ 	{
+ 		.subvendor = 0x1025,
++		.subdevice = 0x0082,
++		.name = "Acer Travelmate 2310",
++		.type = AC97_TUNE_HP_ONLY
++	},
++	{
++		.subvendor = 0x1025,
+ 		.subdevice = 0x0083,
+ 		.name = "Acer Aspire 3003LCi",
+ 		.type = AC97_TUNE_HP_ONLY
+diff --git a/sound/pci/oxygen/hifier.c b/sound/pci/oxygen/hifier.c
+index 3ea1f05..666f69a 100644
+--- a/sound/pci/oxygen/hifier.c
++++ b/sound/pci/oxygen/hifier.c
+@@ -150,6 +150,7 @@ static const struct oxygen_model model_hifier = {
+ 	.shortname = "C-Media CMI8787",
+ 	.longname = "C-Media Oxygen HD Audio",
+ 	.chip = "CMI8788",
++	.owner = THIS_MODULE,
+ 	.init = hifier_init,
+ 	.control_filter = hifier_control_filter,
+ 	.mixer_init = hifier_mixer_init,
+diff --git a/sound/pci/oxygen/virtuoso.c b/sound/pci/oxygen/virtuoso.c
+index 40e92f5..d163397 100644
+--- a/sound/pci/oxygen/virtuoso.c
++++ b/sound/pci/oxygen/virtuoso.c
+@@ -389,6 +389,7 @@ static const struct oxygen_model model_xonar = {
+ 	.shortname = "Asus AV200",
+ 	.longname = "Asus Virtuoso 200",
+ 	.chip = "AV200",
++	.owner = THIS_MODULE,
+ 	.init = xonar_init,
+ 	.control_filter = xonar_control_filter,
+ 	.mixer_init = xonar_mixer_init,
+diff --git a/sound/soc/codecs/tlv320aic3x.c b/sound/soc/codecs/tlv320aic3x.c
+index 710e028..569ecac 100644
+--- a/sound/soc/codecs/tlv320aic3x.c
++++ b/sound/soc/codecs/tlv320aic3x.c
+@@ -681,8 +681,8 @@ static const struct aic3x_rate_divs aic3x_divs[] = {
+ 	{22579200, 48000, 48000, 0x0, 8, 7075},
+ 	{33868800, 48000, 48000, 0x0, 5, 8049},
+ 	/* 64k */
+-	{22579200, 96000, 96000, 0x1, 8, 7075},
+-	{33868800, 96000, 96000, 0x1, 5, 8049},
++	{22579200, 64000, 96000, 0x1, 8, 7075},
++	{33868800, 64000, 96000, 0x1, 5, 8049},
+ 	/* 88.2k */
+ 	{22579200, 88200, 88200, 0x0, 8, 0},
+ 	{33868800, 88200, 88200, 0x0, 5, 3333},
+diff --git a/sound/soc/codecs/wm9712.c b/sound/soc/codecs/wm9712.c
+index 590baea..524f745 100644
+--- a/sound/soc/codecs/wm9712.c
++++ b/sound/soc/codecs/wm9712.c
+@@ -176,7 +176,8 @@ static int wm9712_add_controls(struct snd_soc_codec *codec)
+  * the codec only has a single control that is shared by both channels.
+  * This makes it impossible to determine the audio path.
+  */
+-static int mixer_event (struct snd_soc_dapm_widget *w, int event)
++static int mixer_event(struct snd_soc_dapm_widget *w,
++	struct snd_kcontrol *k, int event)
+ {
+ 	u16 l, r, beep, line, phone, mic, pcm, aux;
+ 
+diff --git a/sound/soc/pxa/corgi.c b/sound/soc/pxa/corgi.c
+index 3f34e53..1a70a6a 100644
+--- a/sound/soc/pxa/corgi.c
++++ b/sound/soc/pxa/corgi.c
+@@ -215,7 +215,8 @@ static int corgi_set_spk(struct snd_kcontrol *kcontrol,
+ 	return 1;
+ }
+ 
+-static int corgi_amp_event(struct snd_soc_dapm_widget *w, int event)
++static int corgi_amp_event(struct snd_soc_dapm_widget *w,
++	struct snd_kcontrol *k, int event)
+ {
+ 	if (SND_SOC_DAPM_EVENT_ON(event))
+ 		set_scoop_gpio(&corgiscoop_device.dev, CORGI_SCP_APM_ON);
+@@ -225,7 +226,8 @@ static int corgi_amp_event(struct snd_soc_dapm_widget *w, int event)
+ 	return 0;
+ }
+ 
+-static int corgi_mic_event(struct snd_soc_dapm_widget *w, int event)
++static int corgi_mic_event(struct snd_soc_dapm_widget *w,
++	struct snd_kcontrol *k, int event)
+ {
+ 	if (SND_SOC_DAPM_EVENT_ON(event))
+ 		set_scoop_gpio(&corgiscoop_device.dev, CORGI_SCP_MIC_BIAS);
+diff --git a/sound/soc/pxa/poodle.c b/sound/soc/pxa/poodle.c
+index 5ae59bd..4fbf8bb 100644
+--- a/sound/soc/pxa/poodle.c
++++ b/sound/soc/pxa/poodle.c
+@@ -196,7 +196,8 @@ static int poodle_set_spk(struct snd_kcontrol *kcontrol,
+ 	return 1;
+ }
+ 
+-static int poodle_amp_event(struct snd_soc_dapm_widget *w, int event)
++static int poodle_amp_event(struct snd_soc_dapm_widget *w,
++	struct snd_kcontrol *k, int event)
+ {
+ 	if (SND_SOC_DAPM_EVENT_ON(event))
+ 		locomo_gpio_write(&poodle_locomo_device.dev,
+diff --git a/sound/soc/pxa/spitz.c b/sound/soc/pxa/spitz.c
+index d56709e..ecca390 100644
+--- a/sound/soc/pxa/spitz.c
++++ b/sound/soc/pxa/spitz.c
+@@ -215,7 +215,8 @@ static int spitz_set_spk(struct snd_kcontrol *kcontrol,
+ 	return 1;
+ }
+ 
+-static int spitz_mic_bias(struct snd_soc_dapm_widget *w, int event)
++static int spitz_mic_bias(struct snd_soc_dapm_widget *w,
++	struct snd_kcontrol *k, int event)
+ {
+ 	if (machine_is_borzoi() || machine_is_spitz()) {
+ 		if (SND_SOC_DAPM_EVENT_ON(event))
+diff --git a/sound/soc/pxa/tosa.c b/sound/soc/pxa/tosa.c
+index e4d40b5..7346d7e 100644
+--- a/sound/soc/pxa/tosa.c
++++ b/sound/soc/pxa/tosa.c
+@@ -135,7 +135,8 @@ static int tosa_set_spk(struct snd_kcontrol *kcontrol,
+ }
+ 
+ /* tosa dapm event handlers */
+-static int tosa_hp_event(struct snd_soc_dapm_widget *w, int event)
++static int tosa_hp_event(struct snd_soc_dapm_widget *w,
++	struct snd_kcontrol *k, int event)
+ {
+ 	if (SND_SOC_DAPM_EVENT_ON(event))
+ 		set_tc6393_gpio(&tc6393_device.dev,TOSA_TC6393_L_MUTE);
+diff --git a/sound/usb/usbaudio.c b/sound/usb/usbaudio.c
+index 8fa9356..675672f 100644
+--- a/sound/usb/usbaudio.c
++++ b/sound/usb/usbaudio.c
+@@ -479,6 +479,33 @@ static int retire_playback_sync_urb_hs(struct snd_usb_substream *subs,
+ 	return 0;
+ }
+ 
++/*
++ * process after E-Mu 0202/0404 high speed playback sync complete
++ *
++ * These devices return the number of samples per packet instead of the number
++ * of samples per microframe.
++ */
++static int retire_playback_sync_urb_hs_emu(struct snd_usb_substream *subs,
++					   struct snd_pcm_runtime *runtime,
++					   struct urb *urb)
++{
++	unsigned int f;
++	unsigned long flags;
++
++	if (urb->iso_frame_desc[0].status == 0 &&
++	    urb->iso_frame_desc[0].actual_length == 4) {
++		f = combine_quad((u8*)urb->transfer_buffer) & 0x0fffffff;
++		f >>= subs->datainterval;
++		if (f >= subs->freqn - subs->freqn / 8 && f <= subs->freqmax) {
++			spin_lock_irqsave(&subs->lock, flags);
++			subs->freqm = f;
++			spin_unlock_irqrestore(&subs->lock, flags);
++		}
++	}
++
++	return 0;
++}
++
+ /* determine the number of frames in the next packet */
+ static int snd_usb_audio_next_packet_size(struct snd_usb_substream *subs)
+ {
+@@ -2219,10 +2246,17 @@ static void init_substream(struct snd_usb_stream *as, int stream, struct audiofo
+ 	subs->stream = as;
+ 	subs->direction = stream;
+ 	subs->dev = as->chip->dev;
+-	if (snd_usb_get_speed(subs->dev) == USB_SPEED_FULL)
++	if (snd_usb_get_speed(subs->dev) == USB_SPEED_FULL) {
+ 		subs->ops = audio_urb_ops[stream];
+-	else
++	} else {
+ 		subs->ops = audio_urb_ops_high_speed[stream];
++		switch (as->chip->usb_id) {
++		case USB_ID(0x041e, 0x3f02): /* E-Mu 0202 USB */
++		case USB_ID(0x041e, 0x3f04): /* E-Mu 0404 USB */
++			subs->ops.retire_sync = retire_playback_sync_urb_hs_emu;
++			break;
++		}
++	}
+ 	snd_pcm_set_ops(as->pcm, stream,
+ 			stream == SNDRV_PCM_STREAM_PLAYBACK ?
+ 			&snd_usb_playback_ops : &snd_usb_capture_ops);

Modified: dists/trunk/linux-2.6/debian/patches/series/1~experimental.1
==============================================================================
--- dists/trunk/linux-2.6/debian/patches/series/1~experimental.1	(original)
+++ dists/trunk/linux-2.6/debian/patches/series/1~experimental.1	Mon Mar  3 13:24:32 2008
@@ -1,3 +1,4 @@
++ bugfix/all/patch-2.6.25-rc3-git4
 + debian/version.patch
 + debian/kernelvariables.patch
 + debian/doc-build-parallel.patch
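
[Editorial note, not part of the patch above: most of the net/ hunks in
patch-2.6.25-rc3-git4 apply the same create_proc_entry() -> proc_create()
conversion.  The sketch below shows the two styles side by side; the names
"example-old"/"example-new" and the empty example_fops are hypothetical
placeholders, a real caller would supply a populated file_operations,
typically a seq_file implementation.]

#include <linux/init.h>
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/stat.h>
#include <net/net_namespace.h>

/* placeholder for illustration; real code fills in .open/.read/.release */
static const struct file_operations example_fops;

static int __init example_proc_init(void)
{
	struct proc_dir_entry *p;

	/* old style: the entry is briefly visible with ->proc_fops == NULL */
	p = create_proc_entry("example-old", S_IRUGO, init_net.proc_net);
	if (!p)
		return -ENOMEM;
	p->proc_fops = &example_fops;

	/* new style: the file_operations are attached at creation time */
	p = proc_create("example-new", S_IRUGO, init_net.proc_net,
			&example_fops);
	if (!p) {
		remove_proc_entry("example-old", init_net.proc_net);
		return -ENOMEM;
	}
	return 0;
}
module_init(example_proc_init);
MODULE_LICENSE("GPL");

[The point of the conversion is that proc_create() never exposes an entry
before its ->proc_fops pointer is set, which the two-step create_proc_entry()
pattern briefly did.]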


